On 13 September 2016 at 20:20, Robert Haas wrote:
> On Mon, Aug 29, 2016 at 4:08 AM, Kyotaro HORIGUCHI
> wrote:
> > [ new patches ]
>
> +/*
> + * We assume that few nodes are async-aware and async-unaware
> + * nodes cannot be reverse-dispatched from lower no
state recursion
issue by explicitly making sure the same plan state does not get
called again while it is already executing.
Thanks
-Amit Khandekar
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On 30 September 2017 at 01:26, Robert Haas wrote:
> On Fri, Sep 29, 2017 at 3:53 PM, Robert Haas wrote:
>> On Fri, Sep 22, 2017 at 1:57 AM, Amit Khandekar
>> wrote:
>>> The patch for the above change is :
>>> 0002-Prevent-a-redundant-ConvertRowtypeExpr-node.
On 30 September 2017 at 19:21, Amit Kapila wrote:
> On Wed, Sep 20, 2017 at 10:59 AM, Amit Khandekar
> wrote:
>> On 16 September 2017 at 10:42, Amit Kapila wrote:
>>>
>>> At a broader level, the idea is good, but I think it won't turn out
>>> exactl
On 6 October 2017 at 08:49, Amit Kapila wrote:
> On Thu, Oct 5, 2017 at 4:11 PM, Amit Khandekar wrote:
>>
>> Ok. How about removing pa_all_partial_subpaths altogether, and
>> instead of the below condition :
>>
>> /*
>> * If all the child rels have p
On 9 October 2017 at 16:03, Amit Kapila wrote:
> On Fri, Oct 6, 2017 at 12:03 PM, Amit Khandekar
> wrote:
>> On 6 October 2017 at 08:49, Amit Kapila wrote:
>>>
>>> Okay, but why not cheapest partial path?
>>
>> I gave some thought on this point.
hack. I'm not sure whether
this case ever arises currently, but the pending patch for update
tuple routing will cause it to arise.
Amit Khandekar
Discussion:
http://postgr.es/m/caj3gd9cazfppe7-wwubabpcq4_0subkipfd1+0r5_dkvnwo...@mail.gmail.com
Tom Lane wrote:
> Robert Haa
On 13 October 2017 at 00:29, Robert Haas wrote:
> On Wed, Oct 11, 2017 at 8:51 AM, Amit Khandekar
> wrote:
>> [ new patch ]
>
> + parallel_append
> + Waiting to choose the next subplan during Parallel Append
> plan
> + execution.
> +
but we are adding a new int[]
array that maps subplans to leaf partitions. Will get back with how it
looks finally.
Robert, Amit, I will get back with your other review comments.
--
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company
patch and check if the test passes? The patch
is to be applied on the main v22 patch. If the test passes, I will
include these changes (also for list_parted) in the upcoming v23
patch.
Thanks
-Amit Khandekar
regress_locale_changes.patch
Description: Binary data
On 9 November 2017 at 09:27, Thomas Munro wrote:
> On Wed, Nov 8, 2017 at 5:57 PM, Amit Khandekar wrote:
>> On 8 November 2017 at 07:55, Thomas Munro
>> wrote:
>>> On Tue, Nov 7, 2017 at 8:03 AM, Robert Haas wrote:
>>>> The changes to trigger.c stil
Thanks a lot Robert for the patch. I will have a look. Quickly tried
to test some aggregate queries with a partitioned pgbench_accounts
table, and it is crashing. Will get back with the fix, and any other
review comments.
Thanks
-Amit Khandekar
On 9 November 2017 at 23:44, Robert Haas wrote:
On 4 October 2016 at 02:30, Robert Haas wrote:
> On Wed, Sep 28, 2016 at 12:30 AM, Amit Khandekar
> wrote:
>> On 24 September 2016 at 06:39, Robert Haas wrote:
>>> Since Kyotaro Horiguchi found that my previous design had a
>>> system-wide performance impact due
On 19 October 2016 at 09:47, Dilip Kumar wrote:
> On Tue, Oct 18, 2016 at 1:45 AM, Andres Freund wrote:
>> I don't quite understand why the bitmap has to be parallel at all. As
>> far as I understand your approach as described here, the only thing that
>> needs to be shared are the iteration arra
On 16 February 2017 at 20:37, Robert Haas wrote:
> On Thu, Feb 16, 2017 at 1:34 AM, Amit Khandekar
> wrote:
>>> What I was thinking about is something like this:
>>>
>>> 1. First, take the maximum parallel_workers value from among all the
>>>
Ashutosh Bapat wrote:
> Do we have any performance measurements where we see that Goal B
> performs better than Goal A, in such a situation? Do we have any
> performance measurement comparing these two approaches in other
> situations. If implementation for Goal B beats that of Goal A always,
> we
On 16 February 2017 at 20:37, Robert Haas wrote:
> I'm not sure that it's going to be useful to make this logic very
> complicated. I think the most important thing is to give 1 worker to
> each plan before we give a second worker to any plan. In general I
> think it's sufficient to assign a wo
On 16 February 2017 at 20:53, Robert Haas wrote:
> On Thu, Feb 16, 2017 at 5:47 AM, Greg Stark wrote:
>> On 13 February 2017 at 12:01, Amit Khandekar wrote:
>>> There are a few things that can be discussed:
>>
>> If you do a normal update the new tuple
uring execution
phase only once for the very first time we find the update requires
row movement, then we can re-use the info.
One more thing I noticed is that, in case of update-returning, the
ExecDelete() will also generate result of RETURNING, which we are
discarding. So this is a waste. We shou
On 19 February 2017 at 14:59, Robert Haas wrote:
> On Fri, Feb 17, 2017 at 2:56 PM, Amit Khandekar
> wrote:
>> The log2(num_children)+1 formula which you proposed does not take into
>> account the number of workers for each of the subplans, that's why I
>> am a
t to -1 instead, so that no other workers
* run it.
*/
if (min_whichplan != PA_INVALID_PLAN)
{
    if (bms_is_member(min_whichplan,
                      ((Append *) state->ps.plan)->partial_subplans_set))
        padesc->pa_info[min_whichplan].pa_num_workers++;
    else
        padesc->pa_info[min_whichpl
ppose worker
on plan 2 finishes. It should not again take plan 2, even though
next_plan points to 2. It should take plan 3, or whichever is not
finished. Maybe a worker that finishes a plan should do this check
before directly going to the next_plan. But if this is turning out as
simple as the
k this, even if there are other workers
already executing it.
--
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company
After giving more thought to our discussions, I have used the
Bitmapset structure in AppendPath as against having two lists one for
partial and other for non-partial paths. Attached is the patch v6 that
has the required changes. So accumulate_append_subpath() now also
prepares the bitmapset co
On 10 March 2017 at 22:08, Robert Haas wrote:
> On Fri, Mar 10, 2017 at 12:17 AM, Amit Khandekar
> wrote:
>> I agree that the two-lists approach will consume less memory than
>> bitmapset. Keeping two lists will effectively have an extra pointer
>> field which will add u
he number of currently running workers,
> the max. number of workers to be expected, the new worker, the list of
> plans still todo, and then schedules that single worker to one of these
> plans by strategy X.
>
> That would make it easier to swap out X for Y and see how it fares,
>
On 12 March 2017 at 08:50, Robert Haas wrote:
>> However, Ashutosh's response made me think of something: one thing is
>> that we probably do want to group all of the non-partial plans at the
>> beginning of the Append so that they get workers first, and put the
>> partial plans afterward. That's
) as part of row-movement, they perform just
> the core part and leave the rest to be done by ExecUpdate() itself.
Yes, if we decide to execute only the core insert/delete operations
and skip the triggers, then there is a compelling reason to have
something like ExecDeleteInternal() and ExecI
On 17 March 2017 at 01:37, Robert Haas wrote:
> - You've added a GUC (which is good) but not documented it (which is
> bad) or added it to postgresql.conf.sample (also bad).
>
> - You've used a loop inside a spinlock-protected critical section,
> which is against project policy. Use an LWLock; de
On 16 March 2017 at 18:18, Ashutosh Bapat
wrote:
> + * Check if we are already finished plans from parallel append. This
> + * can happen if all the subplans are finished when this worker
> + * has not even started returning tuples.
> + */
> +if (node->as_pa
minimum of 10, 06, 08), the times remaining are :
4 0 0 2 3 1
After 2 units (minimum of 4, 2, 3), the times remaining are :
2 0 0 0 1 1
After 1 units (minimum of 2, 1, 1), the times remaining are :
1 0 0 0 0 0
After 1 units (minimum of 1, 0 , 0), the times remaining are :
0 0 0 0 0 0
Now add
On 13 June 2014 14:10, Abhijit Menon-Sen wrote:
> nbtxlog.c:btree_xlog_vacuum() contains the following comment:
>
> * XXX we don't actually need to read the block, we just need to
> * confirm it is unpinned. If we had a special call into the
> * buffer manager we could optimise thi
On 3 July 2014 16:59, Simon Riggs wrote:
>
> I think we should say this though
>
> LockBufHdr(buf);
> valid = ((buf->flags & BM_VALID) != 0);
> if (valid)
> PinBuffer_Locked(buf);
> else
> UnlockBufHdr(buf);
>
> since otherwise we would access the buffer flags without the spinlock
> and w
On 4 July 2014 19:11, Abhijit Menon-Sen wrote:
> Updated patch attached, thanks.
>
> Amit: what's your conclusion from the review?
>
Other than some minor comments as mentioned below, I don't have any more
issues; it looks all good.
XLogLockBlockRangeForCleanup() function header comments has th
On 21 June 2014 23:36, Kevin Grittner wrote:
> Kevin Grittner wrote:
> I didn't change the tuplestores to TID because it seemed to me that
> it would preclude using transition relations with FDW triggers, and
> it seemed bad not to support that. Does anyone see a way around
> that, or feel that
On 7 August 2014 19:49, Kevin Grittner wrote:
> Amit Khandekar wrote:
>> On 21 June 2014 23:36, Kevin Grittner wrote:
>>> Kevin Grittner wrote:
>>> I didn't change the tuplestores to TID because it seemed to me that
>>> it would preclude using transitio
On 12 August 2014 20:09, Kevin Grittner wrote:
> Amit Khandekar wrote:
>> On 7 August 2014 19:49, Kevin Grittner wrote:
>>> Amit Khandekar wrote:
>
>>>> I tried to google some SQLs that use REFERENCING clause with triggers.
>>>> It looks like in s
>> The execution level
>> itself was almost trivial; it's getting the tuplestore reference
>> through the parse analysis and planning phases that is painful for
>> me.
> I am not sure why you think we would need to refer to the tuplestore in
> the parse analysis and planner phases. It seems that we wo
On 15 August 2014 04:04, Kevin Grittner wrote:
> Amit Khandekar wrote:
>
>>>> The execution level itself was almost trivial; it's getting the
>>>> tuplestore reference through the parse analysis and planning
>>>> phases that is painful for me.
option.
And option 1 is also an approximation but we would like to have a
better approximation. So I wanted to clarify my queries regarding option
3.
--
Details about all the remaining changes in updated patch are below ...
On 20 March 2017 at 17:29, Robert Haas wrote:
> On Fri, Mar 17,
On 17 March 2017 at 16:07, Amit Khandekar wrote:
> On 6 March 2017 at 15:11, Amit Langote wrote:
>>
>>>> But that starts to sound less attractive when one realizes that
>>>> that will occur for every row that wants to move.
>>>
>>> If we man
On 23 March 2017 at 05:55, Robert Haas wrote:
> On Wed, Mar 22, 2017 at 4:49 AM, Amit Khandekar
> wrote:
>> Attached is the updated patch that handles the changes for all the
>> comments except the cost changes part. Details about the specific
>> changes are after
On 23 March 2017 at 16:26, Amit Khandekar wrote:
> On 23 March 2017 at 05:55, Robert Haas wrote:
>> On Wed, Mar 22, 2017 at 4:49 AM, Amit Khandekar
wrote:
>>> Attached is the updated patch that handles the changes for all the
>>> comments except the cost changes part.
On 24 March 2017 at 13:11, Rajkumar Raghuwanshi
wrote:
> I have given patch on latest pg sources (on commit
> 457a4448732881b5008f7a3bcca76fc299075ac3). configure and make all
> install ran successfully, but initdb failed with below error.
> FailedAssertion("!(LWLockTranchesAllocated >=
> LWTRANC
On 24 March 2017 at 00:38, Amit Khandekar wrote:
> On 23 March 2017 at 16:26, Amit Khandekar wrote:
>> On 23 March 2017 at 05:55, Robert Haas wrote:
>>>
>>> So in your example we do this:
>>>
>>> C[0] += 20;
>>> C[1] += 16;
>>> C[2
redundantly, which I am yet to handle.
On 23 March 2017 at 07:04, Amit Langote wrote:
> Hi Amit,
>
> Thanks for the updated patch.
>
> On 2017/03/23 3:09, Amit Khandekar wrote:
>> Attached is v2 patch which implements the above optimization.
>
> Would it be better to have at
On 25 March 2017 at 01:34, Amit Khandekar wrote:
> I am yet to handle all of your comments, but meanwhile, attached is
> an updated patch, that handles RETURNING.
>
> Earlier it was not working because ExecInsert() did not return any
> RETURNING clause. This is because the setup need
On 27 March 2017 at 13:05, Amit Khandekar wrote:
>> Also, there are a few places in the documentation mentioning that such
>> updates cause error,
>> which will need to be updated. Perhaps also add some explanatory notes
>> about the mechanism (delete+insert), trigge
For some reason, my reply got sent to only Amit Langote instead of
reply-to-all. Below is the mail reply. Thanks Amit Langote for
bringing this to my notice.
On 31 March 2017 at 16:54, Amit Khandekar wrote:
> On 31 March 2017 at 14:04, Amit Langote wrote:
>> On 2017/03/28 19:12, Amit
> parallel append case. Each worker there does the same kind of work, and
> if one of them is behind, it'll just do less. But correct sizing will
> be more important with parallel-append, because with non-partial
> subplans the work is absolutely *not* uniform.
>
> Greeti
On 4 April 2017 at 01:47, Andres Freund wrote:
>> +typedef struct ParallelAppendDescData
>> +{
>> +    LWLock      pa_lock;        /* mutual exclusion to choose next subplan */
>> +    int         pa_first_plan;  /* plan to choose while wrapping around plans */
>>
On 3 April 2017 at 17:13, Amit Langote wrote:
> Hi Amit,
>
> Thanks for updating the patch. Since ddl.sgml got updated on Saturday,
> patch needs a rebase.
Rebased now.
>
>> On 31 March 2017 at 16:54, Amit Khandekar wrote:
>>> On 31 March 2017 at 14:04, Amit Lan
is not valid. But I think till then we should follow some common
strategy we have been following.
BTW all of the above points apply only for non-partial plans. For
partial plans, what we have done in the patch is : Take the highest of
the per-subplan parallel_workers, and make sure tha
On 6 April 2017 at 07:33, Andres Freund wrote:
> On 2017-04-05 14:52:38 +0530, Amit Khandekar wrote:
>> This is what the earlier versions of my patch had done : just add up
>> per-subplan parallel_workers (1 for non-partial subplan and
>> subpath->parallel_workers for p
On 7 April 2017 at 20:35, Andres Freund wrote:
>> But for costs such as (4, 4, 4, 20 times), the logic would give
>> us 20 workers because we want to finish the Append in 4 time units;
>> and this what we want to avoid when we go with
>> don't-allocate-too-many-workers approach.
>
> I guess,
palloc0(sizeof(TupleConversionMap*) * nplans);
On 15 June 2017 at 23:06, Amit Khandekar wrote:
> On 13 June 2017 at 15:40, Amit Khandekar wrote:
>> While rebasing my patch for the below recent commit, I realized that a
>> similar issue exists for the
) we jump
the current map position for each successive subplan, whereas in my
patch, in ExecInsert() we deduce the position of the right map to be
fetched using the position of the current resultRelInfo in the
mtstate->resultRelInfo[] array. I think your way is more consistent
with the existing c
On 20 June 2017 at 03:46, Robert Haas wrote:
> On Thu, Jun 15, 2017 at 1:36 PM, Amit Khandekar
> wrote:
>> Attached patch v10 fixes the above. In the existing code, where it
>> builds WCO constraints for each leaf partition; with the patch, that
>> code now is applicable
On 21 June 2017 at 00:23, Robert Haas wrote:
> On Tue, Jun 20, 2017 at 2:54 AM, Amit Khandekar
> wrote:
>>> I guess I don't see why it should work like this. In the INSERT case,
>>> we must build withCheckOption objects for each partition because those
>>>
>>> for some reason, the comments don't explain what that reason is.
>>
>> Yep, it's more appropriate to use
>> ModifyTableState->rootResultRelationInfo->ri_RelationDesc somehow. That
>> is, if answer to the question I raised above is positive.
>Fro
On 22 June 2017 at 01:41, Robert Haas wrote:
>>> Second, it will amount to a functional bug if you get a
>>> different answer than the planner did.
>>
>> Actually, the per-leaf WCOs are meant to be executed on the
>> destination partitions where the tuple is moved, while the WCOs
>> belonging to t
On 26 June 2017 at 08:37, Amit Khandekar wrote:
> On 22 June 2017 at 01:41, Robert Haas wrote:
>>>> Second, it will amount to a functional bug if you get a
>>>> different answer than the planner did.
>>>
>>> Actually, the per-leaf WCOs are meant to
On 22 June 2017 at 01:57, Robert Haas wrote:
> On Wed, Jun 21, 2017 at 1:38 PM, Amit Khandekar
> wrote:
>>>> Yep, it's more appropriate to use
>>>> ModifyTableState->rootResultRelationInfo->ri_RelationDesc somehow. That
>>>> is, if answer t
On 29 June 2017 at 07:42, Amit Langote wrote:
> Hi Amit,
>
> On 2017/06/28 20:43, Amit Khandekar wrote:
>> In attached patch v12
>
> The patch no longer applies and fails to compile after the following
> commit was made yesterday:
>
> commit 501ed02cf6f4f60c335
approach above: If we don't find any
updated partition-keys in any of them, well and good. If we do find,
fall back to approach 3: For each of the update resultrels, use the
new rd_partcheckattrs bitmap to know if it uses any of the updated
columns. This would be faster than pulling up at
ely.
find_inheritance_children() needs to return the oids in canonical
order. So find_inheritance_children() needs to re-use the part of
RelationBuildPartitionDesc() where it generates those oids in that
order. I am checking this part, and am going to come up with an
approach based on findings.
On 4 July 2017 at 14:48, Amit Khandekar wrote:
> On 4 July 2017 at 14:38, Amit Langote wrote:
>> On 2017/07/04 17:25, Etsuro Fujita wrote:
>>> On 2017/07/03 18:54, Amit Langote wrote:
>>>> On 2017/07/02 20:10, Robert Haas wrote:
>>>>> T
On 4 July 2017 at 15:23, Amit Khandekar wrote:
> On 4 July 2017 at 14:48, Amit Khandekar wrote:
>> On 4 July 2017 at 14:38, Amit Langote wrote:
>>> On 2017/07/04 17:25, Etsuro Fujita wrote:
>>>> On 2017/07/03 18:54, Amit Langote wrote:
>>>>> On 201
On 30 June 2017 at 15:10, Rafia Sabih wrote:
>
> On Tue, Apr 4, 2017 at 12:37 PM, Amit Khandekar
> wrote:
>>
>> Attached is an updated patch v13 that has some comments changed as per
>> your review, and also rebased on latest master.
>
>
> This is not appli
On 5 July 2017 at 15:12, Amit Khandekar wrote:
> Like I mentioned upthread... in expand_inherited_rtentry(), if we
> replace find_all_inheritors() with something else that returns oids in
> canonical order, that will change the order in which children tables
> get locked, which i
ws junk value in one
of the columns of the row in the error message emitted for the
WithCheckOption violation.
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company
set_slot_descriptor.patch
Description: Binary data
On 13 July 2017 at 22:39, Amit Khandekar wrote:
> Attached is a WIP patch (make_resultrels_ordered.patch) that generates
> the result rels in canonical order. This patch is kept separate from
> the update-partition-key patch, and can be applied on master branch.
Attached update-partitio
On 24 July 2017 at 12:11, Amit Langote wrote:
> Hi Amit,
>
> On 2017/07/24 14:09, Amit Khandekar wrote:
>>>> On 2017/07/10 14:15, Etsuro Fujita wrote:
>>>> Another thing I noticed is the error handling in ExecWithCheckOptions; it
>>>> doesn't
On 25 July 2017 at 15:02, Rajkumar Raghuwanshi
wrote:
> On Mon, Jul 24, 2017 at 11:23 AM, Amit Khandekar
> wrote:
>>
>>
>> Attached update-partition-key_v13.patch now contains this
>> make_resultrels_ordered.patch changes.
>>
>
> I have appl
artitionDispatchInfo().
> On another note, did you do anything about the suggestion Thomas made
> in
> http://postgr.es/m/CAEepm=3sc_j1zwqdyrbu4dtfx5rhcamnnuaxrkwzfgt9m23...@mail.gmail.com
> ?
This is still pending on me; plus I think there are some more points.
I need to go over those an
ons (and I think even
RETURNING) can have a subquery.
>
> Thanks,
> Amit
>
>
--
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company
not update) while inserting a routed tuple.
Use getASTriggerResultRelInfo() for attrno mapping, rather than first
resultrel, for generating child WCO/RETURNING expression.
Address Robert's review comments on make_resultrel_ordered.patch.
pgindent.
[1]
https://www.postgresql.org/message-id/d86d27e
ence here */
+ if (found_whole_row)
+     elog(ERROR, "unexpected whole-row reference found in partition key");
Instead of callers of map_partition_varattnos() reporting error, we
can have map_partition_varattnos() itself report error. Instead of the
found_whole_row parameter of map_partition_varattnos(), we can have
error_on_whole_row parameter. So callers who don't expect whole row,
would pass error_on_whole_row=true to map_partition_varattnos(). This
will simplify the resultant code a bit.
--
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company
On 2 August 2017 at 14:38, Amit Langote wrote:
> On 2017/07/29 2:45, Amit Khandekar wrote:
>> On 28 July 2017 at 20:10, Robert Haas wrote:
>>> On Wed, Jul 26, 2017 at 2:13 AM, Amit Langote wrote:
>>>> I checked that we get the same result relation order with both
On 2 August 2017 at 11:51, Amit Langote wrote:
> Thanks Fujita-san and Amit for reviewing.
>
> On 2017/08/02 1:33, Amit Khandekar wrote:
>> On 1 August 2017 at 15:11, Etsuro Fujita wrote:
>>> On 2017/07/31 18:56, Amit Langote wrote:
>>>> Yes, that's
On 3 August 2017 at 11:00, Amit Langote wrote:
> Thanks for the review.
>
> On 2017/08/03 13:54, Amit Khandekar wrote:
>> On 2 August 2017 at 11:51, Amit Langote wrote:
>>> On 2017/08/02 1:33, Amit Khandekar wrote:
>>>> Instead of callers of map_partition_vara
>
> Below are the TODOS at this point :
>
> Fix for bug reported by Rajkumar about update with join.
I had explained the root issue of this bug here : [1]
Attached patch includes the fix, which is explained below.
Currently in the patch, there is a check if the tuple is concurrently
deleted by ot
eparing the ModifyTable plan; the PartitionDispatch
data structure returned by RelationGetPartitionDispatchInfo() should
be stored in that plan, and then the execution-time fields in
PartitionDispatch would be populated in ExecInitModifyTable().
--
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Post
On 9 August 2017 at 19:05, Robert Haas wrote:
> On Wed, Jul 5, 2017 at 7:53 AM, Amit Khandekar wrote:
>>> This is not applicable on the latest head i.e. commit --
>>> 08aed6604de2e6a9f4d499818d7c641cbf5eb9f7, looks like need a rebasing.
>>
>> Thanks for notifying
the partition tree using these descriptors similar to how it
is traversed in RelationGetPartitionDispatchInfo()? Maybe to avoid
code duplication for traversing, we can have a common API.
Still looking at RelationGetPartitionDispatchInfo() changes ...
--
Thanks,
-Amit Khandekar
EnterpriseDB Co
t be due to some other reasons. I will investigate this, and the
other queries.
--
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company
On 17 August 2017 at 06:39, Amit Langote wrote:
> Hi Amit,
>
> Thanks for the comments.
>
> On 2017/08/16 20:30, Amit Khandekar wrote:
>> On 16 August 2017 at 11:06, Amit Langote
>> wrote:
>>
>>> Attached updated patches.
>>
>> Thanks Am
o just
generate oids, and keep RelationGetPartitionDispatchInfo() intact, to
be used only for tuple routing.
But I haven't yet checked Ashutosh's requirements, which suggest
that it does not help to even get the oid list.
>
> --
> Robert Haas
> EnterpriseDB: http://www.ent
use it's essentially free to create
> while we are walking the partition tree in
> RelationGetPartitionDispatchInfo() and it seems undesirable to make the
> caller compute that information (indexes) by traversing the partition tree
> all over again, if it doesn't otherwise ha
t,
> only the parent's triggers fire.
I would also opt for this behaviour.
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company
On 2 May 2017 at 18:17, Robert Haas wrote:
> On Tue, Apr 4, 2017 at 7:11 AM, Amit Khandekar wrote:
>> Attached updated patch v7 has the above changes.
>
> This no longer applies. Please rebase.
Thanks Robert for informing about this.
My patch has a separate function for emittin
ave a list
of next few blocks, say :
w1 : 1, 5, 9, 13
w2 : 2, 6, 10, 14
w3 : 3, 7, 11, 15.
w4 : .
Maybe the leader worker would do the accounting and store the
instructions for each of the workers at individual locations in shared
memory, so there won't be any contention while accessing
On 11 May 2017 at 17:23, Amit Kapila wrote:
> On Fri, Mar 17, 2017 at 4:07 PM, Amit Khandekar
> wrote:
>> On 4 March 2017 at 12:49, Robert Haas wrote:
>>> On Thu, Mar 2, 2017 at 11:53 AM, Amit Khandekar
>>> wrote:
>>>> I think it does not make s
r will occur.
>
> Doesn't this error case indicate that this needs to be integrated with
> Default partition patch of Rahila or that patch needs to take care
> this error case?
> Basically, if there is no matching partition, then move it to default
> partition.
Will have a
On 12 May 2017 at 08:30, Amit Kapila wrote:
> On Thu, May 11, 2017 at 5:41 PM, Amit Khandekar
> wrote:
>> On 11 May 2017 at 17:23, Amit Kapila wrote:
>>> On Fri, Mar 17, 2017 at 4:07 PM, Amit Khandekar
>>> wrote:
>>>> On 4 March 2017 at 12:49, Robert
On 12 May 2017 at 10:01, Amit Kapila wrote:
> On Fri, May 12, 2017 at 9:27 AM, Amit Kapila wrote:
>> On Thu, May 11, 2017 at 5:45 PM, Amit Khandekar
>> wrote:
>>> On 11 May 2017 at 17:24, Amit Kapila wrote:
>>>> Few comments:
>>>> 1.
>>&
non-default partition, it moves into that partition. I
think we can debate on whether the row should stay in the default
partition or move. I think it should be moved, since now the row has a
suitable partition.
--
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company
On 12 May 2017 at 14:56, Amit Kapila wrote:
> I think it might be better to summarize all the options discussed
> including what the patch has and see what most people consider as
> sensible.
Yes, makes sense. Here are the options that were discussed so far for
ROW triggers :
Option 1 : (the pat
On 17 May 2017 at 17:29, Rushabh Lathia wrote:
>
>
> On Wed, May 17, 2017 at 12:06 PM, Dilip Kumar wrote:
>>
>> On Fri, May 12, 2017 at 4:17 PM, Amit Khandekar
>> wrote:
>> > Option 3
>> >
>> >
>> > BR, AR delete triggers
not firing update triggers on any of the partitions. So, I prefer
option 2 over option 3 , i.e. make sure to fire BR and AR update
triggers. Actually option 2 is what Robert had proposed in the
beginning.
--
Thanks,
-Amit Khandekar
EnterpriseDB Corporation
The Postgres Database Company