Re: [HACKERS] Block level parallel vacuum

2019-10-24 Thread Dilip Kumar
On Thu, Oct 24, 2019 at 4:21 PM Amit Kapila wrote: > > On Thu, Oct 24, 2019 at 11:51 AM Dilip Kumar wrote: > > > > On Fri, Oct 18, 2019 at 12:18 PM Dilip Kumar wrote: > > > > > > On Fri, Oct 18, 2019 at 11:25 AM Amit Kapila > > > wrote: > >

Re: [HACKERS] Block level parallel vacuum

2019-10-24 Thread Dilip Kumar
On Thu, Oct 24, 2019 at 8:12 PM Masahiko Sawada wrote: > > On Thu, Oct 24, 2019 at 3:21 PM Dilip Kumar wrote: > > > > On Fri, Oct 18, 2019 at 12:18 PM Dilip Kumar wrote: > > > > > > On Fri, Oct 18, 2019 at 11:25 AM Amit Kapila > > > wrote: > >

Re: [HACKERS] Block level parallel vacuum

2019-10-24 Thread Dilip Kumar
On Fri, Oct 25, 2019 at 10:22 AM Masahiko Sawada wrote: > > On Fri, Oct 25, 2019 at 12:44 PM Dilip Kumar wrote: > > > > On Thu, Oct 24, 2019 at 8:12 PM Masahiko Sawada > > wrote: > > > > > > On Thu, Oct 24, 2019 at 3:21 PM Dilip Kumar wrote: > &

Re: Typos and inconsistencies in code

2019-10-28 Thread Dilip Kumar
uot;btree search" 2. /* copy from .../src/backend/utils/adt/datetime.c - * and changesd struct pg_tm to struct tm + * and changes struct pg_tm to struct tm */ Seems like this comment meant "Changed struct pg_tm to struct tm" -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: [HACKERS] Block level parallel vacuum

2019-10-29 Thread Dilip Kumar
On Tue, Oct 29, 2019 at 1:59 PM Masahiko Sawada wrote: > > On Tue, Oct 29, 2019 at 4:06 PM Masahiko Sawada wrote: > > > > On Mon, Oct 28, 2019 at 2:13 PM Dilip Kumar wrote: > > > > > > On Thu, Oct 24, 2019 at 4:33 PM Dilip Kumar wrote: > > > &

Re: [HACKERS] Block level parallel vacuum

2019-10-28 Thread Dilip Kumar
On Tue, Oct 29, 2019 at 10:01 AM Masahiko Sawada wrote: > > On Mon, Oct 28, 2019 at 6:08 PM Dilip Kumar wrote: > > > > On Fri, Oct 25, 2019 at 9:19 PM Masahiko Sawada > > wrote: > > > > > > On Fri, Oct 25, 2019 at 2:06 PM Dilip Kumar wrote: > &

Re: Questions/Observations related to Gist vacuum

2019-10-16 Thread Dilip Kumar
and expandable, to make multi-pass vacuums even more rare. So I > > don't think we need to jump through many hoops to optimize the multi-pass > > case. > > > > Yeah, that will be a good improvement. +1 -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: Questions/Observations related to Gist vacuum

2019-10-17 Thread Dilip Kumar
On Thu, 17 Oct 2019, 14:59 Amit Kapila, wrote: > On Thu, Oct 17, 2019 at 1:47 PM Dilip Kumar wrote: > > > > On Thu, Oct 17, 2019 at 12:27 PM Heikki Linnakangas > wrote: > > > > > > On 17/10/2019 05:31, Amit Kapila wrote: > > > > > > >

Re: Questions/Observations related to Gist vacuum

2019-10-17 Thread Dilip Kumar
On Thu, Oct 17, 2019 at 12:27 PM Heikki Linnakangas wrote: > > On 17/10/2019 05:31, Amit Kapila wrote: > > On Wed, Oct 16, 2019 at 11:20 AM Dilip Kumar wrote: > >> > >> On Tue, Oct 15, 2019 at 7:13 PM Heikki Linnakangas wrote: > >>> > >>>

Re: [HACKERS] Block level parallel vacuum

2019-10-17 Thread Dilip Kumar
k fine. > > So you meant that all workers share the cost count and if a parallel > vacuum worker increase the cost and it reaches the limit, does the > only one worker sleep? Is that okay even though other parallel workers > are still running and then the sleep might not help? > I agree

Re: Fix of fake unlogged LSN initialization

2019-10-27 Thread Dilip Kumar
On Fri, Oct 25, 2019 at 7:42 AM tsunakawa.ta...@fujitsu.com wrote: > > From: Dilip Kumar > > I have noticed that in StartupXlog also we reset it with 1, you might > > want to fix that as well? > > > > StartupXLOG > > { > > ... > > /* > >

Re: [HACKERS] Block level parallel vacuum

2019-10-27 Thread Dilip Kumar
On Fri, Oct 25, 2019 at 9:19 PM Masahiko Sawada wrote: > > On Fri, Oct 25, 2019 at 2:06 PM Dilip Kumar wrote: > > > > On Fri, Oct 25, 2019 at 10:22 AM Masahiko Sawada > > wrote: > > > > > > For more detail of my idea it is that the first worker who en

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2019-10-21 Thread Dilip Kumar
On Fri, Oct 18, 2019 at 5:32 PM Amit Kapila wrote: > > On Mon, Oct 14, 2019 at 3:09 PM Dilip Kumar wrote: > > > > On Thu, Oct 3, 2019 at 4:03 AM Tomas Vondra > > wrote: > > > > > > > > > Sure, I wasn't really proposing to adding all s

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2019-11-19 Thread Dilip Kumar
On Tue, Nov 19, 2019 at 5:23 PM Amit Kapila wrote: > > On Mon, Nov 18, 2019 at 5:02 PM Dilip Kumar wrote: > > > > On Fri, Nov 15, 2019 at 4:19 PM Amit Kapila wrote: > > > > > > On Fri, Nov 15, 2019 at 4:01 PM Dilip Kumar wrote: > > > > >

Fastpath while arranging the changes in LSN order in logical decoding

2019-11-24 Thread Dilip Kumar
In logical decoding, while sending the changes to the output plugin we need to arrange them in LSN order. But if there is only one transaction, which is a very common case, then we can avoid building the binary heap. A small patch is attached for the same. -- Regards, Dilip Kumar
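
The single-stream shortcut is easy to picture with a self-contained toy (the names and data layout below are illustrative only, not taken from the attached patch): with one sorted input there is nothing to merge, so the heap machinery can be bypassed entirely.

#include <stdio.h>

/* Toy model of the fastpath: emit items from nstreams sorted streams
 * in global order.  With a single stream (the common one-transaction
 * case) no heap is needed, because the stream is already ordered. */
typedef struct { const int *items; int len; int pos; } Stream;

static void
emit_in_order(Stream *streams, int nstreams)
{
    if (nstreams == 1)
    {
        /* fastpath: one transaction, changes already in LSN order */
        for (int i = 0; i < streams[0].len; i++)
            printf("%d\n", streams[0].items[i]);
        return;
    }

    /* slowpath: repeatedly pick the smallest head among all streams
     * (the real code uses a binary heap; a linear scan keeps this
     * toy example short) */
    for (;;)
    {
        int best = -1;
        for (int s = 0; s < nstreams; s++)
            if (streams[s].pos < streams[s].len &&
                (best < 0 || streams[s].items[streams[s].pos] <
                             streams[best].items[streams[best].pos]))
                best = s;
        if (best < 0)
            break;
        printf("%d\n", streams[best].items[streams[best].pos++]);
    }
}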

Re: [HACKERS] Block level parallel vacuum

2019-12-02 Thread Dilip Kumar
effect. > If we have 0 missed indexes - parallel vacuum will run as in current > implementation, with leader participation. +1 -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: [HACKERS] Block level parallel vacuum

2019-11-04 Thread Dilip Kumar
roach (a) with dividing the cost limit)? Currently the approach (b) > seems better but I'm concerned that it might unnecessarily delay > vacuum if some indexes are very small or bulk-deletions of indexes > does almost nothing such as brin. Are you worried that some of the workers might not have much I/O to do, yet we still divide the cost limit equally? If that is the case, isn't it also the case with the autovacuum workers? -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: [HACKERS] Block level parallel vacuum

2019-11-04 Thread Dilip Kumar
On Mon, Nov 4, 2019 at 2:11 PM Masahiko Sawada wrote: > > On Mon, 4 Nov 2019 at 17:26, Dilip Kumar wrote: > > > > On Mon, Nov 4, 2019 at 1:00 PM Masahiko Sawada > > wrote: > > > > > > On Mon, 4 Nov 2019 at 14:02, Amit Kapila wrote: > > > >

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2019-11-04 Thread Dilip Kumar
30.079) So your result shows that with "streaming on", performance is degrading? By any chance did you try to see where the bottleneck is? -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2019-11-20 Thread Dilip Kumar
On Wed, Nov 20, 2019 at 8:22 PM Dilip Kumar wrote: > > On Wed, Nov 20, 2019 at 11:15 AM Dilip Kumar wrote: > > > > On Tue, Nov 19, 2019 at 5:23 PM Amit Kapila wrote: > > > > > > On Mon, Nov 18, 2019 at 5:02 PM Dilip Kumar wrote: > > > > >

Re: [HACKERS] Block level parallel vacuum

2019-11-20 Thread Dilip Kumar
> > > > We can probably copy the stats in local memory instead of pointing it > to dsm after bulk-deletion, but I think that would unnecessary > overhead and doesn't sound like a good idea. I agree that it will be unnecessary overhead. > > > > BTW, what kind of API change you have in mind for > > > the approach you are suggesting? > > > > I was thinking to add a new API, say LaunchParallelNWorkers(pcxt, n), > > where n is the number of workers the caller wants to launch and should > > be lower than the value in the parallel context. > > > > For that won't you need to duplicate most of the code of > LaunchParallelWorkers or maybe move the entire code in > LaunchParallelNWorkers and then LaunchParallelWorkers can also call > it. Another idea could be to just extend the existing API > LaunchParallelWorkers to take input parameter as the number of > workers, do you see any problem with that or is there a reason you > prefer to write a new API for this? I think we can pass an extra parameter to LaunchParallelWorkers, wherein we can try to launch min(pcxt->nworkers, n). Or we can add an assert that n <= pcxt->nworkers. -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com
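
For illustration, a standalone sketch of the extra-parameter variant is below; the names and the clamping behaviour are assumptions drawn from this exchange, not PostgreSQL's committed API.

/* Hypothetical sketch of the extra-parameter idea discussed above;
 * this is NOT the committed API, and LaunchOneWorker() is a stand-in
 * for the real launch loop body. */
typedef struct ParallelContext { int nworkers; } ParallelContext;

static void
LaunchOneWorker(ParallelContext *pcxt, int i)
{
    (void) pcxt; (void) i;      /* worker start-up elided */
}

static void
LaunchParallelWorkers(ParallelContext *pcxt, int nworkers_to_launch)
{
    /* clamp to the number registered in the context; alternatively an
     * Assert(nworkers_to_launch <= pcxt->nworkers) would also do */
    int n = nworkers_to_launch < pcxt->nworkers
            ? nworkers_to_launch : pcxt->nworkers;

    for (int i = 0; i < n; i++)
        LaunchOneWorker(pcxt, i);
}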

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2019-11-14 Thread Dilip Kumar
On Thu, Nov 14, 2019 at 12:10 PM Amit Kapila wrote: > > On Thu, Nov 14, 2019 at 9:37 AM Dilip Kumar wrote: > > > > On Wed, Nov 13, 2019 at 5:55 PM Amit Kapila wrote: > > > > > > On Thu, Oct 3, 2019 at 1:18 PM Dilip Kumar wrote: > > > >

Re: cost based vacuum (parallel)

2019-11-11 Thread Dilip Kumar
On Mon, Nov 11, 2019 at 4:23 PM Amit Kapila wrote: > > On Mon, Nov 11, 2019 at 12:59 PM Dilip Kumar wrote: > > > > On Mon, Nov 11, 2019 at 9:43 AM Dilip Kumar wrote: > > > > > > On Fri, Nov 8, 2019 at 11:49 AM Amit Kapila > > > wrote: > >

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2019-10-10 Thread Dilip Kumar
On Thu, Oct 3, 2019 at 1:18 PM Dilip Kumar wrote: > > I have attempted to test the performance of (Stream + Spill) vs > (Stream + BGW pool) and I can see the similar gain what Alexey had > shown[1]. > > In addition to this, I have rebased the latest patchset [2] without >

Re: maintenance_work_mem used by Vacuum

2019-10-09 Thread Dilip Kumar
ntrols more than one things. If this is > an index AM(GIN) specific issue we might rather want to control the > memory limit of pending list cleanup by a separate GUC parameter like > gin_pending_list_limit, say gin_pending_list_work_mem. And we can > either set the (t

Re: maintenance_work_mem used by Vacuum

2019-10-09 Thread Dilip Kumar
On Wed, Oct 9, 2019 at 2:40 PM Amit Kapila wrote: > > On Wed, Oct 9, 2019 at 2:00 PM Dilip Kumar wrote: > > > > On Wed, Oct 9, 2019 at 10:22 AM Masahiko Sawada > > wrote: > > > > > > On Tue, Oct 8, 2019 at 2:45 PM Amit Kapila > > > wrote: &

Re: [HACKERS] Block level parallel vacuum

2019-10-05 Thread Dilip Kumar
On Fri, Oct 4, 2019 at 3:35 PM Dilip Kumar wrote: > > On Fri, Oct 4, 2019 at 11:01 AM Amit Kapila wrote: > > > > On Fri, Oct 4, 2019 at 10:28 AM Masahiko Sawada > > wrote: > >> > Some more comments.. > 1. > + for (idx = 0; idx <

Re: [HACKERS] Block level parallel vacuum

2019-10-04 Thread Dilip Kumar
s not uniform; the previous comment is using 2 spaces and the newly added one is using 1 space. -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: [HACKERS] Block level parallel vacuum

2019-10-03 Thread Dilip Kumar
for for_cleanup is true or false. The only difference is in the error message, whether we say index cleanup or index vacuuming; otherwise the code is the same for both cases. Can't we create the string based on the value of for_cleanup and append it to the error message? That way we can avoid duplicating this in many places. -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2019-10-13 Thread Dilip Kumar
e, so to make sure the > + * accounting is correct we'll make it look like we're removing the > + * change now (with the old size), and then re-add it at the end. > + */ > + ReorderBufferChangeMemoryUpdate(rb, change, false); > > It is not very clear why this change is required. Basically, this is done at > commit time after which actually we shouldn't attempt to spill these changes. > This is mentioned in comments as well, but it is not clear if that is the > case, then how and when accounting can create a problem. If possible, can > you explain it with an example? > IIUC, we are keeping track of the memory in ReorderBuffer, which is common across the transactions. So even though this transaction is committing and will not spill to disk, we need to keep the memory accounting correct for future changes in other transactions. -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com
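
The invariant being described can be modelled in a few lines (names invented for the example; this is not the patch's code): the counter is global to the ReorderBuffer, so every size change must pass through it, even for a transaction that is committing.

#include <assert.h>
#include <stddef.h>

/* rb_total_size models ReorderBuffer-wide memory accounting shared by
 * ALL transactions.  Even a committing transaction (which will never
 * spill) must untrack a change before processing and re-track it
 * afterwards, or the shared total drifts and later spill decisions
 * for other transactions are made against a wrong number. */
static size_t rb_total_size;

static void
update_memory(size_t *change_size, size_t new_size)
{
    assert(rb_total_size >= *change_size);
    rb_total_size -= *change_size;   /* "remove" with the old size */
    *change_size = new_size;         /* processing may alter the size */
    rb_total_size += *change_size;   /* "re-add" with the new size */
}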

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2019-10-14 Thread Dilip Kumar
-limit-ReorderBuffer. > > > > Sure, I wasn't really proposing to adding all stats from that patch, > including those related to streaming. We need to extract just those > related to spilling. And yes, it needs to be moved right after 0001. > I have extracted the spilling-related code into a separate patch on top of 0001. I have also fixed some bugs and addressed review comments, attached as a separate patch. Later I can merge it into the main patch if you agree with the changes. -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com 0001-Add-logical_decoding_work_mem-to-limit-ReorderBuffer.patch Description: Binary data bugs_and_review_comments_fix.patch Description: Binary data 0002-Track-statistics-for-spilling.patch Description: Binary data

Re: [HACKERS] Block level parallel vacuum

2019-10-14 Thread Dilip Kumar
e, > then we don't have the problem for gist indexes. > > This is not directly related to this patch, so we can discuss these > observations in a separate thread as well, but before that, I wanted > to check your opinion to see if this makes sense to you as this will > help us in moving this patch forward. -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: [HACKERS] Block level parallel vacuum

2019-12-18 Thread Dilip Kumar
nhance or > provide the new APIs to support a parallel vacuum if we come across > such a usage. I think we should just modify the comments atop > VACUUM_OPTION_NO_PARALLEL to mention this. I think this should be > good enough for the first version of parallel vacuum considering we > are able to support a parallel vacuum for all in-core indexes. > > Thoughts? +1 -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2019-12-09 Thread Dilip Kumar
On Tue, Dec 10, 2019 at 9:52 AM Amit Kapila wrote: > > On Mon, Dec 2, 2019 at 2:02 PM Dilip Kumar wrote: > > > > On Sun, Dec 1, 2019 at 7:58 AM Michael Paquier wrote: > > > > > > On Fri, Nov 22, 2019 at 01:18:11PM +0530, Dilip Kumar wrote: > > >

Re: [HACKERS] Block level parallel vacuum

2019-12-03 Thread Dilip Kumar
t be a good idea > for partial indexes. > > > That is, we cannot do parallel vacuum on the table smaller than that > > value. > > > > Yeah, that makes sense, but I feel if we can directly target index > scan size that may be a better option. If we can't use > min_parallel_index_scan_size, then we can consider this. > > -- > With Regards, > Amit Kapila. > EnterpriseDB: http://www.enterprisedb.com -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: Questions/Observations related to Gist vacuum

2019-12-09 Thread Dilip Kumar
new version with a slightly modified commit message. Your changes look fine to me. Thanks! -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Wrong assert in TransactionGroupUpdateXidStatus

2019-12-10 Thread Dilip Kumar
not because that depends upon whether it gets the CLogControlLock or not. -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com v1-0001-Change-wrong-assert-while-group-updating-xid-stat.patch Description: Binary data

Re: Parallel copy

2020-02-29 Thread Dilip Kumar
ome/dilipkumar/a.csv' WITH (FORMAT 'csv', HEADER true); truncate table t; create index idx on t(a); create index idx1 on t(c); -- Test CopyFrom and measure with perf: copy t from '/home/dilipkumar/a.csv' WITH (FORMAT 'csv', HEADER true); -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: logical replication empty transactions

2020-03-01 Thread Dilip Kumar
work bandwidth for empty transactions. I have briefly reviewed the patch and it looks good to me. -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: [HACKERS] Block level parallel vacuum

2020-01-21 Thread Dilip Kumar
to me, except we had better use parentheses for the variable passed to the macro. +#define MAXDEADTUPLES(max_size) \ + ((max_size - offsetof(LVDeadTuples, itemptrs)) / sizeof(ItemPointerData)) change to -> (((max_size) - offsetof(LVDeadTuples, itemptrs)) / sizeof(ItemPointerData)) -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com
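
The reason for the parentheses is the usual macro-precedence trap; here is a minimal standalone demonstration, with constants chosen purely to expose the problem:

#include <stdio.h>

#define BAD(max_size)  ((max_size - 8) / 4)     /* argument unparenthesized */
#define GOOD(max_size) (((max_size) - 8) / 4)   /* argument parenthesized   */

int
main(void)
{
    /* BAD(1 << 8) expands to ((1 << 8 - 8) / 4); since '-' binds
     * tighter than '<<', that is (1 << 0) / 4 = 0.  GOOD(1 << 8)
     * expands to (((1 << 8) - 8) / 4) = 62, as intended. */
    printf("BAD:  %d\n", BAD(1 << 8));
    printf("GOOD: %d\n", GOOD(1 << 8));
    return 0;
}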

Re: [HACKERS] Moving relation extension locks out of heavyweight lock manager

2020-03-05 Thread Dilip Kumar
ignore checking these two types of locks > > > because point (a) ensures that those won't lead to deadlock. One idea > > > could be that FindLockCycleRecurseMember just ignores these two types > > > of locks by checking the lock tag. > > > > Thanks Amit for summary.

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2020-03-03 Thread Dilip Kumar
Child(ctx->reorder, > txid, > XLogRecGetXid(record), > buf.origptr); Make sense. I will change this in the patch. -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: Fastpath while arranging the changes in LSN order in logical decoding

2020-03-03 Thread Dilip Kumar
On Wed, Mar 4, 2020 at 3:02 AM David Zhang wrote: > > Hi Dilip, > > I repeated the same test cases again and can't reproduce the > disconnection issue after applied your new patch. Thanks for the confirmation. -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: logical replication empty transactions

2020-03-03 Thread Dilip Kumar
ery big transactions then we will have to process them again after the restart. OTOH, if we set it based on an interval, then even if there is not much work going on, we still end up sending the empty transaction, as pointed out by Amit. -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: logical replication empty transactions

2020-03-03 Thread Dilip Kumar
On Wed, Mar 4, 2020 at 10:50 AM Amit Kapila wrote: > > On Wed, Mar 4, 2020 at 9:52 AM Dilip Kumar wrote: > > > > On Wed, Mar 4, 2020 at 9:12 AM Amit Kapila wrote: > > > > > > On Wed, Mar 4, 2020 at 7:17 AM Euler Taveira > > > wrote: > > &

Re: [HACKERS] Moving relation extension locks out of heavyweight lock manager

2020-03-03 Thread Dilip Kumar
== NULL)), > > we also check lock->tag and call it a conflict for these two locks. > > c. The deadlock detector can ignore checking these two types of locks > > because point (a) ensures that those won't lead to deadlock. One idea > > could be that FindLockCycleRecurseMemb

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2020-03-04 Thread Dilip Kumar
d any significant > > overhead. > > > > I have briefly looked at the original patch and it seems the > additional overhead is only when subtransactions are involved, so > ideally, it shouldn't impact default pgbench, but there is no harm in > checking. It might be that we need to build a custom script with > subtransactions involved to measure the impact, but I think it is > worth checking I agree. I will test the same and post the results. -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: logical replication empty transactions

2020-03-04 Thread Dilip Kumar
On Wed, Mar 4, 2020 at 3:47 PM Amit Kapila wrote: > > On Wed, Mar 4, 2020 at 11:16 AM Dilip Kumar wrote: > > > > On Wed, Mar 4, 2020 at 10:50 AM Amit Kapila wrote: > > > > > > On Wed, Mar 4, 2020 at 9:52 AM Dilip Kumar wrote: > > > > >

Re: Fastpath while arranging the changes in LSN order in logical decoding

2020-03-02 Thread Dilip Kumar
g on cfbot. > > Do you know when you will be able to supply an updated patch? I will try to send it in a day or two. -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: logical replication empty transactions

2020-03-02 Thread Dilip Kumar
On Mon, Mar 2, 2020 at 4:56 PM Amit Kapila wrote: > > On Mon, Mar 2, 2020 at 9:01 AM Dilip Kumar wrote: > > > > On Sat, Nov 9, 2019 at 7:29 AM Euler Taveira wrote: > > > > > > Em seg., 21 de out. de 2019 às 21:20, Jeff Janes > > > escreveu: > &g

Re: Fastpath while arranging the changes in LSN order in logical decoding

2020-03-02 Thread Dilip Kumar
On Tue, Mar 3, 2020 at 8:42 AM Dilip Kumar wrote: > > On Mon, Mar 2, 2020 at 7:27 PM David Steele wrote: > > > > Hi Dilip, > > > > On 2/18/20 7:30 PM, David Zhang wrote: > > > After manually applied the patch, a diff regenerated is attached. > > >

Re: Fastpath while arranging the changes in LSN order in logical decoding

2020-03-02 Thread Dilip Kumar
f I did something wrong, and if a new patch is available, I > > can re-run the test on the same environment. Thanks for testing and rebasing. I think one of the hunks is missing in your rebased version; that could be the reason for the failure. Can you test with my latest version? -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: logical replication empty transactions

2020-03-03 Thread Dilip Kumar
On Tue, Mar 3, 2020 at 1:54 PM Amit Kapila wrote: > > On Tue, Mar 3, 2020 at 9:35 AM Dilip Kumar wrote: > > > > On Mon, Mar 2, 2020 at 4:56 PM Amit Kapila wrote: > > > > > > > > > One thing that is not clear to me is how will we advance rest

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2020-01-28 Thread Dilip Kumar
On Tue, Jan 28, 2020 at 1:30 PM Amit Kapila wrote: > > On Tue, Jan 28, 2020 at 11:58 AM Dilip Kumar wrote: > > > > On Tue, Jan 28, 2020 at 11:43 AM Amit Kapila > > wrote: > > > > > > > > It seems to me that we need to add all of this new hand

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2020-01-27 Thread Dilip Kumar
On Tue, Jan 28, 2020 at 11:43 AM Amit Kapila wrote: > > On Tue, Jan 28, 2020 at 11:34 AM Dilip Kumar wrote: > > > > On Tue, Jan 28, 2020 at 11:28 AM Amit Kapila > > wrote: > > > > > > On Wed, Jan 22, 2020 at 10:30 AM Dilip Kumar > > > wrot

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2020-01-27 Thread Dilip Kumar
On Tue, Jan 28, 2020 at 11:28 AM Amit Kapila wrote: > > On Wed, Jan 22, 2020 at 10:30 AM Dilip Kumar wrote: > > > > On Tue, Jan 14, 2020 at 10:44 AM Amit Kapila > > wrote: > > > > > > > > > Hmm, I think this can turn out to be inefficient b

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2020-01-30 Thread Dilip Kumar
On Thu, Jan 30, 2020 at 4:11 PM Amit Kapila wrote: > > On Fri, Jan 10, 2020 at 10:14 AM Dilip Kumar wrote: > > > > On Mon, Jan 6, 2020 at 2:11 PM Amit Kapila wrote: > > > > > > > > > > > Few more comments: > > > ---

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2020-02-03 Thread Dilip Kumar
On Mon, Feb 3, 2020 at 9:51 AM Amit Kapila wrote: > > On Fri, Jan 10, 2020 at 10:53 AM Dilip Kumar wrote: > > > > On Mon, Dec 30, 2019 at 3:43 PM Amit Kapila wrote: > > > > > > > > > > 2. During commit time in DecodeCommit we check whether we nee

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2020-02-03 Thread Dilip Kumar
On Tue, Jan 28, 2020 at 11:43 AM Amit Kapila wrote: > > On Tue, Jan 28, 2020 at 11:34 AM Dilip Kumar wrote: > > > > On Tue, Jan 28, 2020 at 11:28 AM Amit Kapila > > wrote: > > > > > > On Wed, Jan 22, 2020 at 10:30 AM Dilip Kumar > > > wrot

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2020-02-07 Thread Dilip Kumar
On Wed, Feb 5, 2020 at 4:05 PM Amit Kapila wrote: > > On Wed, Feb 5, 2020 at 9:46 AM Dilip Kumar wrote: > > > > Fixed in the latest version sent upthread. > > > > Okay, thanks. I haven't looked at the latest version of patch series > as I was reviewing the

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2020-02-04 Thread Dilip Kumar
On Fri, Jan 31, 2020 at 8:08 AM Amit Kapila wrote: > > On Thu, Jan 30, 2020 at 6:10 PM Dilip Kumar wrote: > > > > On Thu, Jan 30, 2020 at 4:11 PM Amit Kapila wrote: > > > > > > Also, if we need to copy the snapshot here, then do we need to again > > >

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2020-02-04 Thread Dilip Kumar
On Wed, Feb 5, 2020 at 9:27 AM Amit Kapila wrote: > > On Tue, Feb 4, 2020 at 11:00 AM Dilip Kumar wrote: > > > > On Tue, Jan 28, 2020 at 11:43 AM Amit Kapila > > wrote: > > > > > > > > > One more thing we can do is to identify whether the t

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2020-02-10 Thread Dilip Kumar
On Mon, Feb 10, 2020 at 1:52 PM Amit Kapila wrote: > > On Fri, Feb 7, 2020 at 4:18 PM Dilip Kumar wrote: > > > > On Wed, Feb 5, 2020 at 4:05 PM Amit Kapila wrote: > > > > > > On Wed, Feb 5, 2020 at 9:46 AM Dilip Kumar wrote: > > > > &

Re: [HACKERS] Moving relation extension locks out of heavyweight lock manager

2020-02-12 Thread Dilip Kumar
using the "group clear" type of mechanism mainly for two reasons 1) It will unnecessarily make PGPROC structure heavy. 2) For our case, we don't need any specific pieces of information from other waiters, we just need the count. Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2020-01-08 Thread Dilip Kumar
On Thu, Jan 9, 2020 at 9:35 AM Amit Kapila wrote: > > On Wed, Jan 8, 2020 at 1:12 PM Dilip Kumar wrote: > > > > I have observed one more design issue. > > > > Good observation. > > > The problem is that when we > > get a toasted chunks we

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2020-01-08 Thread Dilip Kumar
On Thu, Jan 9, 2020 at 12:09 PM Amit Kapila wrote: > > On Thu, Jan 9, 2020 at 10:30 AM Dilip Kumar wrote: > > > > On Thu, Jan 9, 2020 at 9:35 AM Amit Kapila wrote: > > > > > > On Wed, Jan 8, 2020 at 1:12 PM Dilip Kumar wrote: > > > &g

Re: [Logical Replication] TRAP: FailedAssertion("rel->rd_rel->relreplident == REPLICA_IDENTITY_DEFAULT || rel->rd_rel->relreplident == REPLICA_IDENTITY_FULL || rel->rd_rel->relreplident == REPLICA_IDE

2020-01-08 Thread Dilip Kumar
->relreplident == REPLICA_IDENTITY_FULL || - rel->rd_rel->relreplident == REPLICA_IDENTITY_INDEX); + rel->rd_rel->relreplident == REPLICA_IDENTITY_INDEX || + rel->rd_rel->relreplident == REPLICA_IDENTITY_NOTHING); /* use Oid as relation identifier */ pq_sendint32(out, RelationGetRelid(rel)); -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: [HACKERS] Block level parallel vacuum

2020-01-12 Thread Dilip Kumar
> > > > > > > Yeah, we can improve the situation here. I think we don't need to > > > change the value of params->nworkers at first place if allow > > > lazy_scan_heap to take care of this. Also, I think we shouldn't > > > display warning unless the user has explicitly asked for parallel > > > option. See the fix in the attached patch. > > > > Agreed. But with the updated patch the PARALLEL option without the > > parallel degree doesn't display warning because params->nworkers = 0 > > in that case. So how about restoring params->nworkers at the end of > > vacuum_rel()? > > > > I had also thought on those lines, but I was not entirely sure about > this resetting of workers. Today, again thinking about it, it seems > the idea Mahendra is suggesting that is giving an error if the > parallel degree is not specified seems reasonable to me. This means > Vacuum (parallel), Vacuum (parallel) , etc. will give an > error "parallel degree must be specified". This idea has merit as now > we are supporting a parallel vacuum by default, so a 'parallel' option > without a parallel degree doesn't have any meaning. If we do that, > then we don't need to do anything additional about the handling of > temp tables (other than what patch is already doing) as well. What do > you think? > This idea makes sense to me. -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: Questions/Observations related to Gist vacuum

2020-01-12 Thread Dilip Kumar
and empty leaf pages. They are > + * used for deleting all the empty pages. > */ > After dot, there should be 2 spaces. Earlier, there was 2 spaces. > > Other than that patch looks fine to me. > Thanks for the comment. Amit has already taken care of this before pushing it. -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2020-01-13 Thread Dilip Kumar
On Thu, Jan 9, 2020 at 12:09 PM Amit Kapila wrote: > > On Thu, Jan 9, 2020 at 10:30 AM Dilip Kumar wrote: > > > > On Thu, Jan 9, 2020 at 9:35 AM Amit Kapila wrote: > > > > > > On Wed, Jan 8, 2020 at 1:12 PM Dilip Kumar wrote: > > > &g

Re: Reorderbuffer crash during recovery

2020-01-15 Thread Dilip Kumar
e decode its commit or abort + * record, but we never see those records for crashed transactions. To + * ensure cleanup of these transactions, set final_lsn to that of their + * last change; this causes ReorderBufferRestoreCleanup to do the right + * thing. Final_lsn would have been set with commit_lsn earlier when we + * decode it commit, no need to update in that case + */ + if (txn->final_lsn < change->lsn) + txn->final_lsn = change->lsn; /decode it commit,/decode its commit, -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2020-01-21 Thread Dilip Kumar
.org/message-id/CAFiTN-t8PmKA1X4jEqKmkvs0ggWpy0APWpPuaJwpx2YpfAf97w%40mail.gmail.com -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: Reorderbuffer crash during recovery

2020-01-16 Thread Dilip Kumar
On Fri, Jan 17, 2020 at 7:42 AM vignesh C wrote: > > On Thu, Jan 16, 2020 at 9:17 AM Dilip Kumar wrote: > > > > One minor comment. Otherwise, the patch looks fine to me. > > + /* > > + * We set final_lsn on a transaction when we decode its commit or abort > >

Re: [HACKERS] Block level parallel vacuum

2020-01-16 Thread Dilip Kumar
ost_balance and then uninitialize if nworkers_launched is 0. I am not sure why we need to initialize VacuumSharedCostBalance here, just to perform pg_atomic_write_u32(VacuumSharedCostBalance, VacuumCostBalance)? I think we can initialize it only if nworkers_launched > 0; then we can get rid of the else branch completely. -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: [HACKERS] Block level parallel vacuum

2020-01-16 Thread Dilip Kumar
On Fri, Jan 17, 2020 at 9:36 AM Dilip Kumar wrote: > > On Thu, Jan 16, 2020 at 5:34 PM Amit Kapila wrote: > > > > On Thu, Jan 16, 2020 at 4:46 PM Masahiko Sawada > > wrote: > > > > > > Right. Most indexes (all?) of tables that are used in t

Re: [HACKERS] Block level parallel vacuum

2020-01-16 Thread Dilip Kumar
On Fri, Jan 17, 2020 at 10:44 AM Amit Kapila wrote: > > On Fri, Jan 17, 2020 at 9:36 AM Dilip Kumar wrote: > > > > I have few small comments. > > > > 1. > > logical streaming for large in-progress transactions+ > > + /* Can't perform vacuum in p

Re: [HACKERS] Block level parallel vacuum

2020-01-16 Thread Dilip Kumar
On Fri, Jan 17, 2020 at 11:34 AM Amit Kapila wrote: > > On Fri, Jan 17, 2020 at 11:00 AM Dilip Kumar wrote: > > > > On Fri, Jan 17, 2020 at 10:44 AM Amit Kapila > > wrote: > > > > > > On Fri, Jan 17, 2020 at 9:36 AM Dilip Kumar wrote:

Re: [HACKERS] Block level parallel vacuum

2020-01-16 Thread Dilip Kumar
On Fri, Jan 17, 2020 at 11:39 AM Dilip Kumar wrote: I have performed cost delay testing on the latest test (I have used the same script as attached in [1] and [2]). vacuum_cost_delay = 10 vacuum_cost_limit = 2000 Observation: As we have concluded earlier, the delay time is in sync with the I/O

Re: [Logical Replication] TRAP: FailedAssertion("rel->rd_rel->relreplident == REPLICA_IDENTITY_DEFAULT || rel->rd_rel->relreplident == REPLICA_IDENTITY_FULL || rel->rd_rel->relreplident == REPLICA_IDE

2020-01-09 Thread Dilip Kumar
On Thu, 9 Jan 2020 at 10:43 PM, Andres Freund wrote: > Hi, > > On 2020-01-09 13:17:59 +0530, Dilip Kumar wrote: > > I am able to reproduce the failure, I think the assert in the > > 'logicalrep_write_insert' is not correct. IMHO even if the replica > > identity

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2020-01-09 Thread Dilip Kumar
On Mon, Dec 30, 2019 at 3:43 PM Amit Kapila wrote: > > On Sun, Dec 29, 2019 at 1:34 PM Dilip Kumar wrote: > > > > I have observed some more issues > > > > 1. Currently, In ReorderBufferCommit, it is always expected that > > whenever we get REORDER_BUFFER_C

Re: [Logical Replication] TRAP: FailedAssertion("rel->rd_rel->relreplident == REPLICA_IDENTITY_DEFAULT || rel->rd_rel->relreplident == REPLICA_IDENTITY_FULL || rel->rd_rel->relreplident == REPLICA_IDE

2020-01-09 Thread Dilip Kumar
On Fri, Jan 10, 2020 at 10:31 AM Michael Paquier wrote: > > On Fri, Jan 10, 2020 at 07:30:34AM +0530, Dilip Kumar wrote: > > On Thu, 9 Jan 2020 at 10:43 PM, Andres Freund wrote: > >> There's not much point in having this assert, right? Given that it > >> covers all

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2020-01-09 Thread Dilip Kumar
On Sat, Jan 4, 2020 at 4:07 PM Amit Kapila wrote: > > On Mon, Dec 30, 2019 at 3:11 PM Dilip Kumar wrote: > > > > On Thu, Dec 12, 2019 at 9:44 AM Dilip Kumar wrote: > > > > 0002-Issue-individual-in

Re: parallel vacuum options/syntax

2020-01-02 Thread Dilip Kumar
ved by setting vacuum_cost_delay = 0 which is a valid > argument, but OTOH, providing an option to the user which can make his life > easier is not a bad idea either. I agree that there is already an option to run it without cost delay but there is no harm in providing extra power to t

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2020-01-03 Thread Dilip Kumar
On Sat, Jan 4, 2020 at 10:00 AM Amit Kapila wrote: > > On Sun, Dec 29, 2019 at 1:34 PM Dilip Kumar wrote: > > On Sat, Dec 28, 2019 at 9:33 PM Tomas Vondra > > wrote: > > +static void > > +set_schema_sent_in_streamed_txn(RelationSyncEntry

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2020-01-01 Thread Dilip Kumar
On Mon, Dec 30, 2019 at 3:43 PM Amit Kapila wrote: > > On Sun, Dec 29, 2019 at 1:34 PM Dilip Kumar wrote: > > > > I have observed some more issues > > > > 1. Currently, In ReorderBufferCommit, it is always expected that > > whenever we get REORDER_BUFFER_C

Re: [HACKERS] Block level parallel vacuum

2020-01-02 Thread Dilip Kumar
ith the approach of separate booleans, but later found that > this is a better way as it was easier to set and check the different > parallel options for a parallel vacuum. I think we can go back to > the individual booleans if we want but I am not sure if that is a > better

Re: [HACKERS] Block level parallel vacuum

2020-01-02 Thread Dilip Kumar
e workers for an index, we might need another > variable unless we can allow it for all types of indexes. This was my > point that having multiple variables for the purpose of a parallel > vacuum (for indexes) doesn't sound like a better approach than having > a single uint8 variable. > > > If we don't stick to have only > > booleans the having a ternary value for cleanup might be > > understandable but I'm not sure it's better to have it for only vacuum > > purpose. > > > > If we want to keep the possibility of extending it for other purposes, > then we can probably rename it to amoptions or something like that. > What do you think? I think it makes more sense to just keep it for the purpose of enabling/disabling parallelism in different phases. I am not sure that adding more options (which are not related to enabling parallelism in vacuum phases) to the same variable will make sense. So I think the current name is good for its purpose. -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2019-12-29 Thread Dilip Kumar
On Sat, Dec 28, 2019 at 9:33 PM Tomas Vondra wrote: > > On Tue, Dec 10, 2019 at 10:23:19AM +0530, Dilip Kumar wrote: > >On Tue, Dec 10, 2019 at 9:52 AM Amit Kapila wrote: > >> > >> On Mon, Dec 2, 2019 at 2:02 PM Dilip Kumar wrote: > >> > > >&g

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2020-01-05 Thread Dilip Kumar
On Sat, Jan 4, 2020 at 4:07 PM Amit Kapila wrote: > > On Mon, Dec 30, 2019 at 3:11 PM Dilip Kumar wrote: > > > > On Thu, Dec 12, 2019 at 9:44 AM Dilip Kumar wrote: > > > > > Yesterday, Tomas has posted the latest version of the patch set which > &g

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2020-01-06 Thread Dilip Kumar
On Mon, Jan 6, 2020 at 4:36 PM Amit Kapila wrote: > > On Mon, Jan 6, 2020 at 3:56 PM Dilip Kumar wrote: > > > > On Mon, Jan 6, 2020 at 2:11 PM Amit Kapila wrote: > > > > > > 3. > > > +static void > > > +ReorderBu

Re: adding partitioned tables to publications

2020-01-06 Thread Dilip Kumar
r table on publisher node into a partitioned > table of the same name on subscriber node (as the earlier patches > did), because 0001 doesn't implement tuple routing support that would > be needed to apply such changes. > > Attached updated patches. > I am planning to review this patch. Currently, it does not apply on HEAD, so can you rebase it? -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2020-01-06 Thread Dilip Kumar
On Mon, Jan 6, 2020 at 2:11 PM Amit Kapila wrote: > > On Mon, Jan 6, 2020 at 9:21 AM Dilip Kumar wrote: > > > > On Sat, Jan 4, 2020 at 4:07 PM Amit Kapila wrote: > > > > > > > > > It is better to merge it with the main patch for > > >

Re: bitmaps and correlation

2020-01-06 Thread Dilip Kumar
N is for the perfectly correlated case (csquared=1) */ + pages_fetchedMIN = ceil(indexSelectivity * (double) baserel->pages); + + pages_fetched = pages_fetchedMAX + indexCorrelation*indexCorrelation*(pages_fetchedMIN - pages_fetchedMAX); -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com
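
Reassembled from the C fragment above, the proposed estimate interpolates between the correlated and uncorrelated page counts. Writing c for indexCorrelation, s for indexSelectivity, and N for baserel->pages:

\[
P_{\min} = \lceil\, s \cdot N \,\rceil, \qquad
P = P_{\max} + c^{2}\,\bigl(P_{\min} - P_{\max}\bigr)
\]

so c^2 = 1 (perfect correlation) yields the minimal estimate P_min, while c^2 = 0 falls back to the uncorrelated maximum P_max.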

Re: PATCH: logical_work_mem and logical streaming of large in-progress transactions

2020-01-07 Thread Dilip Kumar
On Mon, Jan 6, 2020 at 4:44 PM Dilip Kumar wrote: > > On Mon, Jan 6, 2020 at 4:36 PM Amit Kapila wrote: > > > > On Mon, Jan 6, 2020 at 3:56 PM Dilip Kumar wrote: > > > > > > On Mon, Jan 6, 2020 at 2:11 PM Amit Kapila > > > w

Re: Fastpath while arranging the changes in LSN order in logical decoding

2020-01-08 Thread Dilip Kumar
On Wed, 8 Jan 2020 at 5:28 PM, Heikki Linnakangas wrote: > On 25/11/2019 05:52, Dilip Kumar wrote: > > In logical decoding, while sending the changes to the output plugin we > > need to arrange them in the LSN order. But, if there is only one > > transaction which is a ver

Re: [HACKERS] Moving relation extension locks out of heavyweight lock manager

2020-03-12 Thread Dilip Kumar
On Wed, Mar 11, 2020 at 2:36 PM Amit Kapila wrote: > > On Tue, Mar 10, 2020 at 6:39 PM Robert Haas wrote: > > > > On Fri, Mar 6, 2020 at 11:27 PM Dilip Kumar wrote: > > > I think instead of the flag we need to keep the counter because we can > > > acqu

Re: [HACKERS] Moving relation extension locks out of heavyweight lock manager

2020-03-09 Thread Dilip Kumar
d the error case. In addition to that, I prepared a WIP patch for handling the PageLock. First, I thought that we could use the same counter for the PageLock and the RelationExtensionLock, because in the assert we just need to check whether we are trying to acquire any other heavyweight lock while holding any

Re: [HACKERS] Moving relation extension locks out of heavyweight lock manager

2020-03-11 Thread Dilip Kumar
On Wed, Mar 11, 2020 at 2:23 PM Amit Kapila wrote: > > On Tue, Mar 10, 2020 at 8:53 AM Dilip Kumar wrote: > > > > Please find the updated patch (summary of the changes) > > - Instead of searching the lock hash table for assert, it maintains a > > counter. > &g

Re: [HACKERS] Moving relation extension locks out of heavyweight lock manager

2020-03-11 Thread Dilip Kumar
On Wed, Mar 11, 2020 at 2:23 PM Amit Kapila wrote: > > On Tue, Mar 10, 2020 at 8:53 AM Dilip Kumar wrote: > > > > Please find the updated patch (summary of the changes) > > - Instead of searching the lock hash table for assert, it maintains a > > counter. > &g

Re: [HACKERS] Moving relation extension locks out of heavyweight lock manager

2020-03-11 Thread Dilip Kumar
will require us to > take a relation extension lock. Now, unlike relation extension lock, > after acquiring page lock, we do acquire another heavy-weight lock > (relation extension lock), but as we never acquire it in reverse > order, this is safe. > > So, as per this analysis, we can add Asserts for relation extension > and page locks which will indicate that they won't participate in > deadlocks. It would be good if someone else can also do independent > analysis and verify my findings. I have also analyzed the usage of the RelationExtensionLock and the PageLock, and my findings are along the same lines. -- Regards, Dilip Kumar EnterpriseDB: http://www.enterprisedb.com
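
That invariant can be encoded as a toy counter model (all names invented here; the real proposal tracks the counts per backend and permits the page-lock-then-extension-lock ordering noted above):

#include <assert.h>

/* Backend-local counter of relation-extension/page locks held.  The
 * deadlock-freedom argument rests on never waiting for another
 * heavyweight lock while one of these is held, so acquisition of
 * other lock types asserts the counter is zero. */
static int nlocks_extension_or_page;

static void acquire_special_lock(void) { nlocks_extension_or_page++; }

static void
release_special_lock(void)
{
    assert(nlocks_extension_or_page > 0);
    nlocks_extension_or_page--;
}

static void
acquire_other_heavyweight_lock(void)
{
    assert(nlocks_extension_or_page == 0);
    /* ... real lock acquisition would happen here ... */
}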
