Re: [HACKERS] Block level parallel vacuum

2020-01-29 Thread Amit Kapila
On Wed, Jan 29, 2020 at 7:20 PM Masahiko Sawada
 wrote:
>
> >
> > Okay, thanks for the review.  Attached is an updated patch. I have
> > additionally run pgindent.  I am planning to commit the attached
> > tomorrow unless I see more comments.
>
> Thank you for committing it!
>

I have marked this patch as committed in CF.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-29 Thread Masahiko Sawada
On Tue, 28 Jan 2020 at 18:47, Amit Kapila  wrote:
>
> On Tue, Jan 28, 2020 at 12:53 PM Mahendra Singh Thalor
>  wrote:
> >
> > > > > > 1.
> > > > > > -P, --parallel=PARALLEL_DEGREE  do parallel vacuum
> > > > > >
> > > > > > I think, "do parallel vacuum" should be modified. Without 
> > > > > > specifying -P, we are still doing parallel vacuum so we can use 
> > > > > > like "degree for parallel vacuum"
> > > > > >
> > > > >
> > > > > I am not sure if 'degree' makes it very clear.  How about "use this
> > > > > many background workers for vacuum, if available"?
> > > >
> > > > If many background workers are available, then we use them
> > > > automatically (parallel vacuum is the default). This option is to put
> > > > a limit on the background workers (a limit on vacuum workers) to be
> > > > used by the vacuum process.
> > > >
> > >
> > > I don't think that the option is just to specify the max limit,
> > > because that is generally controlled by GUC parameters.  This option
> > > allows users to specify the number of workers for the cases where they
> > > have more knowledge about the size/type of indexes.  In some cases, the
> > > user might be able to make a better decision, and that was the reason
> > > we added this option in the first place.
> > >
> > > > So I think we can use "max parallel vacuum workers (by default, based
> > > > on no. of indexes)" or "control parallel vacuum workers".
> > > >
> > >
> > > Hmm, I feel what I suggested is better because of the above explanation.
> >
> > Agreed.
> >
>
> Okay, thanks for the review.  Attached is an updated patch. I have
> additionally run pgindent.  I am planning to commit the attached
> tomorrow unless I see more comments.

Thank you for committing it!

Regards,

-- 
Masahiko Sawada  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Block level parallel vacuum

2020-01-28 Thread Amit Kapila
On Tue, Jan 28, 2020 at 8:56 AM Masahiko Sawada
 wrote:
>
> On Sat, 25 Jan 2020 at 15:41, Amit Kapila  wrote:
> >
> >
> > I have made a few modifications to the patch.
> >
> > 1. I think we should try to block the usage of the 'full' and 'parallel'
> > options in the utility rather than allowing the server to return an
> > error.
> > 2. It is better to handle the 'P' option in getopt_long in the order of
> > its declaration in the long_options array.
> > 3. Added an Assert on the server version while handling the parallel option.
> > 4. Added a few sentences to the documentation.
> >
> > What do you guys think of the attached?
>
> Your changes look good to me.
>

Thanks for the review.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-28 Thread Amit Kapila
On Tue, Jan 28, 2020 at 12:53 PM Mahendra Singh Thalor
 wrote:
>
> > > > > 1.
> > > > > -P, --parallel=PARALLEL_DEGREE  do parallel vacuum
> > > > >
> > > > > I think, "do parallel vacuum" should be modified. Without specifying 
> > > > > -P, we are still doing parallel vacuum so we can use like "degree for 
> > > > > parallel vacuum"
> > > > >
> > > >
> > > > I am not sure if 'degree' makes it very clear.  How about "use this
> > > > many background workers for vacuum, if available"?
> > >
> > > If many background workers are available, then we use them
> > > automatically (parallel vacuum is the default). This option is to put a
> > > limit on the background workers (a limit on vacuum workers) to be used
> > > by the vacuum process.
> > >
> >
> > I don't think that the option is just to specify the max limit,
> > because that is generally controlled by GUC parameters.  This option
> > allows users to specify the number of workers for the cases where they
> > have more knowledge about the size/type of indexes.  In some cases, the
> > user might be able to make a better decision, and that was the reason we
> > added this option in the first place.
> >
> > > So I think we can use "max parallel vacuum workers (by default, based
> > > on no. of indexes)" or "control parallel vacuum workers".
> > >
> >
> > Hmm, I feel what I suggested is better because of the above explanation.
>
> Agreed.
>

Okay, thanks for the review.  Attached is an updated patch. I have
additionally run pgindent.  I am planning to commit the attached
tomorrow unless I see more comments.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


v38-0001-Add-parallel-option-to-vacuumdb-command.patch
Description: Binary data


Re: [HACKERS] Block level parallel vacuum

2020-01-27 Thread Mahendra Singh Thalor
On Tue, 28 Jan 2020 at 12:32, Amit Kapila  wrote:
>
> On Tue, Jan 28, 2020 at 12:04 PM Mahendra Singh Thalor
>  wrote:
> >
> > On Tue, 28 Jan 2020 at 08:14, Amit Kapila  wrote:
> > >
> > > On Tue, Jan 28, 2020 at 2:13 AM Mahendra Singh Thalor
> > >  wrote:
> > > >
> > > > On Sat, 25 Jan 2020 at 12:11, Amit Kapila  
> > > > wrote:
> > > > >
> > > > > On Fri, Jan 24, 2020 at 4:58 PM Mahendra Singh Thalor
> > > > >  wrote:
> > > > > >
> > > > > > On Thu, 23 Jan 2020 at 15:32, Mahendra Singh Thalor 
> > > > > >  wrote:
> > > > > > >
> > > > > > > On Wed, 22 Jan 2020 at 12:48, Masahiko Sawada
> > > > > > >  wrote:
> > > > > > > >
> > > > > > > > Attached the updated version patch.
> > > > > > >
> > > > > > > Thanks Sawada-san for the re-based patch.
> > > > > > >
> > > > > > > I reviewed and tested this patch.  Patch looks good to me.
> > > > > >
> > > > > > As suggested offline by Amit Kapila, I verified the vacuumdb "-P"
> > > > > > option functionality with older versions (<13), and I also tested
> > > > > > vacuumdb by giving the "-j" option with "-P". All are working as
> > > > > > expected, and I didn't find any issues with these options.
> > > > > >
> > > > >
> > > > > I have made a few modifications to the patch.
> > > > >
> > > > > 1. I think we should try to block the usage of the 'full' and
> > > > > 'parallel' options in the utility rather than allowing the server to
> > > > > return an error.
> > > > > 2. It is better to handle the 'P' option in getopt_long in the order
> > > > > of its declaration in the long_options array.
> > > > > 3. Added an Assert on the server version while handling the parallel
> > > > > option.
> > > > > 4. Added a few sentences to the documentation.
> > > > >
> > > > > What do you guys think of the attached?
> > > > >
> > > >
> > > > I took one more review round.  Below are some review comments:
> > > >
> > > > 1.
> > > > -P, --parallel=PARALLEL_DEGREE  do parallel vacuum
> > > >
> > > > I think, "do parallel vacuum" should be modified. Without specifying 
> > > > -P, we are still doing parallel vacuum so we can use like "degree for 
> > > > parallel vacuum"
> > > >
> > >
> > > I am not sure if 'degree' makes it very clear.  How about "use this
> > > many background workers for vacuum, if available"?
> >
> > If many background workers are available, then we use them automatically
> > (parallel vacuum is the default). This option is to put a limit on the
> > background workers (a limit on vacuum workers) to be used by the vacuum
> > process.
> >
>
> I don't think that the option is just to specify the max limit,
> because that is generally controlled by GUC parameters.  This option
> allows users to specify the number of workers for the cases where they
> have more knowledge about the size/type of indexes.  In some cases, the
> user might be able to make a better decision, and that was the reason we
> added this option in the first place.
>
> > So I think we can use "max parallel vacuum workers (by default, based on
> > no. of indexes)" or "control parallel vacuum workers".
> >
>
> Hmm, I feel what I suggested is better because of the above explanation.

Agreed.

-- 
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-27 Thread Amit Kapila
On Tue, Jan 28, 2020 at 12:04 PM Mahendra Singh Thalor
 wrote:
>
> On Tue, 28 Jan 2020 at 08:14, Amit Kapila  wrote:
> >
> > On Tue, Jan 28, 2020 at 2:13 AM Mahendra Singh Thalor
> >  wrote:
> > >
> > > On Sat, 25 Jan 2020 at 12:11, Amit Kapila  wrote:
> > > >
> > > > On Fri, Jan 24, 2020 at 4:58 PM Mahendra Singh Thalor
> > > >  wrote:
> > > > >
> > > > > On Thu, 23 Jan 2020 at 15:32, Mahendra Singh Thalor 
> > > > >  wrote:
> > > > > >
> > > > > > On Wed, 22 Jan 2020 at 12:48, Masahiko Sawada
> > > > > >  wrote:
> > > > > > >
> > > > > > > Attached the updated version patch.
> > > > > >
> > > > > > Thanks Sawada-san for the re-based patch.
> > > > > >
> > > > > > I reviewed and tested this patch.  Patch looks good to me.
> > > > >
> > > > > As suggested offline by Amit Kapila, I verified the vacuumdb "-P"
> > > > > option functionality with older versions (<13), and I also tested
> > > > > vacuumdb by giving the "-j" option with "-P". All are working as
> > > > > expected, and I didn't find any issues with these options.
> > > > >
> > > >
> > > > I have made a few modifications to the patch.
> > > >
> > > > 1. I think we should try to block the usage of the 'full' and
> > > > 'parallel' options in the utility rather than allowing the server to
> > > > return an error.
> > > > 2. It is better to handle the 'P' option in getopt_long in the order
> > > > of its declaration in the long_options array.
> > > > 3. Added an Assert on the server version while handling the parallel
> > > > option.
> > > > 4. Added a few sentences to the documentation.
> > > >
> > > > What do you guys think of the attached?
> > > >
> > >
> > > I took one more review round.  Below are some review comments:
> > >
> > > 1.
> > > -P, --parallel=PARALLEL_DEGREE  do parallel vacuum
> > >
> > > I think, "do parallel vacuum" should be modified. Without specifying -P, 
> > > we are still doing parallel vacuum so we can use like "degree for 
> > > parallel vacuum"
> > >
> >
> > I am not sure if 'degree' makes it very clear.  How about "use this
> > many background workers for vacuum, if available"?
>
> If many background workers are available, then we use them automatically
> (parallel vacuum is the default). This option is to put a limit on the
> background workers (a limit on vacuum workers) to be used by the vacuum
> process.
>

I don't think that the option is just to specify the max limit, because
that is generally controlled by GUC parameters.  This option allows
users to specify the number of workers for the cases where they have more
knowledge about the size/type of indexes.  In some cases, the user
might be able to make a better decision, and that was the reason we
added this option in the first place.

> So I think we can use "max parallel vacuum workers (by default, based on
> no. of indexes)" or "control parallel vacuum workers".
>

Hmm, I feel what I suggested is better because of the above explanation.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-27 Thread Mahendra Singh Thalor
On Tue, 28 Jan 2020 at 08:14, Amit Kapila  wrote:
>
> On Tue, Jan 28, 2020 at 2:13 AM Mahendra Singh Thalor
>  wrote:
> >
> > On Sat, 25 Jan 2020 at 12:11, Amit Kapila  wrote:
> > >
> > > On Fri, Jan 24, 2020 at 4:58 PM Mahendra Singh Thalor
> > >  wrote:
> > > >
> > > > On Thu, 23 Jan 2020 at 15:32, Mahendra Singh Thalor <mahi6...@gmail.com> wrote:
> > > > >
> > > > > On Wed, 22 Jan 2020 at 12:48, Masahiko Sawada
> > > > >  wrote:
> > > > > >
> > > > > > Attached the updated version patch.
> > > > >
> > > > > Thanks Sawada-san for the re-based patch.
> > > > >
> > > > > I reviewed and tested this patch.  Patch looks good to me.
> > > >
> > > > As suggested offline by Amit Kapila, I verified the vacuumdb "-P"
> > > > option functionality with older versions (<13), and I also tested
> > > > vacuumdb by giving the "-j" option with "-P". All are working as
> > > > expected, and I didn't find any issues with these options.
> > > >
> > >
> > > I have made a few modifications to the patch.
> > >
> > > 1. I think we should try to block the usage of the 'full' and 'parallel'
> > > options in the utility rather than allowing the server to return an
> > > error.
> > > 2. It is better to handle the 'P' option in getopt_long in the order of
> > > its declaration in the long_options array.
> > > 3. Added an Assert on the server version while handling the parallel
> > > option.
> > > 4. Added a few sentences to the documentation.
> > >
> > > What do you guys think of the attached?
> > >
> >
> > I took one more review round.  Below are some review comments:
> >
> > 1.
> > -P, --parallel=PARALLEL_DEGREE  do parallel vacuum
> >
> > I think, "do parallel vacuum" should be modified. Without specifying
-P, we are still doing parallel vacuum so we can use like "degree for
parallel vacuum"
> >
>
> I am not sure if 'degree' makes it very clear.  How about "use this
> many background workers for vacuum, if available"?

If many background workers are available, then we use them automatically
(parallel vacuum is the default). This option is to put a limit on the
background workers (a limit on vacuum workers) to be used by the vacuum
process.  So I think we can use "max parallel vacuum workers (by default,
based on no. of indexes)" or "control parallel vacuum workers".

>
> > 2. The error messages are inconsistent for the FULL and parallel options:
> > Error for normal vacuum:
> > ERROR:  cannot specify both FULL and PARALLEL options
> >
> > Error for vacuumdb:
> > error: cannot use the "parallel" option when performing full
> >
> > I think, in both places, we should use the 2nd error message, as it
> > gives more clarity.
> >
>
> Which message are you advocating here: "cannot use the "parallel"
> option when performing full" or "cannot specify both FULL and PARALLEL
> options"?  The message used in this patch is mainly because of

I mean that "*cannot use the "parallel" option when performing full"*
should be used in both the places.

> consistency with nearby messages in the vacuumdb utility. If you are
> advocating to change "cannot specify both FULL and PARALLEL options"
> to match what we are using in this patch, then it is better to do that
> separately and maybe ask for more opinions.  I think I understand your
> desire to use the same message at both places, but it seems to me the
> messages used in both places are to maintain consistency with the
> nearby code or the message used for a similar purpose.

Okay. I agree with your points. Let's keep it as is.

-- 
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Block level parallel vacuum

2020-01-27 Thread Masahiko Sawada
On Sat, 25 Jan 2020 at 15:41, Amit Kapila  wrote:
>
> On Fri, Jan 24, 2020 at 4:58 PM Mahendra Singh Thalor
>  wrote:
> >
> > On Thu, 23 Jan 2020 at 15:32, Mahendra Singh Thalor  
> > wrote:
> > >
> > > On Wed, 22 Jan 2020 at 12:48, Masahiko Sawada
> > >  wrote:
> > > >
> > > > Attached the updated version patch.
> > >
> > > Thanks Sawada-san for the re-based patch.
> > >
> > > I reviewed and tested this patch.  Patch looks good to me.
> >
> > As suggested offline by Amit Kapila, I verified the vacuumdb "-P" option
> > functionality with older versions (<13), and I also tested vacuumdb by
> > giving the "-j" option with "-P". All are working as expected, and I
> > didn't find any issues with these options.
> >
>
> I have made a few modifications to the patch.
>
> 1. I think we should try to block the usage of the 'full' and 'parallel'
> options in the utility rather than allowing the server to return an
> error.
> 2. It is better to handle the 'P' option in getopt_long in the order of
> its declaration in the long_options array.
> 3. Added an Assert on the server version while handling the parallel option.
> 4. Added a few sentences to the documentation.
>
> What do you guys think of the attached?

Your changes look good to me.

Regards,

-- 
Masahiko Sawada  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Block level parallel vacuum

2020-01-27 Thread Amit Kapila
On Tue, Jan 28, 2020 at 2:13 AM Mahendra Singh Thalor
 wrote:
>
> On Sat, 25 Jan 2020 at 12:11, Amit Kapila  wrote:
> >
> > On Fri, Jan 24, 2020 at 4:58 PM Mahendra Singh Thalor
> >  wrote:
> > >
> > > On Thu, 23 Jan 2020 at 15:32, Mahendra Singh Thalor  
> > > wrote:
> > > >
> > > > On Wed, 22 Jan 2020 at 12:48, Masahiko Sawada
> > > >  wrote:
> > > > >
> > > > > Attached the updated version patch.
> > > >
> > > > Thanks Sawada-san for the re-based patch.
> > > >
> > > > I reviewed and tested this patch.  Patch looks good to me.
> > >
> > > As suggested offline by Amit Kapila, I verified the vacuumdb "-P" option
> > > functionality with older versions (<13), and I also tested vacuumdb by
> > > giving the "-j" option with "-P". All are working as expected, and I
> > > didn't find any issues with these options.
> > >
> >
> > I have made a few modifications to the patch.
> >
> > 1. I think we should try to block the usage of the 'full' and 'parallel'
> > options in the utility rather than allowing the server to return an
> > error.
> > 2. It is better to handle the 'P' option in getopt_long in the order of
> > its declaration in the long_options array.
> > 3. Added an Assert on the server version while handling the parallel option.
> > 4. Added a few sentences to the documentation.
> >
> > What do you guys think of the attached?
> >
>
> I took one more review round.  Below are some review comments:
>
> 1.
> -P, --parallel=PARALLEL_DEGREE  do parallel vacuum
>
> I think, "do parallel vacuum" should be modified. Without specifying -P, we 
> are still doing parallel vacuum so we can use like "degree for parallel 
> vacuum"
>

I am not sure if 'degree' makes it very clear.  How about "use this
many background workers for vacuum, if available"?

> 2. The error messages are inconsistent for the FULL and parallel options:
> Error for normal vacuum:
> ERROR:  cannot specify both FULL and PARALLEL options
>
> Error for vacuumdb:
> error: cannot use the "parallel" option when performing full
>
> I think, in both places, we should use the 2nd error message, as it gives
> more clarity.
>

Which message are you advocating here: "cannot use the "parallel"
option when performing full" or "cannot specify both FULL and PARALLEL
options"?  The message used in this patch is mainly because of
consistency with nearby messages in the vacuumdb utility. If you are
advocating to change "cannot specify both FULL and PARALLEL options"
to match what we are using in this patch, then it is better to do that
separately and maybe ask for more opinions.  I think I understand your
desire to use the same message at both places, but it seems to me the
messages used in both places are to maintain consistency with the
nearby code or the message used for a similar purpose.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-27 Thread Mahendra Singh Thalor
On Sat, 25 Jan 2020 at 12:11, Amit Kapila  wrote:
>
> On Fri, Jan 24, 2020 at 4:58 PM Mahendra Singh Thalor
>  wrote:
> >
> > On Thu, 23 Jan 2020 at 15:32, Mahendra Singh Thalor  wrote:
> > >
> > > On Wed, 22 Jan 2020 at 12:48, Masahiko Sawada
> > >  wrote:
> > > >
> > > > Attached the updated version patch.
> > >
> > > Thanks Sawada-san for the re-based patch.
> > >
> > > I reviewed and tested this patch.  Patch looks good to me.
> >
> > As suggested offline by Amit Kapila, I verified the vacuumdb "-P" option
> > functionality with older versions (<13), and I also tested vacuumdb by
> > giving the "-j" option with "-P". All are working as expected, and I
> > didn't find any issues with these options.
> >
>
> I have made a few modifications to the patch.
>
> 1. I think we should try to block the usage of the 'full' and 'parallel'
> options in the utility rather than allowing the server to return an
> error.
> 2. It is better to handle the 'P' option in getopt_long in the order of
> its declaration in the long_options array.
> 3. Added an Assert on the server version while handling the parallel option.
> 4. Added a few sentences to the documentation.
>
> What do you guys think of the attached?
>

I took one more review round.  Below are some review comments:

1.
-P, --parallel=PARALLEL_DEGREE  do parallel vacuum

I think, "do parallel vacuum" should be modified. Without specifying -P, we
are still doing parallel vacuum so we can use like "degree for parallel
vacuum"

2. The error messages are inconsistent for the FULL and parallel options:
Error for normal vacuum:
ERROR:  cannot specify both FULL and PARALLEL options

Error for vacuumdb:
error: cannot use the "parallel" option when performing full

I think, in both places, we should use the 2nd error message, as it gives
more clarity.

-- 
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Block level parallel vacuum

2020-01-24 Thread Amit Kapila
On Fri, Jan 24, 2020 at 4:58 PM Mahendra Singh Thalor
 wrote:
>
> On Thu, 23 Jan 2020 at 15:32, Mahendra Singh Thalor  
> wrote:
> >
> > On Wed, 22 Jan 2020 at 12:48, Masahiko Sawada
> >  wrote:
> > >
> > > Attached the updated version patch.
> >
> > Thanks Sawada-san for the re-based patch.
> >
> > I reviewed and tested this patch.  Patch looks good to me.
>
> As suggested offline by Amit Kapila, I verified the vacuumdb "-P" option
> functionality with older versions (<13), and I also tested vacuumdb by
> giving the "-j" option with "-P". All are working as expected, and I
> didn't find any issues with these options.
>

I have made a few modifications to the patch.

1. I think we should try to block the usage of the 'full' and 'parallel'
options in the utility rather than allowing the server to return an
error.
2. It is better to handle the 'P' option in getopt_long in the order of
its declaration in the long_options array.
3. Added an Assert on the server version while handling the parallel option.
4. Added a few sentences to the documentation.

What do you guys think of the attached?
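For readers skimming the thread, here is a minimal sketch of what points 1
and 3 above could look like in the utility. The struct and field names
(vacuumingOptions, parallel_workers, with -1 meaning "-P not given") and
the message texts are assumptions for illustration, not the committed code:

/* Sketch only: reject FULL + PARALLEL client-side (point 1) and assert
 * that pre-v13 servers never see the option (point 3). */
static void
check_parallel_option(const vacuumingOptions *vacopts, PGconn *conn)
{
	/* Block the conflicting combination in the client rather than
	 * letting the server raise the error. */
	if (vacopts->full && vacopts->parallel_workers >= 0)
	{
		pg_log_error("cannot use the \"%s\" option when performing full vacuum",
					 "parallel");
		exit(1);
	}

	/* The PARALLEL option only exists on v13 and later servers. */
	Assert(vacopts->parallel_workers < 0 ||
		   PQserverVersion(conn) >= 130000);
}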

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


v37-0001-Add-parallel-option-to-vacuumdb-command.patch
Description: Binary data


Re: [HACKERS] Block level parallel vacuum

2020-01-24 Thread Mahendra Singh Thalor
On Thu, 23 Jan 2020 at 15:32, Mahendra Singh Thalor  wrote:
>
> On Wed, 22 Jan 2020 at 12:48, Masahiko Sawada
>  wrote:
> >
> > On Wed, 22 Jan 2020 at 11:23, Amit Kapila  wrote:
> > >
> > > On Wed, Jan 22, 2020 at 7:14 AM Masahiko Sawada
> > >  wrote:
> > > >
> > > > Thank you for updating the patch. Yeah, MAXDEADTUPLES is better than
> > > > what I did in the previous version of the patch.
> > > >
> > >
> > > Would you like to resubmit your vacuumdb utility patch for this
> > > enhancement?  I see some old version of it and it seems to me that you
> > > need to update that patch.
> > >
> > > + if (optarg != NULL)
> > > + {
> > > + parallel_workers = atoi(optarg);
> > > + if (parallel_workers <= 0)
> > > + {
> > > + pg_log_error("number of parallel workers must be at least 1");
> > > + exit(1);
> > > + }
> > > + }
> > >
> > > This will no longer be true.
> >
> > Attached the updated version patch.
> >
>
> Thanks Sawada-san for the re-based patch.
>
> I reviewed and tested this patch.  Patch looks good to me.

As suggested offline by Amit Kapila, I verified the vacuumdb "-P" option
functionality with older versions (<13), and I also tested vacuumdb by
giving the "-j" option with "-P". All are working as expected, and I
didn't find any issues with these options.

-- 
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-23 Thread Mahendra Singh Thalor
On Wed, 22 Jan 2020 at 12:48, Masahiko Sawada
 wrote:
>
> On Wed, 22 Jan 2020 at 11:23, Amit Kapila  wrote:
> >
> > On Wed, Jan 22, 2020 at 7:14 AM Masahiko Sawada
> >  wrote:
> > >
> > > Thank you for updating the patch. Yeah, MAXDEADTUPLES is better than
> > > what I did in the previous version of the patch.
> > >
> >
> > Would you like to resubmit your vacuumdb utility patch for this
> > enhancement?  I see some old version of it and it seems to me that you
> > need to update that patch.
> >
> > + if (optarg != NULL)
> > + {
> > + parallel_workers = atoi(optarg);
> > + if (parallel_workers <= 0)
> > + {
> > + pg_log_error("number of parallel workers must be at least 1");
> > + exit(1);
> > + }
> > + }
> >
> > This will no longer be true.
>
> Attached the updated version patch.
>

Thanks Sawada-san for the re-based patch.

I reviewed and tested this patch.  Patch looks good to me.

-- 
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-21 Thread Masahiko Sawada
On Wed, 22 Jan 2020 at 11:23, Amit Kapila  wrote:
>
> On Wed, Jan 22, 2020 at 7:14 AM Masahiko Sawada
>  wrote:
> >
> > Thank you for updating the patch. Yeah, MAXDEADTUPLES is better than
> > what I did in the previous version of the patch.
> >
>
> Would you like to resubmit your vacuumdb utility patch for this
> enhancement?  I see some old version of it and it seems to me that you
> need to update that patch.
>
> + if (optarg != NULL)
> + {
> + parallel_workers = atoi(optarg);
> + if (parallel_workers <= 0)
> + {
> + pg_log_error("number of parallel workers must be at least 1");
> + exit(1);
> + }
> + }
>
> This will no longer be true.

Attached the updated version patch.

Regards,

-- 
Masahiko Sawada  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


v36-0001-Add-paralell-P-option-to-vacuumdb-command.patch
Description: Binary data


Re: [HACKERS] Block level parallel vacuum

2020-01-21 Thread Amit Kapila
On Wed, Jan 22, 2020 at 7:14 AM Masahiko Sawada
 wrote:
>
> Thank you for updating the patch. Yeah, MAXDEADTUPLES is better than
> what I did in the previous version of the patch.
>

Would you like to resubmit your vacuumdb utility patch for this
enhancement?  I see some old version of it and it seems to me that you
need to update that patch.

+ if (optarg != NULL)
+ {
+ parallel_workers = atoi(optarg);
+ if (parallel_workers <= 0)
+ {
+ pg_log_error("number of parallel workers must be at least 1");
+ exit(1);
+ }
+ }

This will no longer be true.
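For context: the check quoted above rejected a degree of zero, but the
server-side VACUUM syntax accepts PARALLEL 0 as an explicit way to disable
parallelism, so only negative input needs to be rejected now. A sketch of
the adjusted parsing (variable names and message text are assumptions, not
the committed code):

	case 'P':			/* --parallel=PARALLEL_WORKERS */
		parallel_workers = atoi(optarg);
		if (parallel_workers < 0)
		{
			/* 0 is now valid: it disables parallel vacuum */
			pg_log_error("parallel workers for vacuum must be greater than or equal to zero");
			exit(1);
		}
		break;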

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-21 Thread Amit Kapila
On Wed, Jan 22, 2020 at 7:14 AM Masahiko Sawada
 wrote:
>
> On Tue, 21 Jan 2020 at 18:16, Amit Kapila  wrote:
> >
> >
> > I have reproduced the issue by defining MaxAllocSize as 1024 and then,
> > during debugging, skipping the check related to LAZY_ALLOC_TUPLES.
> > After applying the patch, the problem is fixed for me.  I have slightly
> > modified your patch to define the macros along the lines of the existing
> > macros TXID_SNAPSHOT_SIZE and TXID_SNAPSHOT_MAX_NXIP.  What do you think
> > about it?
>
> Thank you for updating the patch. Yeah, MAXDEADTUPLES is better than
> what I did in the previous version of the patch.
>

Pushed after making the change suggested by Dilip.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-21 Thread Masahiko Sawada
On Tue, 21 Jan 2020 at 18:16, Amit Kapila  wrote:
>
> On Tue, Jan 21, 2020 at 12:51 PM Masahiko Sawada
>  wrote:
> >
> > On Tue, 21 Jan 2020 at 16:13, Amit Kapila  wrote:
> > >
> > > SizeOfLVDeadTuplesHeader is not defined by the patch.  Do you think it
> > > makes sense to add a comment here about the calculation?
> >
> > Oops, it should be SizeOfLVDeadTuples. Attached updated version.
> >
> > I defined two macros: SizeOfLVDeadTuples is the size of the LVDeadTuples
> > struct, and SizeOfDeadTuples is the size including the LVDeadTuples
> > struct and the dead tuples.
> >
>
> I have reproduced the issue by defining MaxAllocSize as 1024 and then,
> during debugging, skipping the check related to LAZY_ALLOC_TUPLES.
> After applying the patch, the problem is fixed for me.  I have slightly
> modified your patch to define the macros along the lines of the existing
> macros TXID_SNAPSHOT_SIZE and TXID_SNAPSHOT_MAX_NXIP.  What do you think
> about it?

Thank you for updating the patch. Yeah, MAXDEADTUPLES is better than
what I did in the previous version of the patch.

Regards,

--
Masahiko Sawada  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Block level parallel vacuum

2020-01-21 Thread Dilip Kumar
On Tue, Jan 21, 2020 at 2:46 PM Amit Kapila  wrote:
>
> On Tue, Jan 21, 2020 at 12:51 PM Masahiko Sawada
>  wrote:
> >
> > On Tue, 21 Jan 2020 at 16:13, Amit Kapila  wrote:
> > >
> > > SizeOfLVDeadTuplesHeader is not defined by the patch.  Do you think it
> > > makes sense to add a comment here about the calculation?
> >
> > Oops, it should be SizeOfLVDeadTuples. Attached updated version.
> >
> > I defined two macros: SizeOfLVDeadTuples is the size of the LVDeadTuples
> > struct, and SizeOfDeadTuples is the size including the LVDeadTuples
> > struct and the dead tuples.
> >
>
> I have reproduced the issue by defining MaxAllocSize as 1024 and then,
> during debugging, skipping the check related to LAZY_ALLOC_TUPLES.
> After applying the patch, the problem is fixed for me.  I have slightly
> modified your patch to define the macros along the lines of the existing
> macros TXID_SNAPSHOT_SIZE and TXID_SNAPSHOT_MAX_NXIP.  What do you think
> about it?
>
> Andres, see if you get a chance to run the test again with the
> attached patch; otherwise, I will commit it tomorrow morning.
>
Patch looks fine to me, except we had better use parentheses for the
variable passed to the macro.

+#define MAXDEADTUPLES(max_size) \
+ ((max_size - offsetof(LVDeadTuples, itemptrs)) / sizeof(ItemPointerData))
change to -> (((max_size) - offsetof(LVDeadTuples, itemptrs)) /
sizeof(ItemPointerData))

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-21 Thread Amit Kapila
On Tue, Jan 21, 2020 at 12:51 PM Masahiko Sawada
 wrote:
>
> On Tue, 21 Jan 2020 at 16:13, Amit Kapila  wrote:
> >
> > SizeOfLVDeadTuplesHeader is not defined by the patch.  Do you think it
> > makes sense to add a comment here about the calculation?
>
> Oops, it should be SizeOfLVDeadTuples. Attached updated version.
>
> I defined two macros: SizeOfLVDeadTuples is the size of the LVDeadTuples
> struct, and SizeOfDeadTuples is the size including the LVDeadTuples struct
> and the dead tuples.
>

I have reproduced the issue by defining MaxAllocSize as 1024 and then,
during debugging, skipping the check related to LAZY_ALLOC_TUPLES.
After applying the patch, the problem is fixed for me.  I have slightly
modified your patch to define the macros along the lines of the existing
macros TXID_SNAPSHOT_SIZE and TXID_SNAPSHOT_MAX_NXIP.  What do you think
about it?

Andres, see if you get a chance to run the test again with the
attached patch; otherwise, I will commit it tomorrow morning.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


fix_max_dead_tuples_v3.patch
Description: Binary data


Re: [HACKERS] Block level parallel vacuum

2020-01-20 Thread Masahiko Sawada
On Tue, 21 Jan 2020 at 16:13, Amit Kapila  wrote:
>
> On Tue, Jan 21, 2020 at 12:11 PM Masahiko Sawada
>  wrote:
> >
> > On Tue, 21 Jan 2020 at 15:35, Amit Kapila  wrote:
> > >
> > > On Tue, Jan 21, 2020 at 11:30 AM Andres Freund  wrote:
> > > >
> > > > Hi,
> > > >
> > > > On 2020-01-20 09:09:35 +0530, Amit Kapila wrote:
> > > > > Pushed, after fixing these two comments.
> > > >
> > > > When attempting to vacuum a large table I just got:
> > > >
> > > > postgres=# vacuum FREEZE ;
> > > > ERROR:  invalid memory alloc request size 1073741828
> > > >
> > > > #0  palloc (size=1073741828) at 
> > > > /mnt/tools/src/postgresql/src/backend/utils/mmgr/mcxt.c:959
> > > > #1  0x56452cc45cac in lazy_space_alloc (vacrelstats=0x56452e5ab0e8, 
> > > > vacrelstats=0x56452e5ab0e8, relblocks=24686152)
> > > > at 
> > > > /mnt/tools/src/postgresql/src/backend/access/heap/vacuumlazy.c:2741
> > > > #2  lazy_scan_heap (aggressive=true, nindexes=1, Irel=0x56452e5ab1c8, 
> > > > vacrelstats=, params=0x7ffdf8c00290, onerel= > > > out>)
> > > > at 
> > > > /mnt/tools/src/postgresql/src/backend/access/heap/vacuumlazy.c:786
> > > > #3  heap_vacuum_rel (onerel=, params=0x7ffdf8c00290, 
> > > > bstrategy=)
> > > > at 
> > > > /mnt/tools/src/postgresql/src/backend/access/heap/vacuumlazy.c:472
> > > > #4  0x56452cd8b42c in table_relation_vacuum (bstrategy= > > > out>, params=0x7ffdf8c00290, rel=0x7fbcdff1e248)
> > > > at /mnt/tools/src/postgresql/src/include/access/tableam.h:1450
> > > > #5  vacuum_rel (relid=16454, relation=, 
> > > > params=params@entry=0x7ffdf8c00290) at 
> > > > /mnt/tools/src/postgresql/src/backend/commands/vacuum.c:1882
> > > >
> > > > Looks to me that the calculation moved into compute_max_dead_tuples()
> > > > continues to use an allocation ceiling
> > > > maxtuples = Min(maxtuples, MaxAllocSize / 
> > > > sizeof(ItemPointerData));
> > > > but the actual allocation now is
> > > >
> > > > #define SizeOfLVDeadTuples(cnt) \
> > > > add_size((offsetof(LVDeadTuples, itemptrs)), \
> > > >  mul_size(sizeof(ItemPointerData), cnt))
> > > >
> > > > i.e. the overhead of offsetof(LVDeadTuples, itemptrs) is not taken into
> > > > account.
> > > >
> > >
> > > Right, I think we need to take that into account in both places in
> > > compute_max_dead_tuples():
> > >
> > > maxtuples = (vac_work_mem * 1024L) / sizeof(ItemPointerData);
> > > ..
> > > maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));
> > >
> > >
> >
> > Agreed. The attached patch should fix this issue.
> >
>
> if (useindex)
>   {
> - maxtuples = (vac_work_mem * 1024L) / sizeof(ItemPointerData);
> + maxtuples = ((vac_work_mem * 1024L) - SizeOfLVDeadTuplesHeader) /
> sizeof(ItemPointerData);
>
> SizeOfLVDeadTuplesHeader is not defined by the patch.  Do you think it
> makes sense to add a comment here about the calculation?

Oops, it should be SizeOfLVDeadTuples. Attached updated version.

I defined two macros: SizeOfLVDeadTuples is the size of the LVDeadTuples
struct, and SizeOfDeadTuples is the size including the LVDeadTuples struct
and the dead tuples.
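For readers without the attachment, a plausible shape of the fix based on
the description above (the authoritative definitions are in the attached
patch, not reproduced here):

#define SizeOfLVDeadTuples \
	(offsetof(LVDeadTuples, itemptrs))
#define SizeOfDeadTuples(cnt) \
	(SizeOfLVDeadTuples + (cnt) * sizeof(ItemPointerData))

/* Both ceilings in compute_max_dead_tuples() then leave room for the
 * struct header, so the eventual palloc stays within MaxAllocSize: */
maxtuples = ((vac_work_mem * 1024L) - SizeOfLVDeadTuples) / sizeof(ItemPointerData);
maxtuples = Min(maxtuples,
				(MaxAllocSize - SizeOfLVDeadTuples) / sizeof(ItemPointerData));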

Regards,

-- 
Masahiko Sawada  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


fix_max_dead_tuples_v2.patch
Description: Binary data


Re: [HACKERS] Block level parallel vacuum

2020-01-20 Thread Amit Kapila
On Tue, Jan 21, 2020 at 12:11 PM Masahiko Sawada
 wrote:
>
> On Tue, 21 Jan 2020 at 15:35, Amit Kapila  wrote:
> >
> > On Tue, Jan 21, 2020 at 11:30 AM Andres Freund  wrote:
> > >
> > > Hi,
> > >
> > > On 2020-01-20 09:09:35 +0530, Amit Kapila wrote:
> > > > Pushed, after fixing these two comments.
> > >
> > > When attempting to vacuum a large table I just got:
> > >
> > > postgres=# vacuum FREEZE ;
> > > ERROR:  invalid memory alloc request size 1073741828
> > >
> > > #0  palloc (size=1073741828) at 
> > > /mnt/tools/src/postgresql/src/backend/utils/mmgr/mcxt.c:959
> > > #1  0x56452cc45cac in lazy_space_alloc (vacrelstats=0x56452e5ab0e8, 
> > > vacrelstats=0x56452e5ab0e8, relblocks=24686152)
> > > at /mnt/tools/src/postgresql/src/backend/access/heap/vacuumlazy.c:2741
> > > #2  lazy_scan_heap (aggressive=true, nindexes=1, Irel=0x56452e5ab1c8, 
> > > vacrelstats=, params=0x7ffdf8c00290, onerel= > > out>)
> > > at /mnt/tools/src/postgresql/src/backend/access/heap/vacuumlazy.c:786
> > > #3  heap_vacuum_rel (onerel=, params=0x7ffdf8c00290, 
> > > bstrategy=)
> > > at /mnt/tools/src/postgresql/src/backend/access/heap/vacuumlazy.c:472
> > > #4  0x56452cd8b42c in table_relation_vacuum (bstrategy= > > out>, params=0x7ffdf8c00290, rel=0x7fbcdff1e248)
> > > at /mnt/tools/src/postgresql/src/include/access/tableam.h:1450
> > > #5  vacuum_rel (relid=16454, relation=, 
> > > params=params@entry=0x7ffdf8c00290) at 
> > > /mnt/tools/src/postgresql/src/backend/commands/vacuum.c:1882
> > >
> > > Looks to me that the calculation moved into compute_max_dead_tuples()
> > > continues to use an allocation ceiling
> > > maxtuples = Min(maxtuples, MaxAllocSize / 
> > > sizeof(ItemPointerData));
> > > but the actual allocation now is
> > >
> > > #define SizeOfLVDeadTuples(cnt) \
> > > add_size((offsetof(LVDeadTuples, itemptrs)), \
> > >  mul_size(sizeof(ItemPointerData), cnt))
> > >
> > > i.e. the overhead of offsetof(LVDeadTuples, itemptrs) is not taken into
> > > account.
> > >
> >
> > Right, I think we need to take that into account in both places in
> > compute_max_dead_tuples():
> >
> > maxtuples = (vac_work_mem * 1024L) / sizeof(ItemPointerData);
> > ..
> > maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));
> >
> >
>
> Agreed. The attached patch should fix this issue.
>

if (useindex)
  {
- maxtuples = (vac_work_mem * 1024L) / sizeof(ItemPointerData);
+ maxtuples = ((vac_work_mem * 1024L) - SizeOfLVDeadTuplesHeader) /
sizeof(ItemPointerData);

SizeOfLVDeadTuplesHeader is not defined by the patch.  Do you think it
makes sense to add a comment here about the calculation?

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-20 Thread Masahiko Sawada
On Tue, 21 Jan 2020 at 15:35, Amit Kapila  wrote:
>
> On Tue, Jan 21, 2020 at 11:30 AM Andres Freund  wrote:
> >
> > Hi,
> >
> > On 2020-01-20 09:09:35 +0530, Amit Kapila wrote:
> > > Pushed, after fixing these two comments.
> >
> > When attempting to vacuum a large table I just got:
> >
> > postgres=# vacuum FREEZE ;
> > ERROR:  invalid memory alloc request size 1073741828
> >
> > #0  palloc (size=1073741828) at 
> > /mnt/tools/src/postgresql/src/backend/utils/mmgr/mcxt.c:959
> > #1  0x56452cc45cac in lazy_space_alloc (vacrelstats=0x56452e5ab0e8, 
> > vacrelstats=0x56452e5ab0e8, relblocks=24686152)
> > at /mnt/tools/src/postgresql/src/backend/access/heap/vacuumlazy.c:2741
> > #2  lazy_scan_heap (aggressive=true, nindexes=1, Irel=0x56452e5ab1c8, 
> > vacrelstats=, params=0x7ffdf8c00290, onerel=)
> > at /mnt/tools/src/postgresql/src/backend/access/heap/vacuumlazy.c:786
> > #3  heap_vacuum_rel (onerel=, params=0x7ffdf8c00290, 
> > bstrategy=)
> > at /mnt/tools/src/postgresql/src/backend/access/heap/vacuumlazy.c:472
> > #4  0x56452cd8b42c in table_relation_vacuum (bstrategy=, 
> > params=0x7ffdf8c00290, rel=0x7fbcdff1e248)
> > at /mnt/tools/src/postgresql/src/include/access/tableam.h:1450
> > #5  vacuum_rel (relid=16454, relation=, 
> > params=params@entry=0x7ffdf8c00290) at 
> > /mnt/tools/src/postgresql/src/backend/commands/vacuum.c:1882
> >
> > Looks to me that the calculation moved into compute_max_dead_tuples()
> > continues to use an allocation ceiling
> > maxtuples = Min(maxtuples, MaxAllocSize / 
> > sizeof(ItemPointerData));
> > but the actual allocation now is
> >
> > #define SizeOfLVDeadTuples(cnt) \
> > add_size((offsetof(LVDeadTuples, itemptrs)), \
> >  mul_size(sizeof(ItemPointerData), cnt))
> >
> > i.e. the overhead of offsetof(LVDeadTuples, itemptrs) is not taken into
> > account.
> >
>
> Right, I think we need to take that into account in both places in
> compute_max_dead_tuples():
>
> maxtuples = (vac_work_mem * 1024L) / sizeof(ItemPointerData);
> ..
> maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));
>
>

Agreed. The attached patch should fix this issue.

Regards,

-- 
Masahiko Sawada  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


fix_max_dead_tuples.patch
Description: Binary data


Re: [HACKERS] Block level parallel vacuum

2020-01-20 Thread Amit Kapila
On Tue, Jan 21, 2020 at 11:30 AM Andres Freund  wrote:
>
> Hi,
>
> On 2020-01-20 09:09:35 +0530, Amit Kapila wrote:
> > Pushed, after fixing these two comments.
>
> When attempting to vacuum a large table I just got:
>
> postgres=# vacuum FREEZE ;
> ERROR:  invalid memory alloc request size 1073741828
>
> #0  palloc (size=1073741828) at 
> /mnt/tools/src/postgresql/src/backend/utils/mmgr/mcxt.c:959
> #1  0x56452cc45cac in lazy_space_alloc (vacrelstats=0x56452e5ab0e8, 
> vacrelstats=0x56452e5ab0e8, relblocks=24686152)
> at /mnt/tools/src/postgresql/src/backend/access/heap/vacuumlazy.c:2741
> #2  lazy_scan_heap (aggressive=true, nindexes=1, Irel=0x56452e5ab1c8, 
> vacrelstats=, params=0x7ffdf8c00290, onerel=)
> at /mnt/tools/src/postgresql/src/backend/access/heap/vacuumlazy.c:786
> #3  heap_vacuum_rel (onerel=, params=0x7ffdf8c00290, 
> bstrategy=)
> at /mnt/tools/src/postgresql/src/backend/access/heap/vacuumlazy.c:472
> #4  0x56452cd8b42c in table_relation_vacuum (bstrategy=, 
> params=0x7ffdf8c00290, rel=0x7fbcdff1e248)
> at /mnt/tools/src/postgresql/src/include/access/tableam.h:1450
> #5  vacuum_rel (relid=16454, relation=, 
> params=params@entry=0x7ffdf8c00290) at 
> /mnt/tools/src/postgresql/src/backend/commands/vacuum.c:1882
>
> Looks to me that the calculation moved into compute_max_dead_tuples()
> continues to use an allocation ceiling
> maxtuples = Min(maxtuples, MaxAllocSize / 
> sizeof(ItemPointerData));
> but the actual allocation now is
>
> #define SizeOfLVDeadTuples(cnt) \
> add_size((offsetof(LVDeadTuples, itemptrs)), \
>  mul_size(sizeof(ItemPointerData), cnt))
>
> i.e. the overhead of offsetof(LVDeadTuples, itemptrs) is not taken into
> account.
>

Right, I think we need to take that into account in both places in
compute_max_dead_tuples():

maxtuples = (vac_work_mem * 1024L) / sizeof(ItemPointerData);
..
maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-20 Thread Andres Freund
Hi,

On 2020-01-20 09:09:35 +0530, Amit Kapila wrote:
> Pushed, after fixing these two comments.

When attempting to vacuum a large table I just got:

postgres=# vacuum FREEZE ;
ERROR:  invalid memory alloc request size 1073741828

#0  palloc (size=1073741828) at 
/mnt/tools/src/postgresql/src/backend/utils/mmgr/mcxt.c:959
#1  0x56452cc45cac in lazy_space_alloc (vacrelstats=0x56452e5ab0e8, 
vacrelstats=0x56452e5ab0e8, relblocks=24686152)
at /mnt/tools/src/postgresql/src/backend/access/heap/vacuumlazy.c:2741
#2  lazy_scan_heap (aggressive=true, nindexes=1, Irel=0x56452e5ab1c8, 
vacrelstats=, params=0x7ffdf8c00290, onerel=)
at /mnt/tools/src/postgresql/src/backend/access/heap/vacuumlazy.c:786
#3  heap_vacuum_rel (onerel=, params=0x7ffdf8c00290, 
bstrategy=)
at /mnt/tools/src/postgresql/src/backend/access/heap/vacuumlazy.c:472
#4  0x56452cd8b42c in table_relation_vacuum (bstrategy=, 
params=0x7ffdf8c00290, rel=0x7fbcdff1e248)
at /mnt/tools/src/postgresql/src/include/access/tableam.h:1450
#5  vacuum_rel (relid=16454, relation=, 
params=params@entry=0x7ffdf8c00290) at 
/mnt/tools/src/postgresql/src/backend/commands/vacuum.c:1882

Looks to me that the calculation moved into compute_max_dead_tuples()
continues to use an allocation ceiling
maxtuples = Min(maxtuples, MaxAllocSize / 
sizeof(ItemPointerData));
but the actual allocation now is

#define SizeOfLVDeadTuples(cnt) \
add_size((offsetof(LVDeadTuples, itemptrs)), \
 mul_size(sizeof(ItemPointerData), cnt))

i.e. the overhead of offsetof(LVDeadTuples, itemptrs) is not taken into
account.
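The reported number is consistent with that: with sizeof(ItemPointerData)
being 6 bytes, the ceiling permits MaxAllocSize / 6 = 178956970 tuples,
i.e. 1073741820 bytes of item pointers; assuming the LVDeadTuples header
(two ints) occupies 8 bytes, SizeOfLVDeadTuples(178956970) comes to
1073741820 + 8 = 1073741828 bytes, exactly the failing request size above
and just past MaxAllocSize (1073741823).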

Regards,

Andres




Re: [HACKERS] Block level parallel vacuum

2020-01-19 Thread Masahiko Sawada
On Mon, 20 Jan 2020 at 12:39, Amit Kapila  wrote:
>
> On Fri, Jan 17, 2020 at 4:35 PM Mahendra Singh Thalor
>  wrote:
> >
> > Below are some review comments for the v50 patch.
> >
> > 1.
> > +LVShared
> > +LVSharedIndStats
> > +LVParallelState
> >  LWLock
> >
> > I think, LVParallelState should come before LVSharedIndStats.
> >
> > 2.
> > +/*
> > + * It is possible that parallel context is initialized with fewer workers
> > + * then the number of indexes that need a separate worker in the current
> > + * phase, so we need to consider it.  See compute_parallel_vacuum_workers.
> > + */
> >
> > This comment is confusing me. I think, "then" should be replaced with 
> > "than".
> >
>
> Pushed, after fixing these two comments.

Thank you for committing!

Regards,

-- 
Masahiko Sawada  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Block level parallel vacuum

2020-01-19 Thread Amit Kapila
On Fri, Jan 17, 2020 at 4:35 PM Mahendra Singh Thalor
 wrote:
>
> Below are some review comments for the v50 patch.
>
> 1.
> +LVShared
> +LVSharedIndStats
> +LVParallelState
>  LWLock
>
> I think, LVParallelState should come before LVSharedIndStats.
>
> 2.
> +/*
> + * It is possible that parallel context is initialized with fewer workers
> + * then the number of indexes that need a separate worker in the current
> + * phase, so we need to consider it.  See compute_parallel_vacuum_workers.
> + */
>
> This comment is confusing me. I think, "then" should be replaced with "than".
>

Pushed, after fixing these two comments.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-19 Thread Amit Kapila
On Sun, Jan 19, 2020 at 2:15 AM Peter Geoghegan  wrote:
>
> On Fri, Jan 17, 2020 at 1:18 AM Amit Kapila  wrote:
> > Thanks for doing this test again.  In the attached patch, I have
> > addressed all the comments and modified a few comments.
>
> I am in favor of the general idea of parallel VACUUM that parallelizes
> the processing of each index (I haven't looked at the patch, though).
> I observed something during a recent benchmark of the deduplication
> patch that seems like it might be relevant to parallel VACUUM. This
> happened during a recreation of the original WARM benchmark, which is
> described here:
>
> https://www.postgresql.org/message-id/CABOikdMNy6yowA%2BwTGK9RVd8iw%2BCzqHeQSGpW7Yka_4RSZ_LOQ%40mail.gmail.com
>
> (There is an extra pgbench_accounts index on abalance, plus 4 indexes
> on large text columns with filler MD5 hashes, all of which are
> random.)
>
> On the master branch, I can clearly observe that the "filler" MD5
> indexes are bloated to a degree that is affected by the order of their
> original creation/pg_class OID order. These are all indexes that
> become bloated purely due to "version churn" -- or what I like to call
> "unnecessary" page splits. The keys used in each pgbench_accounts
> logical row never change, except in the case of the extra abalance
> index (the idea is to prevent all HOT updates without ever updating
> most indexed columns). I noticed that pgb_a_filler1 is a bit less
> bloated than pgb_a_filler2, which is a little less bloated than
> pgb_a_filler3, which is a little less bloated than pgb_a_filler4. Even
> after 4 hours, and even though the "shape" of each index is identical.
> This demonstrates an important general principle about vacuuming
> indexes: timeliness can matter a lot.
>
> In general, a big benefit of the deduplication patch is that it "buys
> time" for VACUUM to run before "unnecessary" page splits can occur --
> that is why the deduplication patch prevents *all* page splits in
> these "filler" indexes, whereas on the master branch the filler
> indexes are about 2x larger (the exact amount varies based on VACUUM
> processing order, at least earlier on).
>
> For tables with several indexes, giving each index its own VACUUM
> worker process will prevent "unnecessary" page splits caused by
> version churn, simply because VACUUM will start to clean each index
> sooner than it would compared to serial processing (except for the
> "lucky" first index). There is no "lucky" first index that gets
> preferential treatment -- presumably VACUUM will start processing each
> index at the same time with this patch, making each index equally
> "lucky".
>
> I think that there may even be a *complementary* effect with parallel
> VACUUM, though I haven't tested that theory. Deduplication "buys time"
> for VACUUM to run, while at the same time VACUUM takes less time to
> show up and prevent "unnecessary" page splits. My guess is that these
> two seemingly unrelated patches may actually address this "unnecessary
> page split" problem from two completely different angles, with an
> overall effect that is greater than the sum of its parts.
>

Good analysis, and I agree that the parallel vacuum patch can help in
such cases.  However, as of now, it only works via the VACUUM command, so
some user intervention is required to realize the benefit.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-18 Thread Peter Geoghegan
On Fri, Jan 17, 2020 at 1:18 AM Amit Kapila  wrote:
> Thanks for doing this test again.  In the attached patch, I have
> addressed all the comments and modified a few comments.

I am in favor of the general idea of parallel VACUUM that parallelizes
the processing of each index (I haven't looked at the patch, though).
I observed something during a recent benchmark of the deduplication
patch that seems like it might be relevant to parallel VACUUM. This
happened during a recreation of the original WARM benchmark, which is
described here:

https://www.postgresql.org/message-id/CABOikdMNy6yowA%2BwTGK9RVd8iw%2BCzqHeQSGpW7Yka_4RSZ_LOQ%40mail.gmail.com

(There is an extra pgbench_accounts index on abalance, plus 4 indexes
on large text columns with filler MD5 hashes, all of which are
random.)

On the master branch, I can clearly observe that the "filler" MD5
indexes are bloated to a degree that is affected by the order of their
original creation/pg_class OID order. These are all indexes that
become bloated purely due to "version churn" -- or what I like to call
"unnecessary" page splits. The keys used in each pgbench_accounts
logical row never change, except in the case of the extra abalance
index (the idea is to prevent all HOT updates without ever updating
most indexed columns). I noticed that pgb_a_filler1 is a bit less
bloated than pgb_a_filler2, which is a little less bloated than
pgb_a_filler3, which is a little less bloated than pgb_a_filler4. Even
after 4 hours, and even though the "shape" of each index is identical.
This demonstrates an important general principle about vacuuming
indexes: timeliness can matter a lot.

In general, a big benefit of the deduplication patch is that it "buys
time" for VACUUM to run before "unnecessary" page splits can occur --
that is why the deduplication patch prevents *all* page splits in
these "filler" indexes, whereas on the master branch the filler
indexes are about 2x larger (the exact amount varies based on VACUUM
processing order, at least earlier on).

For tables with several indexes, giving each index its own VACUUM
worker process will prevent "unnecessary" page splits caused by
version churn, simply because VACUUM will start to clean each index
sooner than it would compared to serial processing (except for the
"lucky" first index). There is no "lucky" first index that gets
preferential treatment -- presumably VACUUM will start processing each
index at the same time with this patch, making each index equally
"lucky".

I think that there may even be a *complementary* effect with parallel
VACUUM, though I haven't tested that theory. Deduplication "buys time"
for VACUUM to run, while at the same time VACUUM takes less time to
show up and prevent "unnecessary" page splits. My guess is that these
two seemingly unrelated patches may actually address this "unnecessary
page split" problem from two completely different angles, with an
overall effect that is greater than the sum of its parts.

While the difference in size of each filler index on the master branch
wasn't that significant on its own, it's still interesting. It's
probably quite workload dependent.

-- 
Peter Geoghegan




Re: [HACKERS] Block level parallel vacuum

2020-01-17 Thread Mahendra Singh Thalor
On Fri, 17 Jan 2020 at 14:47, Amit Kapila  wrote:
>
> On Fri, Jan 17, 2020 at 12:51 PM Dilip Kumar  wrote:
> >
> > On Fri, Jan 17, 2020 at 11:39 AM Dilip Kumar  wrote:
> > I have performed cost-delay testing on the latest patch (I have used the
> > same scripts as attached in [1] and [2]).
> > vacuum_cost_delay = 10
> > vacuum_cost_limit = 2000
> >
> > Observation: As we have concluded earlier, the delay time is in sync
> > with the I/O performed by the worker
> > and the total delay (heap + index) is almost the same as the
> > non-parallel operation.
> >
>
> Thanks for doing this test again.  In the attached patch, I have
> addressed all the comments and modified a few comments.
>

Hi,
Below are some review comments for the v50 patch.

1.
+LVShared
+LVSharedIndStats
+LVParallelState
 LWLock

I think, LVParallelState should come before LVSharedIndStats.

2.
+/*
+ * It is possible that parallel context is initialized with fewer workers
+ * then the number of indexes that need a separate worker in the current
+ * phase, so we need to consider it.  See compute_parallel_vacuum_workers.
+ */

This comment is confusing me. I think, "then" should be replaced with "than".

-- 
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Block level parallel vacuum

2020-01-17 Thread Amit Kapila
On Fri, Jan 17, 2020 at 12:51 PM Dilip Kumar  wrote:
>
> On Fri, Jan 17, 2020 at 11:39 AM Dilip Kumar  wrote:
> I have performed cost-delay testing on the latest patch (I have used the
> same scripts as attached in [1] and [2]).
> vacuum_cost_delay = 10
> vacuum_cost_limit = 2000
>
> Observation: As we have concluded earlier, the delay time is in sync
> with the I/O performed by the worker
> and the total delay (heap + index) is almost the same as the
> non-parallel operation.
>

Thanks for doing this test again.  In the attached patch, I have
addressed all the comments and modified a few comments.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


v50-0001-Allow-vacuum-command-to-process-indexes-in-parallel.patch
Description: Binary data


Re: [HACKERS] Block level parallel vacuum

2020-01-16 Thread Dilip Kumar
On Fri, Jan 17, 2020 at 11:39 AM Dilip Kumar  wrote:
I have performed cost-delay testing on the latest patch (I have used the
same scripts as attached in [1] and [2]).
vacuum_cost_delay = 10
vacuum_cost_limit = 2000

Observation: As we have concluded earlier, the delay time is in sync
with the I/O performed by the worker
and the total delay (heap + index) is almost the same as the
non-parallel operation.

test1:[1]

Vacuum non-parallel

WARNING:  VacuumCostTotalDelay=11332.32

Vacuum 2 workers
WARNING:  worker 0 delay=171.085000 total io=34288 hit=22208 miss=0 dirty=604
WARNING:  worker 1 delay=87.79 total io=17910 hit=17890 miss=0 dirty=1
WARNING:  worker 2 delay=88.62 total io=17910 hit=17890 miss=0 dirty=1

WARNING:  VacuumCostTotalDelay=11505.65

Vacuum 4 workers
WARNING:  worker 0 delay=87.75 total io=17910 hit=17890 miss=0 dirty=1
WARNING:  worker 1 delay=89.155000 total io=17910 hit=17890 miss=0 dirty=1
WARNING:  worker 2 delay=87.08 total io=17910 hit=17890 miss=0 dirty=1
WARNING:  worker 3 delay=78.745000 total io=16378 hit=4318 miss=0 dirty=603

WARNING:  VacuumCostTotalDelay=11590.68


test2:[2]

Vacuum non-parallel
WARNING:  VacuumCostTotalDelay=22835.97

Vacuum 2 workers
WARNING:  worker 0 delay=345.55 total io=69338 hit=45338 miss=0 dirty=1200
WARNING:  worker 1 delay=177.15 total io=35807 hit=35787 miss=0 dirty=1
WARNING:  worker 2 delay=178.105000 total io=35807 hit=35787 miss=0 dirty=1
WARNING:  VacuumCostTotalDelay=23191.405000


Vacuum 4 workers
WARNING:  worker 0 delay=177.265000 total io=35807 hit=35787 miss=0 dirty=1
WARNING:  worker 1 delay=177.175000 total io=35807 hit=35787 miss=0 dirty=1
WARNING:  worker 2 delay=177.385000 total io=35807 hit=35787 miss=0 dirty=1
WARNING:  worker 3 delay=166.515000 total io=33531 hit=9551 miss=0 dirty=1199
WARNING:  VacuumCostTotalDelay=23357.115000



[1] 
https://www.postgresql.org/message-id/CAFiTN-tFLN%3Dvdu5Ra-23E9_7Z1JXkk5MkRY3Bkj2zAoWK7fULA%40mail.gmail.com
[2] 
https://www.postgresql.org/message-id/CAFiTN-tC%3DNcvcEd%2B5J62fR8-D8x7EHuVi2xhS-0DMf1bnJs4hw%40mail.gmail.com

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-16 Thread Dilip Kumar
On Fri, Jan 17, 2020 at 11:34 AM Amit Kapila  wrote:
>
> On Fri, Jan 17, 2020 at 11:00 AM Dilip Kumar  wrote:
> >
> > On Fri, Jan 17, 2020 at 10:44 AM Amit Kapila  
> > wrote:
> > >
> > > On Fri, Jan 17, 2020 at 9:36 AM Dilip Kumar  wrote:
> > > >
> > > > I have a few small comments.
> > > >
> > > > 1.
> > > > + /* Can't perform vacuum in parallel */
> > > > + if (parallel_workers <= 0)
> > > > + {
> > > > + pfree(can_parallel_vacuum);
> > > > + return lps;
> > > > + }
> > > >
> > > > why are we checking parallel_workers <= 0, Function
> > > > compute_parallel_vacuum_workers only returns 0 or greater than 0
> > > > so isn't it better to just check if (parallel_workers == 0) ?
> > > >
> > >
> > > Why have such an assumption about
> > > compute_parallel_vacuum_workers()?  The function
> > > compute_parallel_vacuum_workers() returns int, so such a check
> > > (<= 0) seems reasonable to me.
> >
> > Okay so I should probably change my statement to why
> > compute_parallel_vacuum_workers is returning "int" instead of uint?
> >
>
> Hmm, I think the number of workers is int at most places, so it is
> better to return int here, which keeps it consistent with what we do
> at other places.  See the similar usage in compute_parallel_worker.

Okay, I see.

>
> > I mean, when this function is designed to return 0 or more workers,
> > why make it return int and then handle extra values in the caller?
> > Am I missing something?  Can it really return negative in some cases?
> >
> > I find the below code in "compute_parallel_vacuum_workers" a bit confusing
> >
> > +static int
> > +compute_parallel_vacuum_workers(Relation *Irel, int nindexes, int 
> > nrequested,
> > + bool *can_parallel_vacuum)
> > +{
> > ..
> > + /* The leader process takes one index */
> > + nindexes_parallel--;--> nindexes_parallel can become -1
> > +
> > + /* No index supports parallel vacuum */
> > + if (nindexes_parallel == 0) .  -> Now if it is 0 then return 0 but
> > if its -1 then continue. seems strange no?  I think here itself we can
> > handle if (nindexes_parallel <= 0), that will make code cleaner.
> > + return 0;
> > +
>
> I think this got recently introduced by one of my changes based on the
> comment by Mahendra; we can adjust this check.

Ok
>
> > > >
> > > > I don't like the idea of first initializing the
> > > > VacuumSharedCostBalance with lps->lvshared->cost_balance and then
> > > > uninitialize if nworkers_launched is 0.
> > > > I am not sure why do we need to initialize VacuumSharedCostBalance
> > > > here? just to perform pg_atomic_write_u32(VacuumSharedCostBalance,
> > > > VacuumCostBalance);?
> > > > I think we can initialize it only if nworkers_launched > 0 then we can
> > > > get rid of the else branch completely.
> > > >
> > >
> > > No, we can't initialize after nworkers_launched > 0 because by that
> > > time some workers would have already tried to access the shared cost
> > > balance.  So, it needs to be done before launching the workers as is
> > > done in code.  We can probably add a comment.
> > I don't think so, VacuumSharedCostBalance is a process local which is
> > just pointing to the shared memory variable right?
> >
> > and each process has to point it to the shared memory and that we are
> > already doing in parallel_vacuum_main.  So we can initialize it after
> > worker is launched.
> > Basically code will look like below
> >
> > pg_atomic_write_u32(&(lps->lvshared->cost_balance), VacuumCostBalance);
> > pg_atomic_write_u32(&(lps->lvshared->active_nworkers), 0);
> >
>
> Oh, I thought you were suggesting initializing the shared memory itself
> after launching the workers.  However, you are asking to change the
> usage of the local variable; I think we can do that.

Okay.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-16 Thread Amit Kapila
On Fri, Jan 17, 2020 at 11:00 AM Dilip Kumar  wrote:
>
> On Fri, Jan 17, 2020 at 10:44 AM Amit Kapila  wrote:
> >
> > On Fri, Jan 17, 2020 at 9:36 AM Dilip Kumar  wrote:
> > >
> > > I have a few small comments.
> > >
> > > 1.
> > > + /* Can't perform vacuum in parallel */
> > > + if (parallel_workers <= 0)
> > > + {
> > > + pfree(can_parallel_vacuum);
> > > + return lps;
> > > + }
> > >
> > > why are we checking parallel_workers <= 0, Function
> > > compute_parallel_vacuum_workers only returns 0 or greater than 0
> > > so isn't it better to just check if (parallel_workers == 0) ?
> > >
> >
> > Why have such an assumption about
> > compute_parallel_vacuum_workers()?  The function
> > compute_parallel_vacuum_workers() returns int, so such a check
> > (<= 0) seems reasonable to me.
>
> Okay, so I should probably rephrase my question: why is
> compute_parallel_vacuum_workers returning "int" instead of uint?
>

Hmm, I think the number of workers is int at most places, so it is
better to return int here, which keeps it consistent with what we do
at other places.  See the similar usage in compute_parallel_worker.

> I mean, when this function is designed to return 0 or more workers,
> why make it return int and then handle extra values in the caller?
> Am I missing something?  Can it really return negative in some cases?
>
> I find the below code in "compute_parallel_vacuum_workers" a bit confusing
>
> +static int
> +compute_parallel_vacuum_workers(Relation *Irel, int nindexes, int nrequested,
> + bool *can_parallel_vacuum)
> +{
> ..
> + /* The leader process takes one index */
> + nindexes_parallel--;--> nindexes_parallel can become -1
> +
> + /* No index supports parallel vacuum */
> + if (nindexes_parallel == 0)  -> Now if it is 0 then we return 0, but
> if it's -1 then we continue; seems strange, no?  I think right here we
> can handle if (nindexes_parallel <= 0); that will make the code cleaner.
> + return 0;
> +

I think this got recently introduced by one of my changes based on the
comment by Mahendra; we can adjust this check.
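
Roughly, the adjusted function would read like this (a sketch only,
reusing the names from the patch; not the final code):

/* The leader process takes one index */
nindexes_parallel--;

/* No index left that can be processed by parallel workers */
if (nindexes_parallel <= 0)
    return 0;

/* Compute the parallel degree */
parallel_workers = (nrequested > 0) ?
    Min(nrequested, nindexes_parallel) : nindexes_parallel;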

> > >
> > > I don't like the idea of first initializing the
> > > VacuumSharedCostBalance with lps->lvshared->cost_balance and then
> > > uninitialize if nworkers_launched is 0.
> > > I am not sure why do we need to initialize VacuumSharedCostBalance
> > > here? just to perform pg_atomic_write_u32(VacuumSharedCostBalance,
> > > VacuumCostBalance);?
> > > I think we can initialize it only if nworkers_launched > 0 then we can
> > > get rid of the else branch completely.
> > >
> >
> > No, we can't initialize after nworkers_launched > 0 because by that
> > time some workers would have already tried to access the shared cost
> > balance.  So, it needs to be done before launching the workers as is
> > done in code.  We can probably add a comment.
> I don't think so; VacuumSharedCostBalance is process-local and is
> just pointing to the shared memory variable, right?
>
> Each process has to point it to shared memory, and we are already
> doing that in parallel_vacuum_main.  So we can initialize it after
> the workers are launched.  Basically, the code will look like below:
>
> pg_atomic_write_u32(&(lps->lvshared->cost_balance), VacuumCostBalance);
> pg_atomic_write_u32(&(lps->lvshared->active_nworkers), 0);
>

Oh, I thought you were suggesting initializing the shared memory itself
after launching the workers.  However, you are asking to change the
usage of the local variable; I think we can do that.
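
To make sure we are talking about the same thing, the flow would then be
roughly the following (a sketch only, assuming workers pick up the shared
values from the DSM segment in parallel_vacuum_main, as they already do;
names as in the patch):

/* Write the starting values into shared memory before launching */
pg_atomic_write_u32(&(lps->lvshared->cost_balance), VacuumCostBalance);
pg_atomic_write_u32(&(lps->lvshared->active_nworkers), 0);

ReinitializeParallelWorkers(lps->pcxt, nworkers);
LaunchParallelWorkers(lps->pcxt);

if (lps->pcxt->nworkers_launched > 0)
{
    /* Point the process-local pointers at shared memory only now */
    VacuumSharedCostBalance = &(lps->lvshared->cost_balance);
    VacuumActiveNWorkers = &(lps->lvshared->active_nworkers);

    /*
     * Reset the local cost values for the leader backend as we have
     * already accumulated the remaining balance of the heap.
     */
    VacuumCostBalance = 0;
    VacuumCostBalanceLocal = 0;
}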

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-16 Thread Dilip Kumar
On Fri, Jan 17, 2020 at 10:44 AM Amit Kapila  wrote:
>
> On Fri, Jan 17, 2020 at 9:36 AM Dilip Kumar  wrote:
> >
> > I have a few small comments.
> >
> > 1.
> > + /* Can't perform vacuum in parallel */
> > + if (parallel_workers <= 0)
> > + {
> > + pfree(can_parallel_vacuum);
> > + return lps;
> > + }
> >
> > why are we checking parallel_workers <= 0, Function
> > compute_parallel_vacuum_workers only returns 0 or greater than 0
> > so isn't it better to just check if (parallel_workers == 0) ?
> >
>
> Why have such an assumption about
> compute_parallel_vacuum_workers()?  The function
> compute_parallel_vacuum_workers() returns int, so such a check
> (<= 0) seems reasonable to me.

Okay, so I should probably rephrase my question: why is
compute_parallel_vacuum_workers returning "int" instead of uint?  I
mean, when this function is designed to return 0 or more workers, why
make it return int and then handle extra values in the caller?  Am I
missing something?  Can it really return negative in some cases?

I find the below code in "compute_parallel_vacuum_workers" a bit confusing

+static int
+compute_parallel_vacuum_workers(Relation *Irel, int nindexes, int nrequested,
+ bool *can_parallel_vacuum)
+{
..
+ /* The leader process takes one index */
+ nindexes_parallel--;--> nindexes_parallel can become -1
+
+ /* No index supports parallel vacuum */
+ if (nindexes_parallel == 0) .  -> Now if it is 0 then return 0 but
if its -1 then continue. seems strange no?  I think here itself we can
handle if (nindexes_parallel <= 0), that will make code cleaner.
+ return 0;
+
+ /* Compute the parallel degree */
+ parallel_workers = (nrequested > 0) ?
+ Min(nrequested, nindexes_parallel) : nindexes_parallel;



>
> > 2.
> > +/*
> > + * Macro to check if we are in a parallel vacuum.  If true, we are in the
> > + * parallel mode and the DSM segment is initialized.
> > + */
> > +#define ParallelVacuumIsActive(lps) (((LVParallelState *) (lps)) != NULL)
> >
> > (LVParallelState *) (lps) -> this typecast is not required, just (lps)
> > != NULL should be enough.
> >
>
> I think the better idea would be to just replace it with PointerIsValid,
> like below.  I see similar usage in other places.
> #define ParallelVacuumIsActive(lps)  PointerIsValid(lps)
Makes sense to me.
>
> > 3.
> >
> > + shared->offset = MAXALIGN(add_size(SizeOfLVShared, BITMAPLEN(nindexes)));
> > + prepare_index_statistics(shared, can_parallel_vacuum, nindexes);
> > + pg_atomic_init_u32(&(shared->idx), 0);
> > + pg_atomic_init_u32(&(shared->cost_balance), 0);
> > + pg_atomic_init_u32(&(shared->active_nworkers), 0);
> >
> > I think it will look cleaner if we can initialize in the order they
> > are declared in structure.
> >
>
> Okay.
>
> > 4.
> > + VacuumSharedCostBalance = &(lps->lvshared->cost_balance);
> > + VacuumActiveNWorkers = &(lps->lvshared->active_nworkers);
> > +
> > + /*
> > + * Set up shared cost balance and the number of active workers for
> > + * vacuum delay.
> > + */
> > + pg_atomic_write_u32(VacuumSharedCostBalance, VacuumCostBalance);
> > + pg_atomic_write_u32(VacuumActiveNWorkers, 0);
> > +
> > + /*
> > + * The number of workers can vary between bulkdelete and cleanup
> > + * phase.
> > + */
> > + ReinitializeParallelWorkers(lps->pcxt, nworkers);
> > +
> > + LaunchParallelWorkers(lps->pcxt);
> > +
> > + if (lps->pcxt->nworkers_launched > 0)
> > + {
> > + /*
> > + * Reset the local cost values for leader backend as we have
> > + * already accumulated the remaining balance of heap.
> > + */
> > + VacuumCostBalance = 0;
> > + VacuumCostBalanceLocal = 0;
> > + }
> > + else
> > + {
> > + /*
> > + * Disable shared cost balance if we are not able to launch
> > + * workers.
> > + */
> > + VacuumSharedCostBalance = NULL;
> > + VacuumActiveNWorkers = NULL;
> > + }
> > +
> >
> > I don't like the idea of first initializing the
> > VacuumSharedCostBalance with lps->lvshared->cost_balance and then
> > uninitialize if nworkers_launched is 0.
> > I am not sure why do we need to initialize VacuumSharedCostBalance
> > here? just to perform pg_atomic_write_u32(VacuumSharedCostBalance,
> > VacuumCostBalance);?
> > I think we can initialize it only if nworkers_launched > 0 then we can
> > get rid of the else branch completely.
> >
>
> No, we can't initialize it after checking nworkers_launched > 0 because
> by that time some workers would have already tried to access the shared
> cost balance.  So, it needs to be done before launching the workers, as
> is done in the code.  We can probably add a comment.
I don't think so; VacuumSharedCostBalance is process-local and is
just pointing to the shared memory variable, right?

Each process has to point it to shared memory, and we are already
doing that in parallel_vacuum_main.  So we can initialize it after
the workers are launched.  Basically, the code will look like below:

pg_atomic_write_u32(&(lps->lvshared->cost_balance), VacuumCostBalance);

Re: [HACKERS] Block level parallel vacuum

2020-01-16 Thread Amit Kapila
On Fri, Jan 17, 2020 at 9:36 AM Dilip Kumar  wrote:
>
> I have a few small comments.
>
> 1.
> + /* Can't perform vacuum in parallel */
> + if (parallel_workers <= 0)
> + {
> + pfree(can_parallel_vacuum);
> + return lps;
> + }
>
> why are we checking parallel_workers <= 0?  Function
> compute_parallel_vacuum_workers only returns 0 or greater, so isn't
> it better to just check if (parallel_workers == 0)?
>

Why have such an assumption about
compute_parallel_vacuum_workers()?  The function
compute_parallel_vacuum_workers() returns int, so such a check
(<= 0) seems reasonable to me.

> 2.
> +/*
> + * Macro to check if we are in a parallel vacuum.  If true, we are in the
> + * parallel mode and the DSM segment is initialized.
> + */
> +#define ParallelVacuumIsActive(lps) (((LVParallelState *) (lps)) != NULL)
>
> (LVParallelState *) (lps) -> this typecast is not required, just (lps)
> != NULL should be enough.
>

I think the better idea would be to just replace it with PointerIsValid,
like below.  I see similar usage in other places.
#define ParallelVacuumIsActive(lps)  PointerIsValid(lps)

> 3.
>
> + shared->offset = MAXALIGN(add_size(SizeOfLVShared, BITMAPLEN(nindexes)));
> + prepare_index_statistics(shared, can_parallel_vacuum, nindexes);
> + pg_atomic_init_u32(&(shared->idx), 0);
> + pg_atomic_init_u32(&(shared->cost_balance), 0);
> + pg_atomic_init_u32(&(shared->active_nworkers), 0);
>
> I think it will look cleaner if we initialize them in the order they
> are declared in the structure.
>

Okay.

> 4.
> + VacuumSharedCostBalance = &(lps->lvshared->cost_balance);
> + VacuumActiveNWorkers = &(lps->lvshared->active_nworkers);
> +
> + /*
> + * Set up shared cost balance and the number of active workers for
> + * vacuum delay.
> + */
> + pg_atomic_write_u32(VacuumSharedCostBalance, VacuumCostBalance);
> + pg_atomic_write_u32(VacuumActiveNWorkers, 0);
> +
> + /*
> + * The number of workers can vary between bulkdelete and cleanup
> + * phase.
> + */
> + ReinitializeParallelWorkers(lps->pcxt, nworkers);
> +
> + LaunchParallelWorkers(lps->pcxt);
> +
> + if (lps->pcxt->nworkers_launched > 0)
> + {
> + /*
> + * Reset the local cost values for leader backend as we have
> + * already accumulated the remaining balance of heap.
> + */
> + VacuumCostBalance = 0;
> + VacuumCostBalanceLocal = 0;
> + }
> + else
> + {
> + /*
> + * Disable shared cost balance if we are not able to launch
> + * workers.
> + */
> + VacuumSharedCostBalance = NULL;
> + VacuumActiveNWorkers = NULL;
> + }
> +
>
> I don't like the idea of first initializing
> VacuumSharedCostBalance with lps->lvshared->cost_balance and then
> uninitializing it if nworkers_launched is 0.
> I am not sure why we need to initialize VacuumSharedCostBalance
> here.  Just to perform pg_atomic_write_u32(VacuumSharedCostBalance,
> VacuumCostBalance)?
> I think we can initialize it only if nworkers_launched > 0; then we
> can get rid of the else branch completely.
>

No, we can't initialize it after checking nworkers_launched > 0 because
by that time some workers would have already tried to access the shared
cost balance.  So, it needs to be done before launching the workers, as
is done in the code.  We can probably add a comment.

>
> + /* Carry the shared balance value to heap scan */
> + if (VacuumSharedCostBalance)
> + VacuumCostBalance = pg_atomic_read_u32(VacuumSharedCostBalance);
> +
> + if (nworkers > 0)
> + {
> + /* Disable shared cost balance */
> + VacuumSharedCostBalance = NULL;
> + VacuumActiveNWorkers = NULL;
> + }
>
> It doesn't make sense to keep them as two conditions; we can combine
> them as below:
>
> /* If shared costing is enabled, carry the shared balance value to the
> heap scan and disable shared costing */
>  if (VacuumSharedCostBalance)
> {
>  VacuumCostBalance = pg_atomic_read_u32(VacuumSharedCostBalance);
>  VacuumSharedCostBalance = NULL;
>  VacuumActiveNWorkers = NULL;
> }
>

Makes sense to me; will change.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-16 Thread Dilip Kumar
On Fri, Jan 17, 2020 at 9:36 AM Dilip Kumar  wrote:
>
> On Thu, Jan 16, 2020 at 5:34 PM Amit Kapila  wrote:
> >
> > On Thu, Jan 16, 2020 at 4:46 PM Masahiko Sawada
> >  wrote:
> > >
> > > Right. Most indexes (all?) of tables that are used in the regression
> > > tests are smaller than min_parallel_index_scan_size. And we set
> > > min_parallel_index_scan_size to 0 in vacuum.sql but VACUUM would not
> > > be speeded-up much because of the relation size. Since we instead
> > > populate new table for parallel vacuum testing the regression test for
> > > vacuum would take a longer time.
> > >
> >
> > Fair enough and I think it is good in a way that it won't change the
> > coverage of existing vacuum code.  I have fixed all the issues
> > reported by Mahendra and have fixed a few other cosmetic things in the
> > attached patch.
> >
> I have a few small comments.
>
> 1.
> + /* Can't perform vacuum in parallel */
> + if (parallel_workers <= 0)
> + {
> + pfree(can_parallel_vacuum);
> + return lps;
> + }
>
> why are we checking parallel_workers <= 0?  Function
> compute_parallel_vacuum_workers only returns 0 or greater, so isn't
> it better to just check if (parallel_workers == 0)?
>
> 2.
> +/*
> + * Macro to check if we are in a parallel vacuum.  If true, we are in the
> + * parallel mode and the DSM segment is initialized.
> + */
> +#define ParallelVacuumIsActive(lps) (((LVParallelState *) (lps)) != NULL)
>
> (LVParallelState *) (lps) -> this typecast is not required, just (lps)
> != NULL should be enough.
>
> 3.
>
> + shared->offset = MAXALIGN(add_size(SizeOfLVShared, BITMAPLEN(nindexes)));
> + prepare_index_statistics(shared, can_parallel_vacuum, nindexes);
> + pg_atomic_init_u32(&(shared->idx), 0);
> + pg_atomic_init_u32(&(shared->cost_balance), 0);
> + pg_atomic_init_u32(&(shared->active_nworkers), 0);
>
> I think it will look cleaner if we initialize them in the order they
> are declared in the structure.
>
> 4.
> + VacuumSharedCostBalance = &(lps->lvshared->cost_balance);
> + VacuumActiveNWorkers = &(lps->lvshared->active_nworkers);
> +
> + /*
> + * Set up shared cost balance and the number of active workers for
> + * vacuum delay.
> + */
> + pg_atomic_write_u32(VacuumSharedCostBalance, VacuumCostBalance);
> + pg_atomic_write_u32(VacuumActiveNWorkers, 0);
> +
> + /*
> + * The number of workers can vary between bulkdelete and cleanup
> + * phase.
> + */
> + ReinitializeParallelWorkers(lps->pcxt, nworkers);
> +
> + LaunchParallelWorkers(lps->pcxt);
> +
> + if (lps->pcxt->nworkers_launched > 0)
> + {
> + /*
> + * Reset the local cost values for leader backend as we have
> + * already accumulated the remaining balance of heap.
> + */
> + VacuumCostBalance = 0;
> + VacuumCostBalanceLocal = 0;
> + }
> + else
> + {
> + /*
> + * Disable shared cost balance if we are not able to launch
> + * workers.
> + */
> + VacuumSharedCostBalance = NULL;
> + VacuumActiveNWorkers = NULL;
> + }
> +
>
> I don't like the idea of first initializing
> VacuumSharedCostBalance with lps->lvshared->cost_balance and then
> uninitializing it if nworkers_launched is 0.
> I am not sure why we need to initialize VacuumSharedCostBalance
> here.  Just to perform pg_atomic_write_u32(VacuumSharedCostBalance,
> VacuumCostBalance)?
> I think we can initialize it only if nworkers_launched > 0; then we
> can get rid of the else branch completely.

I missed one of my comments.

+ /* Carry the shared balance value to heap scan */
+ if (VacuumSharedCostBalance)
+ VacuumCostBalance = pg_atomic_read_u32(VacuumSharedCostBalance);
+
+ if (nworkers > 0)
+ {
+ /* Disable shared cost balance */
+ VacuumSharedCostBalance = NULL;
+ VacuumActiveNWorkers = NULL;
+ }

It doesn't make sense to keep them as two conditions; we can combine them as below:

/* If shared costing is enabled, carry the shared balance value to the
heap scan and disable shared costing */
 if (VacuumSharedCostBalance)
{
 VacuumCostBalance = pg_atomic_read_u32(VacuumSharedCostBalance);
 VacuumSharedCostBalance = NULL;
 VacuumActiveNWorkers = NULL;
}

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-16 Thread Dilip Kumar
On Thu, Jan 16, 2020 at 5:34 PM Amit Kapila  wrote:
>
> On Thu, Jan 16, 2020 at 4:46 PM Masahiko Sawada
>  wrote:
> >
> > Right. Most indexes (all?) of tables that are used in the regression
> > tests are smaller than min_parallel_index_scan_size. And we set
> > min_parallel_index_scan_size to 0 in vacuum.sql but VACUUM would not
> > be speeded-up much because of the relation size. Since we instead
> > populate new table for parallel vacuum testing the regression test for
> > vacuum would take a longer time.
> >
>
> Fair enough and I think it is good in a way that it won't change the
> coverage of existing vacuum code.  I have fixed all the issues
> reported by Mahendra and have fixed a few other cosmetic things in the
> attached patch.
>
I have a few small comments.

1.
+ /* Can't perform vacuum in parallel */
+ if (parallel_workers <= 0)
+ {
+ pfree(can_parallel_vacuum);
+ return lps;
+ }

why are we checking parallel_workers <= 0?  Function
compute_parallel_vacuum_workers only returns 0 or greater, so isn't
it better to just check if (parallel_workers == 0)?

2.
+/*
+ * Macro to check if we are in a parallel vacuum.  If true, we are in the
+ * parallel mode and the DSM segment is initialized.
+ */
+#define ParallelVacuumIsActive(lps) (((LVParallelState *) (lps)) != NULL)

(LVParallelState *) (lps) -> this typecast is not required, just (lps)
!= NULL should be enough.

3.

+ shared->offset = MAXALIGN(add_size(SizeOfLVShared, BITMAPLEN(nindexes)));
+ prepare_index_statistics(shared, can_parallel_vacuum, nindexes);
+ pg_atomic_init_u32(&(shared->idx), 0);
+ pg_atomic_init_u32(&(shared->cost_balance), 0);
+ pg_atomic_init_u32(&(shared->active_nworkers), 0);

I think it will look cleaner if we initialize them in the order they
are declared in the structure.

4.
+ VacuumSharedCostBalance = &(lps->lvshared->cost_balance);
+ VacuumActiveNWorkers = &(lps->lvshared->active_nworkers);
+
+ /*
+ * Set up shared cost balance and the number of active workers for
+ * vacuum delay.
+ */
+ pg_atomic_write_u32(VacuumSharedCostBalance, VacuumCostBalance);
+ pg_atomic_write_u32(VacuumActiveNWorkers, 0);
+
+ /*
+ * The number of workers can vary between bulkdelete and cleanup
+ * phase.
+ */
+ ReinitializeParallelWorkers(lps->pcxt, nworkers);
+
+ LaunchParallelWorkers(lps->pcxt);
+
+ if (lps->pcxt->nworkers_launched > 0)
+ {
+ /*
+ * Reset the local cost values for leader backend as we have
+ * already accumulated the remaining balance of heap.
+ */
+ VacuumCostBalance = 0;
+ VacuumCostBalanceLocal = 0;
+ }
+ else
+ {
+ /*
+ * Disable shared cost balance if we are not able to launch
+ * workers.
+ */
+ VacuumSharedCostBalance = NULL;
+ VacuumActiveNWorkers = NULL;
+ }
+

I don't like the idea of first initializing
VacuumSharedCostBalance with lps->lvshared->cost_balance and then
uninitializing it if nworkers_launched is 0.
I am not sure why we need to initialize VacuumSharedCostBalance
here.  Just to perform pg_atomic_write_u32(VacuumSharedCostBalance,
VacuumCostBalance)?
I think we can initialize it only if nworkers_launched > 0; then we
can get rid of the else branch completely.


-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-16 Thread Prabhat Sahu
Hi all,

I would like to share my observation on this PG feature "Block-level
parallel vacuum".
I have tested the earlier patch (i.e., v48) with the below high-level
test scenarios, and those are working as expected.

   - I have played around with these GUC parameters while testing:

     max_worker_processes
     autovacuum = off
     shared_buffers
     max_parallel_workers
     max_parallel_maintenance_workers
     min_parallel_index_scan_size
     vacuum_cost_limit
     vacuum_cost_delay


   - Tested parallel vacuum on tables and partitioned tables covering the
   possible datatypes, with columns having various indexes (like btree,
   gist, etc.) on part of or the full table.
   - Tested pgbench tables with multiple indexes created manually, and ran
   a script (vacuum_test.sql) with DMLs and VACUUM for multiple clients,
   jobs, and durations as below.

./pgbench  -c 8 -j 16 -T 900  postgres -f vacuum_test.sql

We observed the usage of parallel workers during VACUUM.


   - Ran a few isolation schedule test cases (in regression) with huge
   data and indexes, performed DMLs -> VACUUM
   - Tested with PARTITION TABLEs -> global/local indexes -> DMLs -> VACUUM
   - Tested with PARTITION TABLEs having different TABLESPACEs in different
   locations -> global/local indexes -> DMLs -> VACUUM
   - With changing STORAGE options for columns (as PLAIN / EXTERNAL /
   EXTENDED) -> DMLs -> VACUUM
   - Created indexes with the CONCURRENTLY option / changed
   storage_parameter for indexes as below -> DMLs -> VACUUM

with(buffering=auto) / with(buffering=on) / with(buffering=off) /
with(fillfactor=30);


   - Tested with creating Simple and Partitioned tables ->  DMLs  ->
   pg_dump/pg_restore/pg_upgrade -> VACUUM

Verified the data after restore / upgrade / VACUUM.


   - Indexes on UUID-OSSP data ->  DMLs -> pg_upgrade -> VACUUM
   - Verified with various test scenarios for better performance of
   parallel VACUUM as compared to Non-parallel VACUUM.

 Time taken by VACUUM on PG HEAD+PATCH(with PARALLEL) <  Time taken
by VACUUM on PG HEAD (without PARALLEL)


Machine configuration: (16 VCPUs / RAM: 16GB / Disk size: 640GB)

PG HEAD:
VACUUM tab1;

Time: 38915.384 ms (00:38.915)

Time: 48389.006 ms (00:48.389)

Time: 41324.223 ms (00:41.324)

Time: 37640.874 ms (00:37.641)  -- median

Time: 36897.325 ms (00:36.897)

Time: 36351.022 ms (00:36.351)

Time: 36198.890 ms (00:36.199)


PG HEAD + v48 Patch:
VACUUM tab1;

Time: 37051.589 ms (00:37.052)

Time: 33647.459 ms (00:33.647)  -- median

Time: 31580.894 ms (00:31.581)

Time: 34442.046 ms (00:34.442)

Time: 31335.960 ms (00:31.336)

Time: 34441.245 ms (00:34.441)

Time: 31159.639 ms (00:31.160)
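
Comparing the medians above (simple arithmetic on the reported numbers):
(37640.874 - 33647.459) / 37640.874 ~ 10.6%, i.e. the patched build's
median VACUUM time is roughly 10% lower on this data set.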



-- 

With Regards,
Prabhat Kumar Sahu
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Block level parallel vacuum

2020-01-16 Thread Amit Kapila
On Thu, Jan 16, 2020 at 4:46 PM Masahiko Sawada
 wrote:
>
> Right. Most indexes (all?) of the tables that are used in the regression
> tests are smaller than min_parallel_index_scan_size. And we set
> min_parallel_index_scan_size to 0 in vacuum.sql, but VACUUM would not
> be sped up much because of the relation size. Since we instead
> populate a new table for parallel vacuum testing, the regression test
> for vacuum would take longer.
>

Fair enough, and I think it is good in a way that it won't change the
coverage of the existing vacuum code.  I have fixed all the issues
reported by Mahendra and a few other cosmetic things in the attached
patch.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


v49-0001-Allow-vacuum-command-to-process-indexes-in-parallel.patch
Description: Binary data


Re: [HACKERS] Block level parallel vacuum

2020-01-16 Thread Masahiko Sawada
On Thu, 16 Jan 2020 at 14:11, Amit Kapila  wrote:
>
> On Thu, Jan 16, 2020 at 10:11 AM Mahendra Singh Thalor
>  wrote:
> >
> > On Thu, 16 Jan 2020 at 08:22, Amit Kapila  wrote:
> > >
> > > > 2.
> > > > I checked time taken by vacuum.sql test. Execution time is almost same
> > > > with and without v45 patch.
> > > >
> > > > Without v45 patch:
> > > > Run1) vacuum   ... ok 701 ms
> > > > Run2) vacuum   ... ok 549 ms
> > > > Run3) vacuum   ... ok 559 ms
> > > > Run4) vacuum   ... ok 480 ms
> > > >
> > > > With v45 patch:
> > > > Run1) vacuum   ... ok 842 ms
> > > > Run2) vacuum   ... ok 808 ms
> > > > Run3)  vacuum   ... ok 774 ms
> > > > Run4) vacuum   ... ok 792 ms
> > > >
> > >
> > > I see some variance in results, have you run with autovacuum as off.
> > > I was expecting that this might speed up some cases where parallel
> > > vacuum is used by default.
> >
> > I think, this is expected difference in timing because we are adding
> > some vacuum related test. I am not starting server manually(means I am
> > starting server with only default setting).
> >
>
> Can you test once by setting autovacuum = off?  Autovacuum leads
> to variability in test timing.
>
>

I've also run the regression tests with and without the patch:

* w/o patch and autovacuum = on:  255 ms
* w/o patch and autovacuum = off: 258 ms
* w/ patch and autovacuum = on: 370 ms
* w/ patch and autovacuum = off: 375 ms

> > If we start server with default settings, then we will not hit vacuum
> > related test cases to parallel because size of index relation is very
> > small so we will not do parallel vacuum.

Right. Most indexes (all?) of the tables that are used in the regression
tests are smaller than min_parallel_index_scan_size. And we set
min_parallel_index_scan_size to 0 in vacuum.sql, but VACUUM would not
be sped up much because of the relation size. Since we instead
populate a new table for parallel vacuum testing, the regression test
for vacuum would take longer.

Regards,

-- 
Masahiko Sawada    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Block level parallel vacuum

2020-01-15 Thread Amit Kapila
On Thu, Jan 16, 2020 at 10:11 AM Mahendra Singh Thalor
 wrote:
>
> On Thu, 16 Jan 2020 at 08:22, Amit Kapila  wrote:
> >
> > > 2.
> > > I checked time taken by vacuum.sql test. Execution time is almost same
> > > with and without v45 patch.
> > >
> > > Without v45 patch:
> > > Run1) vacuum   ... ok 701 ms
> > > Run2) vacuum   ... ok 549 ms
> > > Run3) vacuum   ... ok 559 ms
> > > Run4) vacuum   ... ok 480 ms
> > >
> > > With v45 patch:
> > > Run1) vacuum   ... ok 842 ms
> > > Run2) vacuum   ... ok 808 ms
> > > Run3)  vacuum   ... ok 774 ms
> > > Run4) vacuum   ... ok 792 ms
> > >
> >
> > I see some variance in results, have you run with autovacuum as off.
> > I was expecting that this might speed up some cases where parallel
> > vacuum is used by default.
>
> I think this is an expected difference in timing because we are adding
> some vacuum-related tests. I am not starting the server manually (i.e.,
> I am starting the server with only default settings).
>

Can you test once by setting autovacuum = off?  Autovacuum leads
to variability in test timing.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-15 Thread Mahendra Singh Thalor
On Thu, 16 Jan 2020 at 08:22, Amit Kapila  wrote:
>
> On Thu, Jan 16, 2020 at 1:02 AM Mahendra Singh Thalor
>  wrote:
> >
> > On Wed, 15 Jan 2020 at 19:31, Mahendra Singh Thalor  
> > wrote:
> > >
> > > On Wed, 15 Jan 2020 at 19:04, Mahendra Singh Thalor  
> > > wrote:
> > > >
> > > >
> > > > I reviewed v48 patch and below are some comments.
> > > >
> > > > 1.
> > > > +* based on the number of indexes.  -1 indicates a parallel vacuum 
> > > > is
> > > >
> > > > I think, above should be like "-1 indicates that parallel vacuum is"
> > > >
>
> I am not an expert in this matter, but I am not sure if your
> suggestion is correct.  I thought an article is required here, but I
> could be wrong.  Can you please clarify?
>
> > > > 2.
> > > > +/* Variables for cost-based parallel vacuum  */
> > > >
> > > > At the end of comment, there is 2 spaces.  I think, it should be only 1 
> > > > space.
> > > >
> > > > 3.
> > > > I think, we should add a test case for parallel option(when degree is 
> > > > not specified).
> > > > Ex:
> > > > postgres=# VACUUM (PARALLEL) tmp;
> > > > ERROR:  parallel option requires a value between 0 and 1024
> > > > LINE 1: VACUUM (PARALLEL) tmp;
> > > > ^
> > > > postgres=#
> > > >
> > > > Because above error is added in this parallel patch, so we should have 
> > > > test case for this to increase code coverage.
> > > >
>
> I thought about it but was not sure whether to add a test for it.  We
> might not want to add a test for each and every case, as that will
> increase the number and runtime of tests without a significant
> advantage.  Now that you have pointed this out, I can add a test for it
> unless someone else thinks otherwise.
>
> >
> > 1.
> > With v45 patch, compute_parallel_delay is never called so function hit
> > is zero. I think, we can add some delay options into vacuum.sql test
> > to hit function.
> >
>
> But how can we meaningfully test the functionality of the delay?  It
> would be tricky to come up with a portable test that can always
> produce consistent results.
>
> > 2.
> > I checked time taken by vacuum.sql test. Execution time is almost same
> > with and without v45 patch.
> >
> > Without v45 patch:
> > Run1) vacuum   ... ok 701 ms
> > Run2) vacuum   ... ok 549 ms
> > Run3) vacuum   ... ok 559 ms
> > Run4) vacuum   ... ok 480 ms
> >
> > With v45 patch:
> > Run1) vacuum   ... ok 842 ms
> > Run2) vacuum   ... ok 808 ms
> > Run3)  vacuum   ... ok 774 ms
> > Run4) vacuum   ... ok 792 ms
> >
>
> I see some variance in results; have you run with autovacuum set to
> off?  I was expecting that this might speed up some cases where
> parallel vacuum is used by default.

I think this is an expected difference in timing because we are adding
some vacuum-related tests. I am not starting the server manually (i.e.,
I am starting the server with only default settings).

If we start the server with default settings, then the vacuum test
cases will not exercise the parallel path, because the index relations
are very small and so we will not do a parallel vacuum.

-- 
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-15 Thread Amit Kapila
On Thu, Jan 16, 2020 at 1:02 AM Mahendra Singh Thalor
 wrote:
>
> On Wed, 15 Jan 2020 at 19:31, Mahendra Singh Thalor  
> wrote:
> >
> > On Wed, 15 Jan 2020 at 19:04, Mahendra Singh Thalor  
> > wrote:
> > >
> > >
> > > I reviewed v48 patch and below are some comments.
> > >
> > > 1.
> > > +* based on the number of indexes.  -1 indicates a parallel vacuum is
> > >
> > > I think, above should be like "-1 indicates that parallel vacuum is"
> > >

I am not an expert in this matter, but I am not sure if your
suggestion is correct.  I thought an article is required here, but I
could be wrong.  Can you please clarify?

> > > 2.
> > > +/* Variables for cost-based parallel vacuum  */
> > >
> > > At the end of comment, there is 2 spaces.  I think, it should be only 1 
> > > space.
> > >
> > > 3.
> > > I think, we should add a test case for parallel option(when degree is not 
> > > specified).
> > > Ex:
> > > postgres=# VACUUM (PARALLEL) tmp;
> > > ERROR:  parallel option requires a value between 0 and 1024
> > > LINE 1: VACUUM (PARALLEL) tmp;
> > > ^
> > > postgres=#
> > >
> > > Because above error is added in this parallel patch, so we should have 
> > > test case for this to increase code coverage.
> > >

I thought about it but was not sure whether to add a test for it.  We
might not want to add a test for each and every case, as that will
increase the number and runtime of tests without a significant
advantage.  Now that you have pointed this out, I can add a test for it
unless someone else thinks otherwise.

>
> 1.
> With the v45 patch, compute_parallel_delay is never called, so the
> function hit count is zero. I think we can add some delay options into
> the vacuum.sql test to hit the function.
>

But how can we meaningfully test the functionality of the delay?  It
would be tricky to come up with a portable test that can always
produce consistent results.

> 2.
> I checked time taken by vacuum.sql test. Execution time is almost same
> with and without v45 patch.
>
> Without v45 patch:
> Run1) vacuum   ... ok 701 ms
> Run2) vacuum   ... ok 549 ms
> Run3) vacuum   ... ok 559 ms
> Run4) vacuum   ... ok 480 ms
>
> With v45 patch:
> Run1) vacuum   ... ok 842 ms
> Run2) vacuum   ... ok 808 ms
> Run3)  vacuum   ... ok 774 ms
> Run4) vacuum   ... ok 792 ms
>

I see some variance in results; have you run with autovacuum set to
off?  I was expecting that this might speed up some cases where
parallel vacuum is used by default.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-15 Thread Mahendra Singh Thalor
On Wed, 15 Jan 2020 at 19:31, Mahendra Singh Thalor  wrote:
>
> On Wed, 15 Jan 2020 at 19:04, Mahendra Singh Thalor  
> wrote:
> >
> > On Wed, 15 Jan 2020 at 17:27, Amit Kapila  wrote:
> > >
> > > On Wed, Jan 15, 2020 at 10:05 AM Masahiko Sawada
> > >  wrote:
> > > >
> > > > Thank you for updating the patch! I have a few small comments.
> > > >
> > >
> > > I have adapted all your changes, fixed the comment by Mahendra related
> > > to initializing parallel state only when there are at least two
> > > indexes.  Additionally, I have changed a few comments (make the
> > > reference to parallel vacuum consistent, at some places we were
> > > referring it as 'parallel lazy vacuum' and at other places it was
> > > 'parallel index vacuum').
> > >
> > > > The
> > > > rest looks good to me.
> > > >
> > >
> > > Okay, I think the patch is in good shape.  I am planning to read it a
> > > few more times (at least 2 times) and then probably will commit it
> > > early next week (Monday or Tuesday) unless there are any major
> > > comments.  I have already committed the API patch (4d8a8d0c73).
> > >
> >
> > Hi,
> > Thanks Amit for fixing review comments.
> >
> > I reviewed v48 patch and below are some comments.
> >
> > 1.
> > +* based on the number of indexes.  -1 indicates a parallel vacuum is
> >
> > I think, above should be like "-1 indicates that parallel vacuum is"
> >
> > 2.
> > +/* Variables for cost-based parallel vacuum  */
> >
> > At the end of comment, there is 2 spaces.  I think, it should be only 1 
> > space.
> >
> > 3.
> > I think, we should add a test case for parallel option(when degree is not 
> > specified).
> > Ex:
> > postgres=# VACUUM (PARALLEL) tmp;
> > ERROR:  parallel option requires a value between 0 and 1024
> > LINE 1: VACUUM (PARALLEL) tmp;
> > ^
> > postgres=#
> >
> > Because above error is added in this parallel patch, so we should have test 
> > case for this to increase code coverage.
> >
>
> Hi
> Below are some more review comments for v48 patch.
>
> 1.
> #include "storage/bufpage.h"
> #include "storage/lockdefs.h"
> +#include "storage/shm_toc.h"
> +#include "storage/dsm.h"
>
> Here, the order of the header files is not alphabetical. (storage/dsm.h
> should come before storage/lockdefs.h.)
>
> 2.
> +/* No index supports parallel vacuum */
> +if (nindexes_parallel == 0)
> +return 0;
> +
> +/* The leader process takes one index */
> +nindexes_parallel--;
>
> Above code can be rearranged as:
>
> +/* The leader process takes one index */
> +nindexes_parallel--;
> +
> +/* No index supports parallel vacuum */
> +if (nindexes_parallel <= 0)
> +return 0;
>
> If we do it like this, then in some cases we can skip the parallel
> worker calculations entirely.
>
> --
> Thanks and Regards
> Mahendra Singh Thalor
> EnterpriseDB: http://www.enterprisedb.com

Hi,
I checked the code coverage and the time taken by the vacuum.sql test
with and without the v48 patch. Below are some findings (I ran "make
check-world -i" to get coverage).

1.
With the v45 patch, compute_parallel_delay is never called, so the
function hit count is zero. I think we can add some delay options into
the vacuum.sql test to hit the function.

2.
I checked the time taken by the vacuum.sql test. Execution time is
almost the same with and without the v45 patch.

Without v45 patch:
Run1) vacuum   ... ok 701 ms
Run2) vacuum   ... ok 549 ms
Run3) vacuum   ... ok 559 ms
Run4) vacuum   ... ok 480 ms

With v45 patch:
Run1) vacuum   ... ok 842 ms
Run2) vacuum   ... ok 808 ms
Run3)  vacuum   ... ok 774 ms
Run4) vacuum   ... ok 792 ms

-- 
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-15 Thread Mahendra Singh Thalor
On Wed, 15 Jan 2020 at 19:04, Mahendra Singh Thalor  wrote:
>
> On Wed, 15 Jan 2020 at 17:27, Amit Kapila  wrote:
> >
> > On Wed, Jan 15, 2020 at 10:05 AM Masahiko Sawada
> >  wrote:
> > >
> > > Thank you for updating the patch! I have a few small comments.
> > >
> >
> > I have adapted all your changes, fixed the comment by Mahendra related
> > to initializing parallel state only when there are at least two
> > indexes.  Additionally, I have changed a few comments (make the
> > reference to parallel vacuum consistent, at some places we were
> > referring it as 'parallel lazy vacuum' and at other places it was
> > 'parallel index vacuum').
> >
> > > The
> > > rest looks good to me.
> > >
> >
> > Okay, I think the patch is in good shape.  I am planning to read it a
> > few more times (at least 2 times) and then probably will commit it
> > early next week (Monday or Tuesday) unless there are any major
> > comments.  I have already committed the API patch (4d8a8d0c73).
> >
>
> Hi,
> Thanks Amit for fixing review comments.
>
> I reviewed v48 patch and below are some comments.
>
> 1.
> +* based on the number of indexes.  -1 indicates a parallel vacuum is
>
> I think, above should be like "-1 indicates that parallel vacuum is"
>
> 2.
> +/* Variables for cost-based parallel vacuum  */
>
> At the end of the comment, there are 2 spaces.  I think it should be only 1 space.
>
> 3.
> I think we should add a test case for the parallel option (when the
> degree is not specified).
> Ex:
> postgres=# VACUUM (PARALLEL) tmp;
> ERROR:  parallel option requires a value between 0 and 1024
> LINE 1: VACUUM (PARALLEL) tmp;
> ^
> postgres=#
>
> Because the above error is added by this parallel patch, we should have
> a test case for it to increase code coverage.
>

Hi
Below are some more review comments for v48 patch.

1.
#include "storage/bufpage.h"
#include "storage/lockdefs.h"
+#include "storage/shm_toc.h"
+#include "storage/dsm.h"

Here, the order of the header files is not alphabetical. (storage/dsm.h
should come before storage/lockdefs.h.)

2.
+/* No index supports parallel vacuum */
+if (nindexes_parallel == 0)
+return 0;
+
+/* The leader process takes one index */
+nindexes_parallel--;

Above code can be rearranged as:

+/* The leader process takes one index */
+nindexes_parallel--;
+
+/* No index supports parallel vacuum */
+if (nindexes_parallel <= 0)
+return 0;

If we do it like this, then in some cases we can skip the parallel
worker calculations entirely.

-- 
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-15 Thread Mahendra Singh Thalor
On Wed, 15 Jan 2020 at 17:27, Amit Kapila  wrote:
>
> On Wed, Jan 15, 2020 at 10:05 AM Masahiko Sawada
>  wrote:
> >
> > Thank you for updating the patch! I have a few small comments.
> >
>
> I have adapted all your changes, fixed the comment by Mahendra related
> to initializing parallel state only when there are at least two
> indexes.  Additionally, I have changed a few comments (make the
> reference to parallel vacuum consistent, at some places we were
> referring it as 'parallel lazy vacuum' and at other places it was
> 'parallel index vacuum').
>
> > The
> > rest looks good to me.
> >
>
> Okay, I think the patch is in good shape.  I am planning to read it a
> few more times (at least 2 times) and then probably will commit it
> early next week (Monday or Tuesday) unless there are any major
> comments.  I have already committed the API patch (4d8a8d0c73).
>

Hi,
Thanks, Amit, for fixing the review comments.

I reviewed the v48 patch and below are some comments.

1.
+* based on the number of indexes.  -1 indicates a parallel vacuum is

I think, above should be like "-1 indicates that parallel vacuum is"

2.
+/* Variables for cost-based parallel vacuum  */

At the end of the comment, there are 2 spaces.  I think it should be
only 1 space.

3.
I think we should add a test case for the parallel option (when the
degree is not specified).
Ex:
postgres=# VACUUM (PARALLEL) tmp;
ERROR:  parallel option requires a value between 0 and 1024
LINE 1: VACUUM (PARALLEL) tmp;
^
postgres=#

Because the above error is added by this parallel patch, we should have
a test case for it to increase code coverage.

-- 
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Block level parallel vacuum

2020-01-15 Thread Amit Kapila
On Wed, Jan 15, 2020 at 10:05 AM Masahiko Sawada
 wrote:
>
> Thank you for updating the patch! I have a few small comments.
>

I have adapted all your changes and fixed the comment by Mahendra
related to initializing the parallel state only when there are at
least two indexes.  Additionally, I have changed a few comments
(making the references to parallel vacuum consistent; in some places
we were referring to it as 'parallel lazy vacuum' and in other places
as 'parallel index vacuum').

> The
> rest looks good to me.
>

Okay, I think the patch is in good shape.  I am planning to read it a
few more times (at least 2 times) and then probably will commit it
early next week (Monday or Tuesday) unless there are any major
comments.  I have already committed the API patch (4d8a8d0c73).

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


v48-0001-Allow-vacuum-command-to-process-indexes-in-parallel.patch
Description: Binary data


Re: [HACKERS] Block level parallel vacuum

2020-01-14 Thread Masahiko Sawada
On Wed, 15 Jan 2020 at 12:34, Mahendra Singh Thalor  wrote:
>
> On Tue, 14 Jan 2020 at 17:16, Mahendra Singh Thalor  
> wrote:
> >
> > On Tue, 14 Jan 2020 at 16:17, Mahendra Singh Thalor  
> > wrote:
> > >
> > > On Tue, 14 Jan 2020 at 10:06, Masahiko Sawada
> > >  wrote:
> > > >
> > > > On Tue, 14 Jan 2020 at 03:20, Mahendra Singh Thalor 
> > > >  wrote:
> > > > >
> > > > > On Fri, 10 Jan 2020 at 15:51, Sergei Kornilov  wrote:
> > > > > >
> > > > > > Hi
> > > > > > Thank you for update! I looked again
> > > > > >
> > > > > > (vacuum_indexes_leader)
> > > > > > +   /* Skip the indexes that can be processed by 
> > > > > > parallel workers */
> > > > > > +   if (!skip_index)
> > > > > > +   continue;
> > > > > >
> > > > > > Does the variable name skip_index not confuse here? Maybe rename to 
> > > > > > something like can_parallel?
> > > > > >
> > > > >
> > > > > Again I looked into code and thought that somehow if we can add a
> > > > > boolean flag(can_parallel)  in IndexBulkDeleteResult structure to
> > > > > identify that this index is supporting parallel vacuum or not, then it
> > > > > will be easy to skip those indexes and multiple time we will not call
> > > > > skip_parallel_vacuum_index (from vacuum_indexes_leader and
> > > > > parallel_vacuum_index)
> > > > > We can have a linked list of non-parallel supported indexes, then
> > > > > directly we can pass to vacuum_indexes_leader.
> > > > >
> > > > > Ex: let suppose we have 5 indexes into a table.  If before launching
> > > > > parallel workers, if we can add boolean flag(can_parallel)
> > > > > IndexBulkDeleteResult structure to identify that this index is
> > > > > supporting parallel vacuum or not.
> > > > > Let index 1, 4 are not supporting parallel vacuum so we already have
> > > > > info in a linked list that 1->4 are not supporting parallel vacuum, so
> > > > > parallel_vacuum_index will process these indexes and rest will be
> > > > > processed by parallel workers. If parallel worker found that
> > > > > can_parallel is false, then it will skip that index.
> > > > >
> > > > > As per my understanding, if we implement this, then we can avoid
> > > > > multiple function calling of skip_parallel_vacuum_index and if there
> > > > > is no index which can't  performe parallel vacuum, then we will not
> > > > > call vacuum_indexes_leader as head of list pointing to null. (we can
> > > > > save unnecessary calling of vacuum_indexes_leader)
> > > > >
> > > > > Thoughts?
> > > > >
> > > >
> > > > We skip not only indexes that don't support parallel index vacuum but
> > > > also indexes supporting it depending on vacuum phase. That is, we
> > > > could skip different indexes at different vacuum phase. Therefore with
> > > > your idea, we would need to have at least three linked lists for each
> > > > possible vacuum phase(bulkdelete, conditional cleanup and cleanup), is
> > > > that right?
> > > >
> > > > I think we can check if there are indexes that should be processed by
> > > > the leader process before entering the loop in vacuum_indexes_leader
> > > > by comparing nindexes_parallel_XXX of LVParallelState to the number of
> > > > indexes but I'm not sure it's effective since the number of indexes on
> > > > a table should be small.
> > > >
> > >
> > > Hi,
> > >
> > > +/*
> > > + * Try to initialize the parallel vacuum if requested
> > > + */
> > > +if (params->nworkers >= 0 && vacrelstats->useindex)
> > > +{
> > > +/*
> > > + * Since parallel workers cannot access data in temporary 
> > > tables, we
> > > + * can't perform parallel vacuum on them.
> > > + */
> > > +if (RelationUsesLocalBuffers(onerel))
> > > +{
> > > +/*
> > > + * Give warning only if the user explicitly tries to perform 
> > > a
> > > + * parallel vacuum on the temporary table.
> > > + */
> > > +if (params->nworkers > 0)
> > > +ereport(WARNING,
> > > +(errmsg("disabling parallel option of vacuum
> > > on \"%s\" --- cannot vacuum temporary tables in parallel",
> > >
> > > From v45 patch, we moved warning of temporary table into
> > > "params->nworkers >= 0 && vacrelstats->useindex)" check so if table
> > > don't have any index, then we are not giving any warning. I think, we
> > > should give warning for all the temporary tables if parallel degree is
> > > given. (Till v44 patch, we were giving warning for all the temporary
> > > tables(having index and without index))
> > >
> > > Thoughts?
> >
> > Hi,
> > I did some more review.  Below is the 1 review comment for v46-0002.
> >
> > +/*
> > + * Initialize the state for parallel vacuum
> > + */
> > +if (params->nworkers >= 0 && vacrelstats->useindex)
> > +{
> > +/*
> > + * Since parallel workers cannot access data in temporary tables, 
> > we
> > + * can't perform parallel vacuum 

Re: [HACKERS] Block level parallel vacuum

2020-01-14 Thread Masahiko Sawada
On Tue, 14 Jan 2020 at 21:43, Amit Kapila  wrote:
>
> On Tue, Jan 14, 2020 at 10:04 AM Masahiko Sawada
>  wrote:
> >
> > On Mon, 13 Jan 2020 at 12:50, Amit Kapila  wrote:
> > >
> > > On Sat, Jan 11, 2020 at 7:48 PM Masahiko Sawada
> > >  wrote:
> > >
> > > Okay, would it better if we get rid of this variable and have code like 
> > > below?
> > >
> > > /* Skip the indexes that can be processed by parallel workers */
> > > if ( !(get_indstats(lps->lvshared, i) == NULL ||
> > > skip_parallel_vacuum_index(Irel[i], lps->lvshared)))
> > > continue;
> >
> > Makes sense to me.
> >
>
> I have changed the comment and condition to make it a positive test so
> that it is clearer.
>
> > > ...
> > > > Agreed. But with the updated patch the PARALLEL option without the
> > > > parallel degree doesn't display warning because params->nworkers = 0
> > > > in that case. So how about restoring params->nworkers at the end of
> > > > vacuum_rel()?
> > > >
> > >
> > > I had also thought along those lines, but I was not entirely sure
> > > about this resetting of workers.  Thinking about it again today, the
> > > idea Mahendra is suggesting (giving an error if the parallel degree
> > > is not specified) seems reasonable to me.  This means
> > > Vacuum (parallel), Vacuum (parallel) , etc. will give an
> > > error "parallel degree must be specified".  This idea has merit as now
> > > we are supporting a parallel vacuum by default, so a 'parallel' option
> > > without a parallel degree doesn't have any meaning.  If we do that,
> > > then we don't need to do anything additional about the handling of
> > > temp tables (other than what patch is already doing) as well.  What do
> > > you think?
> > >
> >
> > Good point! Agreed.
> >
>
> Thanks, changed accordingly.
>

Thank you for updating the patch! I have a few small comments. The
rest looks good to me.

1.
+ * Compute the number of parallel worker processes to request.  Both index
+ * vacuum and index cleanup can be executed with parallel workers.  The
+ * relation size of the table don't affect the parallel degree for now.

s/don't/doesn't/

2.
@@ -383,6 +435,7 @@ vacuum(List *relations, VacuumParams *params,
VacuumPageHit = 0;
VacuumPageMiss = 0;
VacuumPageDirty = 0;
+   VacuumSharedCostBalance = NULL;

I think we can initialize VacuumCostBalanceLocal and
VacuumActiveNWorkers here. We use these parameters during parallel
index vacuum and reset them at the end, but we might want to
initialize them here for safety.

3.
+   /* Set cost-based vacuum delay */
+   VacuumCostActive = (VacuumCostDelay > 0);
+   VacuumCostBalance = 0;
+   VacuumPageHit = 0;
+   VacuumPageMiss = 0;
+   VacuumPageDirty = 0;
+   VacuumSharedCostBalance = &(lvshared->cost_balance);
+   VacuumActiveNWorkers = &(lvshared->active_nworkers);

VacuumCostBalanceLocal also needs to be initialized.

4.
The regression tests don't have a test case for PARALLEL 0.

Since I guess you have already modified the code locally, I've attached
a diff containing the above review comments.
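
For reference, what I have in mind for comments 2 and 3 is roughly the
following (a sketch only; the exact placement is in the attached diff):

/* In vacuum(), alongside the existing resets */
VacuumPageDirty = 0;
VacuumCostBalanceLocal = 0;
VacuumSharedCostBalance = NULL;
VacuumActiveNWorkers = NULL;

/* In the parallel worker entry point, with the cost-delay setup */
VacuumCostActive = (VacuumCostDelay > 0);
VacuumCostBalance = 0;
VacuumCostBalanceLocal = 0;
VacuumSharedCostBalance = &(lvshared->cost_balance);
VacuumActiveNWorkers = &(lvshared->active_nworkers);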

Regards,

--
Masahiko Sawada    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


review_v47_masahiko.patch
Description: Binary data


Re: [HACKERS] Block level parallel vacuum

2020-01-14 Thread Mahendra Singh Thalor
On Tue, 14 Jan 2020 at 17:16, Mahendra Singh Thalor  wrote:
>
> On Tue, 14 Jan 2020 at 16:17, Mahendra Singh Thalor  
> wrote:
> >
> > On Tue, 14 Jan 2020 at 10:06, Masahiko Sawada
> >  wrote:
> > >
> > > On Tue, 14 Jan 2020 at 03:20, Mahendra Singh Thalor  
> > > wrote:
> > > >
> > > > On Fri, 10 Jan 2020 at 15:51, Sergei Kornilov  wrote:
> > > > >
> > > > > Hi
> > > > > Thank you for update! I looked again
> > > > >
> > > > > (vacuum_indexes_leader)
> > > > > +   /* Skip the indexes that can be processed by parallel 
> > > > > workers */
> > > > > +   if (!skip_index)
> > > > > +   continue;
> > > > >
> > > > > Does the variable name skip_index not confuse here? Maybe rename to 
> > > > > something like can_parallel?
> > > > >
> > > >
> > > > Again I looked into code and thought that somehow if we can add a
> > > > boolean flag(can_parallel)  in IndexBulkDeleteResult structure to
> > > > identify that this index is supporting parallel vacuum or not, then it
> > > > will be easy to skip those indexes and multiple time we will not call
> > > > skip_parallel_vacuum_index (from vacuum_indexes_leader and
> > > > parallel_vacuum_index)
> > > > We can have a linked list of non-parallel supported indexes, then
> > > > directly we can pass to vacuum_indexes_leader.
> > > >
> > > > Ex: let suppose we have 5 indexes into a table.  If before launching
> > > > parallel workers, if we can add boolean flag(can_parallel)
> > > > IndexBulkDeleteResult structure to identify that this index is
> > > > supporting parallel vacuum or not.
> > > > Let index 1, 4 are not supporting parallel vacuum so we already have
> > > > info in a linked list that 1->4 are not supporting parallel vacuum, so
> > > > parallel_vacuum_index will process these indexes and rest will be
> > > > processed by parallel workers. If parallel worker found that
> > > > can_parallel is false, then it will skip that index.
> > > >
> > > > As per my understanding, if we implement this, then we can avoid
> > > > multiple function calling of skip_parallel_vacuum_index and if there
> > > > is no index which can't  performe parallel vacuum, then we will not
> > > > call vacuum_indexes_leader as head of list pointing to null. (we can
> > > > save unnecessary calling of vacuum_indexes_leader)
> > > >
> > > > Thoughts?
> > > >
> > >
> > > We skip not only indexes that don't support parallel index vacuum but
> > > also indexes that support it, depending on the vacuum phase. That is, we
> > > could skip different indexes at different vacuum phases. Therefore, with
> > > your idea, we would need at least three linked lists, one for each
> > > possible vacuum phase (bulkdelete, conditional cleanup and cleanup), is
> > > that right?
> > >
> > > I think we can check if there are indexes that should be processed by
> > > the leader process before entering the loop in vacuum_indexes_leader
> > > by comparing nindexes_parallel_XXX of LVParallelState to the number of
> > > indexes, but I'm not sure it's effective since the number of indexes on
> > > a table should be small.
> > >
> >
> > Hi,
> >
> > +/*
> > + * Try to initialize the parallel vacuum if requested
> > + */
> > +if (params->nworkers >= 0 && vacrelstats->useindex)
> > +{
> > +/*
> > + * Since parallel workers cannot access data in temporary tables, 
> > we
> > + * can't perform parallel vacuum on them.
> > + */
> > +if (RelationUsesLocalBuffers(onerel))
> > +{
> > +/*
> > + * Give warning only if the user explicitly tries to perform a
> > + * parallel vacuum on the temporary table.
> > + */
> > +if (params->nworkers > 0)
> > +ereport(WARNING,
> > +(errmsg("disabling parallel option of vacuum
> > on \"%s\" --- cannot vacuum temporary tables in parallel",
> >
> > In the v45 patch, we moved the warning for temporary tables into the
> > "params->nworkers >= 0 && vacrelstats->useindex" check, so if the table
> > doesn't have any index, then we are not giving any warning. I think we
> > should give the warning for all temporary tables if a parallel degree is
> > given. (Till the v44 patch, we were giving the warning for all temporary
> > tables, both with and without indexes.)
> >
> > Thoughts?
>
> Hi,
> I did some more review.  Below is one review comment for v46-0002.
>
> +/*
> + * Initialize the state for parallel vacuum
> + */
> +if (params->nworkers >= 0 && vacrelstats->useindex)
> +{
> +/*
> + * Since parallel workers cannot access data in temporary tables, we
> + * can't perform parallel vacuum on them.
> + */
> +if (RelationUsesLocalBuffers(onerel)
>
> In the above check, we should add a "nindexes > 1" check so that if there is
> only 1 index, then we will not call begin_parallel_vacuum.

I think, " if (params->nworkers >= 0 && nindexes > 1)" check will be
enough here .
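
A sketch of how the revised check could look (illustrative only, based on
the quoted v46 code):

   /*
    * Try to initialize parallel vacuum only when it is requested and the
    * table has more than one index; with a single index the leader would
    * process it alone anyway.
    */
   if (params->nworkers >= 0 && nindexes > 1)
   {
       ...
   }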


Re: [HACKERS] Block level parallel vacuum

2020-01-14 Thread Amit Kapila
On Tue, Jan 14, 2020 at 4:17 PM Mahendra Singh Thalor
 wrote:
>
> Hi,
>
> +/*
> + * Try to initialize the parallel vacuum if requested
> + */
> +if (params->nworkers >= 0 && vacrelstats->useindex)
> +{
> +/*
> + * Since parallel workers cannot access data in temporary tables, we
> + * can't perform parallel vacuum on them.
> + */
> +if (RelationUsesLocalBuffers(onerel))
> +{
> +/*
> + * Give warning only if the user explicitly tries to perform a
> + * parallel vacuum on the temporary table.
> + */
> +if (params->nworkers > 0)
> +ereport(WARNING,
> +(errmsg("disabling parallel option of vacuum
> on \"%s\" --- cannot vacuum temporary tables in parallel",
>
> From v45 patch, we moved warning of temporary table into
> "params->nworkers >= 0 && vacrelstats->useindex)" check so if table
> don't have any index, then we are not giving any warning. I think, we
> should give warning for all the temporary tables if parallel degree is
> given. (Till v44 patch, we were giving warning for all the temporary
> tables(having index and without index))
>

I am not sure how useful it is to give WARNING in this case as we are
anyway not going to perform a parallel vacuum because it doesn't have
an index?  One can also say that WARNING is expected in the cases
where we skip a parallel vacuum due to any reason (ex., if the size of
the index is small), but I don't think that will be a good idea.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-14 Thread Amit Kapila
On Tue, Jan 14, 2020 at 10:04 AM Masahiko Sawada
 wrote:
>
> On Mon, 13 Jan 2020 at 12:50, Amit Kapila  wrote:
> >
> > On Sat, Jan 11, 2020 at 7:48 PM Masahiko Sawada
> >  wrote:
> >
> > Okay, would it be better if we get rid of this variable and have code like
> > below?
> >
> > /* Skip the indexes that can be processed by parallel workers */
> > if ( !(get_indstats(lps->lvshared, i) == NULL ||
> > skip_parallel_vacuum_index(Irel[i], lps->lvshared)))
> > continue;
>
> Make sense to me.
>

I have changed the comment and condition to make it a positive test so
that it is more clear.

> > ...
> > > Agreed. But with the updated patch the PARALLEL option without the
> > > parallel degree doesn't display warning because params->nworkers = 0
> > > in that case. So how about restoring params->nworkers at the end of
> > > vacuum_rel()?
> > >
> >
> > I had also thought on those lines, but I was not entirely sure about
> > this resetting of workers.  Today, thinking about it again, the idea
> > Mahendra is suggesting, i.e., giving an error if the
> > parallel degree is not specified, seems reasonable to me.  This means
> > Vacuum (parallel), Vacuum (parallel) , etc. will give an
> > error "parallel degree must be specified".  This idea has merit as now
> > we are supporting a parallel vacuum by default, so a 'parallel' option
> > without a parallel degree doesn't have any meaning.  If we do that,
> > then we don't need to do anything additional about the handling of
> > temp tables (other than what patch is already doing) as well.  What do
> > you think?
> >
>
> Good point! Agreed.
>

Thanks, changed accordingly.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


v47-0001-Introduce-IndexAM-fields-for-parallel-vacuum.patch
Description: Binary data


v47-0002-Allow-vacuum-command-to-process-indexes-in-parallel.patch
Description: Binary data


Re: [HACKERS] Block level parallel vacuum

2020-01-14 Thread Mahendra Singh Thalor
On Tue, 14 Jan 2020 at 16:17, Mahendra Singh Thalor 
wrote:
>
> On Tue, 14 Jan 2020 at 10:06, Masahiko Sawada
>  wrote:
> >
> > On Tue, 14 Jan 2020 at 03:20, Mahendra Singh Thalor 
wrote:
> > >
> > > On Fri, 10 Jan 2020 at 15:51, Sergei Kornilov  wrote:
> > > >
> > > > Hi
> > > > Thank you for update! I looked again
> > > >
> > > > (vacuum_indexes_leader)
> > > > +   /* Skip the indexes that can be processed by
parallel workers */
> > > > +   if (!skip_index)
> > > > +   continue;
> > > >
> > > > Isn't the variable name skip_index confusing here? Maybe rename it to
> > > > something like can_parallel?
> > > >
> > >
> > > Again I looked into the code and thought that if we can add a
> > > boolean flag (can_parallel) to the IndexBulkDeleteResult structure to
> > > identify whether an index supports parallel vacuum or not, then it
> > > will be easy to skip those indexes, and we will not call
> > > skip_parallel_vacuum_index multiple times (from vacuum_indexes_leader and
> > > parallel_vacuum_index).
> > > We can have a linked list of the indexes that don't support parallel
> > > vacuum, which we can pass directly to vacuum_indexes_leader.
> > >
> > > For example, suppose we have 5 indexes on a table.  Before launching
> > > parallel workers, we can add a boolean flag (can_parallel) to the
> > > IndexBulkDeleteResult structure to identify whether an index
> > > supports parallel vacuum or not.
> > > Say indexes 1 and 4 don't support parallel vacuum, so we already have
> > > info in a linked list (1->4) that they don't support parallel vacuum;
> > > parallel_vacuum_index will process those indexes and the rest will be
> > > processed by parallel workers. If a parallel worker finds that
> > > can_parallel is false, then it will skip that index.
> > >
> > > As per my understanding, if we implement this, then we can avoid
> > > multiple calls of skip_parallel_vacuum_index, and if there
> > > is no index that can't perform parallel vacuum, then we will not
> > > call vacuum_indexes_leader at all, since the head of the list will point
> > > to null (we can save an unnecessary call of vacuum_indexes_leader).
> > >
> > > Thoughts?
> > >
> >
> > We skip not only indexes that don't support parallel index vacuum but
> > also indexes that support it, depending on the vacuum phase. That is, we
> > could skip different indexes at different vacuum phases. Therefore, with
> > your idea, we would need at least three linked lists, one for each
> > possible vacuum phase (bulkdelete, conditional cleanup and cleanup), is
> > that right?
> >
> > I think we can check if there are indexes that should be processed by
> > the leader process before entering the loop in vacuum_indexes_leader
> > by comparing nindexes_parallel_XXX of LVParallelState to the number of
> > indexes, but I'm not sure it's effective since the number of indexes on
> > a table should be small.
> >
>
> Hi,
>
> +/*
> + * Try to initialize the parallel vacuum if requested
> + */
> +if (params->nworkers >= 0 && vacrelstats->useindex)
> +{
> +/*
> + * Since parallel workers cannot access data in temporary
tables, we
> + * can't perform parallel vacuum on them.
> + */
> +if (RelationUsesLocalBuffers(onerel))
> +{
> +/*
> + * Give warning only if the user explicitly tries to perform
a
> + * parallel vacuum on the temporary table.
> + */
> +if (params->nworkers > 0)
> +ereport(WARNING,
> +(errmsg("disabling parallel option of vacuum
> on \"%s\" --- cannot vacuum temporary tables in parallel",
>
> In the v45 patch, we moved the warning for temporary tables into the
> "params->nworkers >= 0 && vacrelstats->useindex" check, so if the table
> doesn't have any index, then we are not giving any warning. I think we
> should give the warning for all temporary tables if a parallel degree is
> given. (Till the v44 patch, we were giving the warning for all temporary
> tables, both with and without indexes.)
>
> Thoughts?

Hi,
I did some more review.  Below is one review comment for v46-0002.

+/*
+ * Initialize the state for parallel vacuum
+ */
+if (params->nworkers >= 0 && vacrelstats->useindex)
+{
+/*
+ * Since parallel workers cannot access data in temporary tables,
we
+ * can't perform parallel vacuum on them.
+ */
+if (RelationUsesLocalBuffers(onerel)

In the above check, we should add a "nindexes > 1" check so that if there is
only 1 index, we will not call begin_parallel_vacuum.

"Initialize the state for parallel vacuum",we can improve this comment by
mentioning that what are doing here. (If table has more than index and
parallel vacuum is requested, then try to start parallel vacuum).

-- 
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Block level parallel vacuum

2020-01-14 Thread Mahendra Singh Thalor
On Tue, 14 Jan 2020 at 10:06, Masahiko Sawada
 wrote:
>
> On Tue, 14 Jan 2020 at 03:20, Mahendra Singh Thalor  
> wrote:
> >
> > On Fri, 10 Jan 2020 at 15:51, Sergei Kornilov  wrote:
> > >
> > > Hi
> > > Thank you for update! I looked again
> > >
> > > (vacuum_indexes_leader)
> > > +   /* Skip the indexes that can be processed by parallel 
> > > workers */
> > > +   if (!skip_index)
> > > +   continue;
> > >
> > > Isn't the variable name skip_index confusing here? Maybe rename it to
> > > something like can_parallel?
> > >
> >
> > Again I looked into the code and thought that if we can add a
> > boolean flag (can_parallel) to the IndexBulkDeleteResult structure to
> > identify whether an index supports parallel vacuum or not, then it
> > will be easy to skip those indexes, and we will not call
> > skip_parallel_vacuum_index multiple times (from vacuum_indexes_leader and
> > parallel_vacuum_index).
> > We can have a linked list of the indexes that don't support parallel
> > vacuum, which we can pass directly to vacuum_indexes_leader.
> >
> > For example, suppose we have 5 indexes on a table.  Before launching
> > parallel workers, we can add a boolean flag (can_parallel) to the
> > IndexBulkDeleteResult structure to identify whether an index
> > supports parallel vacuum or not.
> > Say indexes 1 and 4 don't support parallel vacuum, so we already have
> > info in a linked list (1->4) that they don't support parallel vacuum;
> > parallel_vacuum_index will process those indexes and the rest will be
> > processed by parallel workers. If a parallel worker finds that
> > can_parallel is false, then it will skip that index.
> >
> > As per my understanding, if we implement this, then we can avoid
> > multiple calls of skip_parallel_vacuum_index, and if there
> > is no index that can't perform parallel vacuum, then we will not
> > call vacuum_indexes_leader at all, since the head of the list will point
> > to null (we can save an unnecessary call of vacuum_indexes_leader).
> >
> > Thoughts?
> >
>
> We skip not only indexes that don't support parallel index vacuum but
> also indexes that support it, depending on the vacuum phase. That is, we
> could skip different indexes at different vacuum phases. Therefore, with
> your idea, we would need at least three linked lists, one for each
> possible vacuum phase (bulkdelete, conditional cleanup and cleanup), is
> that right?
>
> I think we can check if there are indexes that should be processed by
> the leader process before entering the loop in vacuum_indexes_leader
> by comparing nindexes_parallel_XXX of LVParallelState to the number of
> indexes, but I'm not sure it's effective since the number of indexes on
> a table should be small.
>

Hi,

+/*
+ * Try to initialize the parallel vacuum if requested
+ */
+if (params->nworkers >= 0 && vacrelstats->useindex)
+{
+/*
+ * Since parallel workers cannot access data in temporary tables, we
+ * can't perform parallel vacuum on them.
+ */
+if (RelationUsesLocalBuffers(onerel))
+{
+/*
+ * Give warning only if the user explicitly tries to perform a
+ * parallel vacuum on the temporary table.
+ */
+if (params->nworkers > 0)
+ereport(WARNING,
+(errmsg("disabling parallel option of vacuum
on \"%s\" --- cannot vacuum temporary tables in parallel",

In the v45 patch, we moved the warning for temporary tables into the
"params->nworkers >= 0 && vacrelstats->useindex" check, so if the table
doesn't have any index, then we are not giving any warning. I think we
should give the warning for all temporary tables if a parallel degree is
given. (Till the v44 patch, we were giving the warning for all temporary
tables, both with and without indexes.)

Thoughts?

-- 
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-13 Thread Masahiko Sawada
On Tue, 14 Jan 2020 at 03:20, Mahendra Singh Thalor  wrote:
>
> On Fri, 10 Jan 2020 at 15:51, Sergei Kornilov  wrote:
> >
> > Hi
> > Thank you for update! I looked again
> >
> > (vacuum_indexes_leader)
> > +   /* Skip the indexes that can be processed by parallel 
> > workers */
> > +   if (!skip_index)
> > +   continue;
> >
> > Isn't the variable name skip_index confusing here? Maybe rename it to
> > something like can_parallel?
> >
>
> Again I looked into the code and thought that if we can add a
> boolean flag (can_parallel) to the IndexBulkDeleteResult structure to
> identify whether an index supports parallel vacuum or not, then it
> will be easy to skip those indexes, and we will not call
> skip_parallel_vacuum_index multiple times (from vacuum_indexes_leader and
> parallel_vacuum_index).
> We can have a linked list of the indexes that don't support parallel
> vacuum, which we can pass directly to vacuum_indexes_leader.
>
> For example, suppose we have 5 indexes on a table.  Before launching
> parallel workers, we can add a boolean flag (can_parallel) to the
> IndexBulkDeleteResult structure to identify whether an index
> supports parallel vacuum or not.
> Say indexes 1 and 4 don't support parallel vacuum, so we already have
> info in a linked list (1->4) that they don't support parallel vacuum;
> parallel_vacuum_index will process those indexes and the rest will be
> processed by parallel workers. If a parallel worker finds that
> can_parallel is false, then it will skip that index.
>
> As per my understanding, if we implement this, then we can avoid
> multiple calls of skip_parallel_vacuum_index, and if there
> is no index that can't perform parallel vacuum, then we will not
> call vacuum_indexes_leader at all, since the head of the list will point
> to null (we can save an unnecessary call of vacuum_indexes_leader).
>
> Thoughts?
>

We skip not only indexes that don't support parallel index vacuum but
also indexes that support it, depending on the vacuum phase. That is, we
could skip different indexes at different vacuum phases. Therefore, with
your idea, we would need at least three linked lists, one for each
possible vacuum phase (bulkdelete, conditional cleanup and cleanup), is
that right?

I think we can check if there are indexes that should be processed by
the leader process before entering the loop in vacuum_indexes_leader
by comparing nindexes_parallel_XXX of LVParallelState to the number of
indexes, but I'm not sure it's effective since the number of indexes on
a table should be small.
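
For illustration, such a pre-check might look roughly like this (a sketch
only; shown for the bulkdelete phase, with the counter names used elsewhere
in the patch):

   /*
    * Hypothetical: skip vacuum_indexes_leader entirely when every index
    * can be processed by the parallel workers in this phase.
    */
   if (!lps->lvshared->for_cleanup &&
       lps->nindexes_parallel_bulkdel == nindexes)
       return;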

Regards,

--
Masahiko Sawada            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Block level parallel vacuum

2020-01-13 Thread Masahiko Sawada
On Mon, 13 Jan 2020 at 12:50, Amit Kapila  wrote:
>
> On Sat, Jan 11, 2020 at 7:48 PM Masahiko Sawada
>  wrote:
> >
> > On Sat, 11 Jan 2020 at 13:18, Amit Kapila  wrote:
> > >
> > > On Sat, Jan 11, 2020 at 9:23 AM Masahiko Sawada
> > >  wrote:
> > > >
> > > > On Fri, 10 Jan 2020 at 20:54, Mahendra Singh Thalor 
> > > >  wrote:
> > > > >
> > > > > On Fri, 10 Jan 2020 at 15:51, Sergei Kornilov  wrote:
> > > > > >
> > > > > > Hi
> > > > > > Thank you for update! I looked again
> > > > > >
> > > > > > (vacuum_indexes_leader)
> > > > > > +   /* Skip the indexes that can be processed by 
> > > > > > parallel workers */
> > > > > > +   if (!skip_index)
> > > > > > +   continue;
> > > > > >
> > > > > > Isn't the variable name skip_index confusing here? Maybe rename it to
> > > > > > something like can_parallel?
> > > > >
> > > > > I also agree with your point.
> > > >
> > > > I don't think the change is a good idea.
> > > >
> > > > -   bool        skip_index = (get_indstats(lps->lvshared, i) == NULL ||
> > > > -                             skip_parallel_vacuum_index(Irel[i], lps->lvshared));
> > > > +   bool        can_parallel = (get_indstats(lps->lvshared, i) == NULL ||
> > > > +                               skip_parallel_vacuum_index(Irel[i],
> > > > +                                                          lps->lvshared));
> > > >
> > > > The above condition is true when the index can *not* do parallel index
> > > > vacuum. How about changing it to skipped_index and changing the comment
> > > > to something like "We are interested only in indexes skipped by parallel
> > > > vacuum"?
> > > >
> > >
> > > Hmm, I find the current code and comment better than what you or
> > > Sergei are proposing.  I am not sure what is the point of confusion in
> > > the current code?
> >
> > Yeah the current code is also good. I just thought they were concerned
> > that the variable name skip_index might be confusing because we skip
> > if skip_index is NOT true.
> >
>
> Okay, would it be better if we get rid of this variable and have code like below?
>
> /* Skip the indexes that can be processed by parallel workers */
> if ( !(get_indstats(lps->lvshared, i) == NULL ||
> skip_parallel_vacuum_index(Irel[i], lps->lvshared)))
> continue;

Make sense to me.

> ...
>
> > >
> > > > >
> > > > > >
> > > > > > Another question about behavior on temporary tables. Use case: the 
> > > > > > user commands just "vacuum;" to vacuum entire database (and has 
> > > > > > enough maintenance workers). Vacuum starts fine in parallel, but on 
> > > > > > first temporary table we hit:
> > > > > >
> > > > > > +   if (RelationUsesLocalBuffers(onerel) && params->nworkers >= 
> > > > > > 0)
> > > > > > +   {
> > > > > > +   ereport(WARNING,
> > > > > > +   (errmsg("disabling parallel option 
> > > > > > of vacuum on \"%s\" --- cannot vacuum temporary tables in parallel",
> > > > > > +   
> > > > > > RelationGetRelationName(onerel))));
> > > > > > +   params->nworkers = -1;
> > > > > > +   }
> > > > > >
> > > > > > And therefore we turn off the parallel vacuum for the remaining 
> > > > > > tables... Can we improve this case?
> > > > >
> > > > > Good point.
> > > > > Yes, we should improve this. I tried to fix this.
> > > >
> > > > +1
> > > >
> > >
> > > Yeah, we can improve the situation here.  I think we don't need to
> > > change the value of params->nworkers in the first place if we allow
> > > lazy_scan_heap to take care of this.  Also, I think we shouldn't
> > > display the warning unless the user has explicitly asked for the parallel
> > > option.  See the fix in the attached patch.
> >
> > Agreed. But with the updated patch the PARALLEL option without the
> > parallel degree doesn't display warning because params->nworkers = 0
> > in that case. So how about restoring params->nworkers at the end of
> > vacuum_rel()?
> >
>
> I had also thought on those lines, but I was not entirely sure about
> this resetting of workers.  Today, thinking about it again, the idea
> Mahendra is suggesting, i.e., giving an error if the
> parallel degree is not specified, seems reasonable to me.  This means
> Vacuum (parallel), Vacuum (parallel) , etc. will give an
> error "parallel degree must be specified".  This idea has merit as now
> we are supporting a parallel vacuum by default, so a 'parallel' option
> without a parallel degree doesn't have any meaning.  If we do that,
> then we don't need to do anything additional about the handling of
> temp tables (other than what patch is already doing) as well.  What do
> you think?
>

Good point! Agreed.

Regards,

--
Masahiko Sawada

Re: [HACKERS] Block level parallel vacuum

2020-01-13 Thread Mahendra Singh Thalor
On Fri, 10 Jan 2020 at 15:51, Sergei Kornilov  wrote:
>
> Hi
> Thank you for update! I looked again
>
> (vacuum_indexes_leader)
> +   /* Skip the indexes that can be processed by parallel workers 
> */
> +   if (!skip_index)
> +   continue;
>
> Isn't the variable name skip_index confusing here? Maybe rename it to
> something like can_parallel?
>

Again I looked into the code and thought that if we can add a
boolean flag (can_parallel) to the IndexBulkDeleteResult structure to
identify whether an index supports parallel vacuum or not, then it
will be easy to skip those indexes, and we will not call
skip_parallel_vacuum_index multiple times (from vacuum_indexes_leader and
parallel_vacuum_index).
We can have a linked list of the indexes that don't support parallel
vacuum, which we can pass directly to vacuum_indexes_leader.

For example, suppose we have 5 indexes on a table.  Before launching
parallel workers, we can add a boolean flag (can_parallel) to the
IndexBulkDeleteResult structure to identify whether an index
supports parallel vacuum or not.
Say indexes 1 and 4 don't support parallel vacuum, so we already have
info in a linked list (1->4) that they don't support parallel vacuum;
parallel_vacuum_index will process those indexes and the rest will be
processed by parallel workers. If a parallel worker finds that
can_parallel is false, then it will skip that index.

As per my understanding, if we implement this, then we can avoid
multiple calls of skip_parallel_vacuum_index, and if there
is no index that can't perform parallel vacuum, then we will not
call vacuum_indexes_leader at all, since the head of the list will point
to null (we can save an unnecessary call of vacuum_indexes_leader).

Thoughts?

--
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-13 Thread Amit Kapila
On Thu, Jan 9, 2020 at 4:03 PM Amit Kapila  wrote:
>
> On Thu, Jan 9, 2020 at 10:41 AM Masahiko Sawada
>  wrote:
> >
> > On Wed, 8 Jan 2020 at 22:16, Amit Kapila  wrote:
> > >
> > >
> > > What do you think of the attached?  Sawada-san, kindly verify the
> > > changes and let me know your opinion.
> >
> > I agreed to not include both the FAST option patch and
> > DISABLE_LEADER_PARTICIPATION patch at this stage. It's better to focus
> > on the main part and we can discuss and add them later if want.
> >
> > I've looked at the latest version patch you shared. Overall it looks
> > good and works fine. I have a few small comments:
> >
>
> I have addressed all your comments and slightly change nearby comments
> and ran pgindent.  I think we can commit the first two preparatory
> patches now unless you or someone else has any more comments on those.
>

I have pushed the first one (4e514c6) and I am planning to commit the
next one (API: v46-0001-Introduce-IndexAM-fields-for-parallel-vacuum)
patch on Wednesday.  We are still discussing a few things for the main
parallel vacuum patch
(v46-0002-Allow-vacuum-command-to-process-indexes-in-parallel) which
we should reach conclusion soon. In the attached, I have made a few
changes in the comments of patch
v46-0002-Allow-vacuum-command-to-process-indexes-in-parallel.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


v46-0001-Introduce-IndexAM-fields-for-parallel-vacuum.patch
Description: Binary data


v46-0002-Allow-vacuum-command-to-process-indexes-in-parallel.patch
Description: Binary data


Re: [HACKERS] Block level parallel vacuum

2020-01-13 Thread Sergei Kornilov
Hello

> I just thought they were concerned
> that the variable name skip_index might be confusing because we skip
> if skip_index is NOT true.

Right.

>>  > - bool skip_index = (get_indstats(lps->lvshared, i) == NULL ||
>>  > - skip_parallel_vacuum_index(Irel[i], lps->lvshared));
>>  > + bool can_parallel = (get_indstats(lps->lvshared, i) == NULL ||
>>  > + skip_parallel_vacuum_index(Irel[i],
>>  > + lps->lvshared));
>>  >
>>  > The above condition is true when the index can *not* do parallel index 
>> vacuum.

Ouch, right. I was wrong. (or the variable name and the comment really confused 
me)

> Okay, would it be better if we get rid of this variable and have code like below?
>
> /* Skip the indexes that can be processed by parallel workers */
> if ( !(get_indstats(lps->lvshared, i) == NULL ||
> skip_parallel_vacuum_index(Irel[i], lps->lvshared)))
> continue;

Complex condition... Not sure.

> How about changing it to skipped_index and change the comment to something 
> like “We are interested in only index skipped parallel vacuum”?

I prefer this idea.

> Today, again thinking about it, it seems
> the idea Mahendra is suggesting that is giving an error if the
> parallel degree is not specified seems reasonable to me.

+1

regards, Sergei




Re: [HACKERS] Block level parallel vacuum

2020-01-12 Thread Dilip Kumar
On Mon, Jan 13, 2020 at 9:20 AM Amit Kapila  wrote:
>
> On Sat, Jan 11, 2020 at 7:48 PM Masahiko Sawada
>  wrote:
> >
> > On Sat, 11 Jan 2020 at 13:18, Amit Kapila  wrote:
> > >
> > > On Sat, Jan 11, 2020 at 9:23 AM Masahiko Sawada
> > >  wrote:
> > > >
> > > > On Fri, 10 Jan 2020 at 20:54, Mahendra Singh Thalor 
> > > >  wrote:
> > > > >
> > > > > On Fri, 10 Jan 2020 at 15:51, Sergei Kornilov  wrote:
> > > > > >
> > > > > > Hi
> > > > > > Thank you for update! I looked again
> > > > > >
> > > > > > (vacuum_indexes_leader)
> > > > > > +   /* Skip the indexes that can be processed by 
> > > > > > parallel workers */
> > > > > > +   if (!skip_index)
> > > > > > +   continue;
> > > > > >
> > > > > > Isn't the variable name skip_index confusing here? Maybe rename it to
> > > > > > something like can_parallel?
> > > > >
> > > > > I also agree with your point.
> > > >
> > > > I don't think the change is a good idea.
> > > >
> > > > -   bool        skip_index = (get_indstats(lps->lvshared, i) == NULL ||
> > > > -                             skip_parallel_vacuum_index(Irel[i], lps->lvshared));
> > > > +   bool        can_parallel = (get_indstats(lps->lvshared, i) == NULL ||
> > > > +                               skip_parallel_vacuum_index(Irel[i],
> > > > +                                                          lps->lvshared));
> > > >
> > > > The above condition is true when the index can *not* do parallel index
> > > > vacuum. How about changing it to skipped_index and changing the comment
> > > > to something like "We are interested only in indexes skipped by parallel
> > > > vacuum"?
> > > >
> > >
> > > Hmm, I find the current code and comment better than what you or
> > > Sergei are proposing.  I am not sure what is the point of confusion in
> > > the current code?
> >
> > Yeah the current code is also good. I just thought they were concerned
> > that the variable name skip_index might be confusing because we skip
> > if skip_index is NOT true.
> >
>
> Okay, would it be better if we get rid of this variable and have code like below?
>
> /* Skip the indexes that can be processed by parallel workers */
> if ( !(get_indstats(lps->lvshared, i) == NULL ||
> skip_parallel_vacuum_index(Irel[i], lps->lvshared)))
> continue;
> ...
>
> > >
> > > > >
> > > > > >
> > > > > > Another question about behavior on temporary tables. Use case: the 
> > > > > > user commands just "vacuum;" to vacuum entire database (and has 
> > > > > > enough maintenance workers). Vacuum starts fine in parallel, but on 
> > > > > > first temporary table we hit:
> > > > > >
> > > > > > +   if (RelationUsesLocalBuffers(onerel) && params->nworkers >= 
> > > > > > 0)
> > > > > > +   {
> > > > > > +   ereport(WARNING,
> > > > > > +   (errmsg("disabling parallel option 
> > > > > > of vacuum on \"%s\" --- cannot vacuum temporary tables in parallel",
> > > > > > +   
> > > > > > RelationGetRelationName(onerel))));
> > > > > > +   params->nworkers = -1;
> > > > > > +   }
> > > > > >
> > > > > > And therefore we turn off the parallel vacuum for the remaining 
> > > > > > tables... Can we improve this case?
> > > > >
> > > > > Good point.
> > > > > Yes, we should improve this. I tried to fix this.
> > > >
> > > > +1
> > > >
> > >
> > > Yeah, we can improve the situation here.  I think we don't need to
> > > change the value of params->nworkers in the first place if we allow
> > > lazy_scan_heap to take care of this.  Also, I think we shouldn't
> > > display the warning unless the user has explicitly asked for the parallel
> > > option.  See the fix in the attached patch.
> >
> > Agreed. But with the updated patch the PARALLEL option without the
> > parallel degree doesn't display warning because params->nworkers = 0
> > in that case. So how about restoring params->nworkers at the end of
> > vacuum_rel()?
> >
>
> I had also thought on those lines, but I was not entirely sure about
> this resetting of workers.  Today, thinking about it again, the idea
> Mahendra is suggesting, i.e., giving an error if the
> parallel degree is not specified, seems reasonable to me.  This means
> Vacuum (parallel), Vacuum (parallel) , etc. will give an
> error "parallel degree must be specified".  This idea has merit as now
> we are supporting a parallel vacuum by default, so a 'parallel' option
> without a parallel degree doesn't have any meaning.  If we do that,
> then we don't need to do anything additional about the handling of
> temp tables (other than what patch is already doing) as well.  What do
> you think?
>
This idea makes sense to me.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-12 Thread Amit Kapila
On Sat, Jan 11, 2020 at 7:48 PM Masahiko Sawada
 wrote:
>
> On Sat, 11 Jan 2020 at 13:18, Amit Kapila  wrote:
> >
> > On Sat, Jan 11, 2020 at 9:23 AM Masahiko Sawada
> >  wrote:
> > >
> > > On Fri, 10 Jan 2020 at 20:54, Mahendra Singh Thalor  
> > > wrote:
> > > >
> > > > On Fri, 10 Jan 2020 at 15:51, Sergei Kornilov  wrote:
> > > > >
> > > > > Hi
> > > > > Thank you for update! I looked again
> > > > >
> > > > > (vacuum_indexes_leader)
> > > > > +   /* Skip the indexes that can be processed by parallel 
> > > > > workers */
> > > > > +   if (!skip_index)
> > > > > +   continue;
> > > > >
> > > > > Isn't the variable name skip_index confusing here? Maybe rename it to
> > > > > something like can_parallel?
> > > >
> > > > I also agree with your point.
> > >
> > > I don't think the change is a good idea.
> > >
> > > -   bool        skip_index = (get_indstats(lps->lvshared, i) == NULL ||
> > > -                             skip_parallel_vacuum_index(Irel[i], lps->lvshared));
> > > +   bool        can_parallel = (get_indstats(lps->lvshared, i) == NULL ||
> > > +                               skip_parallel_vacuum_index(Irel[i],
> > > +                                                          lps->lvshared));
> > >
> > > The above condition is true when the index can *not* do parallel index
> > > vacuum. How about changing it to skipped_index and changing the comment to
> > > something like "We are interested only in indexes skipped by parallel vacuum"?
> > >
> >
> > Hmm, I find the current code and comment better than what you or
> > Sergei are proposing.  I am not sure what is the point of confusion in
> > the current code?
>
> Yeah the current code is also good. I just thought they were concerned
> that the variable name skip_index might be confusing because we skip
> if skip_index is NOT true.
>

Okay, would it be better if we get rid of this variable and have code like below?

/* Skip the indexes that can be processed by parallel workers */
if ( !(get_indstats(lps->lvshared, i) == NULL ||
skip_parallel_vacuum_index(Irel[i], lps->lvshared)))
continue;
...

> >
> > > >
> > > > >
> > > > > Another question about behavior on temporary tables. Use case: the 
> > > > > user commands just "vacuum;" to vacuum entire database (and has 
> > > > > enough maintenance workers). Vacuum starts fine in parallel, but on 
> > > > > first temporary table we hit:
> > > > >
> > > > > +   if (RelationUsesLocalBuffers(onerel) && params->nworkers >= 0)
> > > > > +   {
> > > > > +   ereport(WARNING,
> > > > > +   (errmsg("disabling parallel option of 
> > > > > vacuum on \"%s\" --- cannot vacuum temporary tables in parallel",
> > > > > +   
> > > > > RelationGetRelationName(onerel))));
> > > > > +   params->nworkers = -1;
> > > > > +   }
> > > > >
> > > > > And therefore we turn off the parallel vacuum for the remaining 
> > > > > tables... Can we improve this case?
> > > >
> > > > Good point.
> > > > Yes, we should improve this. I tried to fix this.
> > >
> > > +1
> > >
> >
> > Yeah, we can improve the situation here.  I think we don't need to
> > change the value of params->nworkers in the first place if we allow
> > lazy_scan_heap to take care of this.  Also, I think we shouldn't
> > display the warning unless the user has explicitly asked for the parallel
> > option.  See the fix in the attached patch.
>
> Agreed. But with the updated patch the PARALLEL option without the
> parallel degree doesn't display warning because params->nworkers = 0
> in that case. So how about restoring params->nworkers at the end of
> vacuum_rel()?
>

I had also thought on those lines, but I was not entirely sure about
this resetting of workers.  Today, thinking about it again, the idea
Mahendra is suggesting, i.e., giving an error if the
parallel degree is not specified, seems reasonable to me.  This means
Vacuum (parallel), Vacuum (parallel) , etc. will give an
error "parallel degree must be specified".  This idea has merit as now
we are supporting a parallel vacuum by default, so a 'parallel' option
without a parallel degree doesn't have any meaning.  If we do that,
then we don't need to do anything additional about the handling of
temp tables (other than what patch is already doing) as well.  What do
you think?
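
To illustrate, under that rule (tbl is a placeholder table name):

   Vacuum (parallel) tbl;    -- ERROR: parallel degree must be specified
   Vacuum (parallel 2) tbl;  -- OK, uses up to 2 parallel workers
   Vacuum tbl;               -- OK, parallel vacuum by default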



-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-11 Thread Mahendra Singh Thalor
On Sat, 11 Jan 2020 at 19:48, Masahiko Sawada <masahiko.saw...@2ndquadrant.com> wrote:
>
> On Sat, 11 Jan 2020 at 13:18, Amit Kapila  wrote:
> >
> > On Sat, Jan 11, 2020 at 9:23 AM Masahiko Sawada
> >  wrote:
> > >
> > > On Fri, 10 Jan 2020 at 20:54, Mahendra Singh Thalor <mahi6...@gmail.com> wrote:
> > > >
> > > > On Fri, 10 Jan 2020 at 15:51, Sergei Kornilov  wrote:
> > > > >
> > > > > Hi
> > > > > Thank you for update! I looked again
> > > > >
> > > > > (vacuum_indexes_leader)
> > > > > +   /* Skip the indexes that can be processed by
parallel workers */
> > > > > +   if (!skip_index)
> > > > > +   continue;
> > > > >
> > > > > Isn't the variable name skip_index confusing here? Maybe rename it to
> > > > > something like can_parallel?
> > > >
> > > > I also agree with your point.
> > >
> > > I don't think the change is a good idea.
> > >
> > > -   bool        skip_index = (get_indstats(lps->lvshared, i) == NULL ||
> > > -                             skip_parallel_vacuum_index(Irel[i], lps->lvshared));
> > > +   bool        can_parallel = (get_indstats(lps->lvshared, i) == NULL ||
> > > +                               skip_parallel_vacuum_index(Irel[i],
> > > +                                                          lps->lvshared));
> > >
> > > The above condition is true when the index can *not* do parallel
> > > index vacuum. How about changing it to skipped_index and changing the comment
> > > to something like "We are interested only in indexes skipped by parallel vacuum"?
> > >
> >
> > Hmm, I find the current code and comment better than what you or
> > Sergei are proposing.  I am not sure what is the point of confusion in
> > the current code?
>
> Yeah the current code is also good. I just thought they were concerned
> that the variable name skip_index might be confusing because we skip
> if skip_index is NOT true.
>
> >
> > > >
> > > > >
> > > > > Another question about behavior on temporary tables. Use case:
the user commands just "vacuum;" to vacuum entire database (and has enough
maintenance workers). Vacuum starts fine in parallel, but on first
temporary table we hit:
> > > > >
> > > > > +   if (RelationUsesLocalBuffers(onerel) && params->nworkers
>= 0)
> > > > > +   {
> > > > > +   ereport(WARNING,
> > > > > +   (errmsg("disabling parallel
option of vacuum on \"%s\" --- cannot vacuum temporary tables in parallel",
> > > > > +
> > > > > RelationGetRelationName(onerel))));
> > > > > +   params->nworkers = -1;
> > > > > +   }
> > > > >
> > > > > And therefore we turn off the parallel vacuum for the remaining
tables... Can we improve this case?
> > > >
> > > > Good point.
> > > > Yes, we should improve this. I tried to fix this.
> > >
> > > +1
> > >
> >
> > Yeah, we can improve the situation here.  I think we don't need to
> > change the value of params->nworkers in the first place if we allow
> > lazy_scan_heap to take care of this.  Also, I think we shouldn't
> > display the warning unless the user has explicitly asked for the parallel
> > option.  See the fix in the attached patch.
>
> Agreed. But with the updated patch the PARALLEL option without the
> parallel degree doesn't display warning because params->nworkers = 0
> in that case. So how about restoring params->nworkers at the end of
> vacuum_rel()?
>
> +   /*
> +    * Give warning only if the user explicitly tries to perform a
> +    * parallel vacuum on the temporary table.
> +    */
> +   if (params->nworkers > 0)
> +       ereport(WARNING,
> +               (errmsg("disabling parallel option of vacuum on \"%s\" --- cannot vacuum temporary tables in parallel",
> +                       RelationGetRelationName(onerel))));

Hi,
I have some doubts about the warning for temporary tables. Below are some
examples.

Suppose we have one temporary table named "temp_table".
Case 1:
vacuum;
I think, in this case, we should not give any warning for the temp table. We
should do a parallel vacuum (considering zero as the parallel degree) for all
tables except temporary tables.

Case 2:
vacuum (parallel);

Case 3:
vacuum(parallel 5);

Case 4:
vacuum(parallel) temp_table;

Case 5:
vacuum(parallel 4) temp_table;

I think, for cases 2 and 4, as per the new design, we should give an error
(ERROR: parallel degree should be specified between 0 and 1024) because, by
default, parallel vacuum is ON, so if the user gives the parallel option
without a degree, then we can give an error.
If we can give an error for cases 2 and 4, then we can give a warning for
cases 3 and 5.

Thoughts?
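
To spell out the behavior I am proposing for the above cases (illustrative
only, not committed behavior):

   vacuum;                          -- case 1: no warning, temp table vacuumed serially
   vacuum (parallel);               -- case 2: ERROR, no parallel degree specified
   vacuum(parallel 5);              -- case 3: WARNING for temp_table, vacuumed serially
   vacuum(parallel) temp_table;     -- case 4: ERROR, no parallel degree specified
   vacuum(parallel 4) temp_table;   -- case 5: WARNING, vacuumed serially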

-- 
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Block level parallel vacuum

2020-01-11 Thread Masahiko Sawada
On Sat, 11 Jan 2020 at 13:18, Amit Kapila  wrote:
>
> On Sat, Jan 11, 2020 at 9:23 AM Masahiko Sawada
>  wrote:
> >
> > On Fri, 10 Jan 2020 at 20:54, Mahendra Singh Thalor  
> > wrote:
> > >
> > > On Fri, 10 Jan 2020 at 15:51, Sergei Kornilov  wrote:
> > > >
> > > > Hi
> > > > Thank you for update! I looked again
> > > >
> > > > (vacuum_indexes_leader)
> > > > +   /* Skip the indexes that can be processed by parallel 
> > > > workers */
> > > > +   if (!skip_index)
> > > > +   continue;
> > > >
> > > > Isn't the variable name skip_index confusing here? Maybe rename it to
> > > > something like can_parallel?
> > >
> > > I also agree with your point.
> >
> > I don't think the change is a good idea.
> >
> > -   bool        skip_index = (get_indstats(lps->lvshared, i) == NULL ||
> > -                             skip_parallel_vacuum_index(Irel[i], lps->lvshared));
> > +   bool        can_parallel = (get_indstats(lps->lvshared, i) == NULL ||
> > +                               skip_parallel_vacuum_index(Irel[i],
> > +                                                          lps->lvshared));
> >
> > The above condition is true when the index can *not* do parallel index
> > vacuum. How about changing it to skipped_index and changing the comment to
> > something like "We are interested only in indexes skipped by parallel vacuum"?
> >
>
> Hmm, I find the current code and comment better than what you or
> Sergei are proposing.  I am not sure what is the point of confusion in
> the current code?

Yeah the current code is also good. I just thought they were concerned
that the variable name skip_index might be confusing because we skip
if skip_index is NOT true.

>
> > >
> > > >
> > > > Another question about behavior on temporary tables. Use case: the user 
> > > > commands just "vacuum;" to vacuum entire database (and has enough 
> > > > maintenance workers). Vacuum starts fine in parallel, but on first 
> > > > temporary table we hit:
> > > >
> > > > +   if (RelationUsesLocalBuffers(onerel) && params->nworkers >= 0)
> > > > +   {
> > > > +   ereport(WARNING,
> > > > +   (errmsg("disabling parallel option of 
> > > > vacuum on \"%s\" --- cannot vacuum temporary tables in parallel",
> > > > +   
> > > > RelationGetRelationName(onerel))));
> > > > +   params->nworkers = -1;
> > > > +   }
> > > >
> > > > And therefore we turn off the parallel vacuum for the remaining 
> > > > tables... Can we improve this case?
> > >
> > > Good point.
> > > Yes, we should improve this. I tried to fix this.
> >
> > +1
> >
>
> Yeah, we can improve the situation here.  I think we don't need to
> change the value of params->nworkers in the first place if we allow
> lazy_scan_heap to take care of this.  Also, I think we shouldn't
> display the warning unless the user has explicitly asked for the parallel
> option.  See the fix in the attached patch.

Agreed. But with the updated patch the PARALLEL option without the
parallel degree doesn't display warning because params->nworkers = 0
in that case. So how about restoring params->nworkers at the end of
vacuum_rel()?

+   /*
+    * Give warning only if the user explicitly tries to perform a
+    * parallel vacuum on the temporary table.
+    */
+   if (params->nworkers > 0)
+       ereport(WARNING,
+               (errmsg("disabling parallel option of vacuum on \"%s\" --- cannot vacuum temporary tables in parallel",
+                       RelationGetRelationName(onerel))));

Regards,

-- 
Masahiko Sawada            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Block level parallel vacuum

2020-01-10 Thread Masahiko Sawada
On Fri, 10 Jan 2020 at 20:54, Mahendra Singh Thalor 
wrote:
>
> On Fri, 10 Jan 2020 at 15:51, Sergei Kornilov  wrote:
> >
> > Hi
> > Thank you for update! I looked again
> >
> > (vacuum_indexes_leader)
> > +   /* Skip the indexes that can be processed by parallel
workers */
> > +   if (!skip_index)
> > +   continue;
> >
> > Isn't the variable name skip_index confusing here? Maybe rename it to
> > something like can_parallel?
>
> I also agree with your point.

I don't think the change is a good idea.

-   bool        skip_index = (get_indstats(lps->lvshared, i) == NULL ||
-                             skip_parallel_vacuum_index(Irel[i], lps->lvshared));
+   bool        can_parallel = (get_indstats(lps->lvshared, i) == NULL ||
+                               skip_parallel_vacuum_index(Irel[i],
+                                                          lps->lvshared));

The above condition is true when the index can *not* do parallel index
vacuum. How about changing it to skipped_index and changing the comment to
something like “We are interested only in indexes skipped by parallel vacuum”?
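
To make the suggestion concrete, the loop would then read roughly (a sketch):

   bool    skipped_index = (get_indstats(lps->lvshared, i) == NULL ||
                            skip_parallel_vacuum_index(Irel[i], lps->lvshared));

   /* We are interested only in indexes skipped by parallel vacuum */
   if (!skipped_index)
       continue;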

>
> >
> > Another question about behavior on temporary tables. Use case: the user
commands just "vacuum;" to vacuum entire database (and has enough
maintenance workers). Vacuum starts fine in parallel, but on first
temporary table we hit:
> >
> > +   if (RelationUsesLocalBuffers(onerel) && params->nworkers >= 0)
> > +   {
> > +   ereport(WARNING,
> > +   (errmsg("disabling parallel option of
vacuum on \"%s\" --- cannot vacuum temporary tables in parallel",
> > +
> > RelationGetRelationName(onerel))));
> > +   params->nworkers = -1;
> > +   }
> >
> > And therefore we turn off the parallel vacuum for the remaining
> > tables... Can we improve this case?
>
> Good point.
> Yes, we should improve this. I tried to fix this.

+1

Regards,


-- 
Masahiko Sawada            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [HACKERS] Block level parallel vacuum

2020-01-10 Thread Sergei Kornilov
Hello

> Yes, we should improve this. I tried to fix this. Attaching a delta
> patch that fixes both of the comments.

Thank you, I have no objections.

I think that the status of the CF entry is outdated and the most appropriate
status for this patch is "Ready for Committer". Changed. I also added an
annotation with a link to the recently summarized results.

regards, Sergei




Re: [HACKERS] Block level parallel vacuum

2020-01-10 Thread Mahendra Singh Thalor
On Fri, 10 Jan 2020 at 15:51, Sergei Kornilov  wrote:
>
> Hi
> Thank you for update! I looked again
>
> (vacuum_indexes_leader)
> +   /* Skip the indexes that can be processed by parallel workers 
> */
> +   if (!skip_index)
> +   continue;
>
> Isn't the variable name skip_index confusing here? Maybe rename it to
> something like can_parallel?

I also agree with your point.

>
> Another question about behavior on temporary tables. Use case: the user 
> commands just "vacuum;" to vacuum entire database (and has enough maintenance 
> workers). Vacuum starts fine in parallel, but on first temporary table we hit:
>
> +   if (RelationUsesLocalBuffers(onerel) && params->nworkers >= 0)
> +   {
> +   ereport(WARNING,
> +   (errmsg("disabling parallel option of vacuum 
> on \"%s\" --- cannot vacuum temporary tables in parallel",
> +   
> RelationGetRelationName(onerel))));
> +   params->nworkers = -1;
> +   }
>
> And therefore we turn off the parallel vacuum for the remaining tables... Can 
> we improve this case?

Good point.
Yes, we should improve this. I tried to fix this.  Attaching a delta
patch that fixes both of the comments.

-- 
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com


v44-0002-delta_Allow-vacuum-command-to-process-indexes-in-parallel.patch
Description: Binary data


Re: [HACKERS] Block level parallel vacuum

2020-01-10 Thread Sergei Kornilov
Hi
Thank you for update! I looked again

(vacuum_indexes_leader)
+   /* Skip the indexes that can be processed by parallel workers */
+   if (!skip_index)
+   continue;

Isn't the variable name skip_index confusing here? Maybe rename it to
something like can_parallel?

Another question about behavior on temporary tables. Use case: the user
issues just "vacuum;" to vacuum the entire database (and has enough maintenance
workers). Vacuum starts fine in parallel, but on the first temporary table we hit:

+   if (RelationUsesLocalBuffers(onerel) && params->nworkers >= 0)
+   {
+   ereport(WARNING,
+   (errmsg("disabling parallel option of vacuum on 
\"%s\" --- cannot vacuum temporary tables in parallel",
+   
RelationGetRelationName(onerel))));
+   params->nworkers = -1;
+   }

And therefore we turn off the parallel vacuum for the remaining tables... Can 
we improve this case?

regards, Sergei




Re: [HACKERS] Block level parallel vacuum

2020-01-09 Thread Masahiko Sawada
On Thu, 9 Jan 2020 at 19:33, Amit Kapila  wrote:
>
> On Thu, Jan 9, 2020 at 10:41 AM Masahiko Sawada
>  wrote:
> >
> > On Wed, 8 Jan 2020 at 22:16, Amit Kapila  wrote:
> > >
> > >
> > > What do you think of the attached?  Sawada-san, kindly verify the
> > > changes and let me know your opinion.
> >
> > I agreed to not include both the FAST option patch and
> > DISABLE_LEADER_PARTICIPATION patch at this stage. It's better to focus
> > on the main part and we can discuss and add them later if want.
> >
> > I've looked at the latest version patch you shared. Overall it looks
> > good and works fine. I have a few small comments:
> >
>
> I have addressed all your comments and slightly change nearby comments
> and ran pgindent.  I think we can commit the first two preparatory
> patches now unless you or someone else has any more comments on those.

Yes.

I'd like to briefly summarize
v43-0002-Allow-vacuum-command-to-process-indexes-in-parallel for other
reviewers who want to start reviewing this patch:

It introduces the PARALLEL option to the VACUUM command. Parallel vacuum is
enabled by default. The number of parallel workers is determined based
on the number of indexes that support parallel index vacuum when the user
doesn't specify the parallel degree or the PARALLEL option is omitted.
Specifying PARALLEL 0 disables parallel vacuum.
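
For example (illustrative commands; t stands for any table with two or
more indexes):

   VACUUM (PARALLEL 4) t;  -- use up to 4 parallel workers for the indexes
   VACUUM (PARALLEL 0) t;  -- disable parallel vacuum
   VACUUM t;               -- parallel by default, degree based on the indexes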

In the parallel vacuum of this patch, only the leader process does the heap
scan and collects dead tuple TIDs in the DSM segment. Before starting
index vacuum or index cleanup, the leader launches the parallel workers
and performs the work together with them. Each individual index is
processed by one vacuum worker process. Therefore parallel vacuum can
be used when the table has at least 2 indexes (the leader always takes
one index). After launching the parallel workers, the leader process
first vacuums the indexes that don't support parallel index
vacuum. The parallel workers process the indexes that support
parallel index vacuum, and the leader process joins as a worker after
completing its indexes. Once all indexes are processed, the parallel
worker processes exit.  After that, the leader process re-initializes
the parallel context so that it can use the same DSM for multiple
passes of index vacuum and for performing index cleanup.  For updating
the index statistics, we need to update the system table, and since
updates are not allowed during parallel mode, we update the index
statistics after exiting from the parallel mode.
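
In short, the flow from the leader's perspective is:

   1. Scan the heap and collect dead tuple TIDs into the DSM segment.
   2. Launch the parallel workers.
   3. Vacuum the indexes that don't support parallel index vacuum.
   4. Join the workers in processing the parallel-safe indexes (one index
      per process) and wait for all workers to finish.
   5. Re-initialize the parallel context and repeat steps 2-4 for further
      index vacuum passes and for index cleanup.
   6. After exiting parallel mode, update the index statistics in the
      system table.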

When the vacuum cost-based delay is enabled, even parallel vacuum is
throttled. The basic idea of a cost-based vacuum delay for parallel
index vacuuming is to allow all parallel vacuum workers including the
leader process to have a shared view of cost related parameters
(mainly VacuumCostBalance). We allow each worker to update it as and
when it has incurred any cost and then based on that decide whether it
needs to sleep.  We allow the worker to sleep proportional to the work
done and reduce the VacuumSharedCostBalance by the amount which is
consumed by the current worker (VacuumCostBalanceLocal).  This can
avoid letting the workers sleep who have done less or no I/O as
compared to other workers and therefore can ensure that workers who
are doing more I/O got throttled more.
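
To sketch that idea in code (variable names follow the patch; this is an
outline, not the exact committed function):

   static double
   compute_parallel_delay_sketch(void)
   {
       double      msec = 0;
       uint32      shared_balance;
       int         nworkers;

       /* Only meaningful while parallel vacuum is active */
       Assert(VacuumSharedCostBalance);

       nworkers = pg_atomic_read_u32(VacuumActiveNWorkers);

       /* Publish the cost this process has incurred since the last check */
       shared_balance = pg_atomic_add_fetch_u32(VacuumSharedCostBalance,
                                                VacuumCostBalance);
       VacuumCostBalanceLocal += VacuumCostBalance;
       VacuumCostBalance = 0;

       /*
        * Sleep only if the shared balance has exceeded the limit and this
        * process has done at least a fair share of the work, so workers
        * that have done little or no I/O are not throttled.
        */
       if (shared_balance >= (uint32) VacuumCostLimit &&
           VacuumCostBalanceLocal > 0.5 * ((double) VacuumCostLimit / nworkers))
       {
           /* Sleep proportionally to this process's own consumption */
           msec = VacuumCostDelay * VacuumCostBalanceLocal / VacuumCostLimit;
           pg_atomic_sub_fetch_u32(VacuumSharedCostBalance,
                                   VacuumCostBalanceLocal);
           VacuumCostBalanceLocal = 0;
       }

       return msec;
   }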

Regards,

--
Masahiko Sawada            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Block level parallel vacuum

2020-01-09 Thread Mahendra Singh Thalor
On Thu, 9 Jan 2020 at 17:31, Sergei Kornilov  wrote:
>
> Hello
>
> I noticed that parallel vacuum uses the min_parallel_index_scan_size GUC to
> skip small indexes, but this is not mentioned in the documentation for either
> the vacuum command or the GUC itself.
>
> +   /* Determine the number of parallel workers to launch */
> +   if (lps->lvshared->for_cleanup)
> +   {
> +   if (lps->lvshared->first_time)
> +   nworkers = lps->nindexes_parallel_cleanup +
> +   lps->nindexes_parallel_condcleanup - 1;
> +   else
> +   nworkers = lps->nindexes_parallel_cleanup - 1;
> +
> +   }
> +   else
> +   nworkers = lps->nindexes_parallel_bulkdel - 1;

The v43-0001-Introduce-IndexAM-fields-for-parallel-vacuum patch looks
fine to me.

Below are some review comments for the v43-0002 patch.

1.
+integer
+
+ 
+  Specifies a positive integer value passed to the selected option.
+  The integer value can
+  also be omitted, in which case the value is decided by the command
+  based on the option used.
+ 

-- 
Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-09 Thread Sergei Kornilov
Hello

I noticed that parallel vacuum uses the min_parallel_index_scan_size GUC to
skip small indexes, but this is not mentioned in the documentation for either
the vacuum command or the GUC itself.

+   /* Determine the number of parallel workers to launch */
+   if (lps->lvshared->for_cleanup)
+   {
+   if (lps->lvshared->first_time)
+   nworkers = lps->nindexes_parallel_cleanup +
+   lps->nindexes_parallel_condcleanup - 1;
+   else
+   nworkers = lps->nindexes_parallel_cleanup - 1;
+
+   }
+   else
+   nworkers = lps->nindexes_parallel_bulkdel - 1;

(lazy_parallel_vacuum_indexes)
Perhaps we need to add a comment for future readers explaining why we reduce
the number of workers by 1. Maybe this would be cleaner?

+   /* Determine the number of parallel workers to launch */
+   if (lps->lvshared->for_cleanup)
+   {
+   if (lps->lvshared->first_time)
+   nworkers = lps->nindexes_parallel_cleanup +
+   lps->nindexes_parallel_condcleanup;
+   else
+   nworkers = lps->nindexes_parallel_cleanup;
+
+   }
+   else
+   nworkers = lps->nindexes_parallel_bulkdel;
+
+   /* The leader process will participate */
+   nworkers--;

I have no more comments after reading the patches.

regards, Sergei




Re: [HACKERS] Block level parallel vacuum

2020-01-08 Thread Masahiko Sawada
On Wed, 8 Jan 2020 at 22:16, Amit Kapila  wrote:
>
> On Sat, Jan 4, 2020 at 6:48 PM Mahendra Singh Thalor  
> wrote:
> >
> > Hi All,
> >
> > In the other thread "parallel vacuum options/syntax" [1], Amit Kapila asked
> > for opinions about the syntax for making a normal vacuum parallel.  From that
> > thread, I can see that people are in favor of implementing option (b).  So I
> > tried to implement option (b) on top of the v41 patch set and wrote a
> > delta patch.
> >
>
> I looked at your code and changed it slightly to allow the vacuum to
> be performed in parallel by default.  Apart from that, I have made a
> few other modifications (a) changed the macro SizeOfLVDeadTuples as
> preferred by Tomas [1], (b) updated the documentation, (c) changed a
> few comments.

Thanks.

>
> The first two patches are the same.  I have not posted the patch
> related to the FAST option as I am not sure we have a consensus for
> that and I have also intentionally left DISABLE_LEADER_PARTICIPATION
> related patch to avoid confusion.
>
> What do you think of the attached?  Sawada-san, kindly verify the
> changes and let me know your opinion.

I agreed to not include both the FAST option patch and
DISABLE_LEADER_PARTICIPATION patch at this stage. It's better to focus
on the main part and we can discuss and add them later if want.

I've looked at the latest version patch you shared. Overall it looks
good and works fine. I have a few small comments:

1.
+  refer to <xref linkend="guc-max-parallel-maintenance-workers"/>).  If the
+  <literal>PARALLEL</literal>option or parallel degree

A space is needed between <literal>PARALLEL</literal> and 'option'.

2.
+   /*
+* Variables to control parallel index vacuuming.  We have a bitmap to
+* indicate which index has stats in shared memory.  The set bit in the
+* map indicates that the particular index supports a parallel vacuum.
+*/
+   pg_atomic_uint32 idx;   /* counter for vacuuming and clean up */
+   pg_atomic_uint32 nprocessed;    /* # of indexes done during parallel
+                                    * execution */
+   uint32  offset; /* sizeof header incl. bitmap */
+   bits8   bitmap[FLEXIBLE_ARRAY_MEMBER];  /* bit map of NULLs */
+
+   /* Shared index statistics data follows at end of struct */
+} LVShared;

It seems to me that we no longer use nprocessed at all. So we can remove it.

3.
+ * Compute the number of parallel worker processes to request.  Both index
+ * vacuuming and index cleanup can be executed with parallel workers.  The
+ * relation sizes of table don't affect to the parallel degree for now.

I think the last sentence should be "The relation size of table
doesn't affect to the parallel degree for now".

4.
+   /* cap by max_parallel_maintenance_workers */
+   parallel_workers = Min(parallel_workers,
max_parallel_maintenance_workers);

+   /*
+* a parallel vacuum must be requested and there must be indexes on the
+* relation
+*/

+   /* copy the updated statistics */

+   /* parallel vacuum must be active */
+   Assert(VacuumSharedCostBalance);

All comments that the patches newly added, except for the above four
places, start with a capital letter. Maybe we can change them for
consistency.

Regards,

-- 
Masahiko Sawada            http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Block level parallel vacuum

2020-01-03 Thread Amit Kapila
On Fri, Jan 3, 2020 at 10:15 PM Robert Haas  wrote:
>
> On Sun, Dec 29, 2019 at 4:23 PM Tomas Vondra
>  wrote:
> > IMO there's not much reason for the leader not to participate. For
> > regular queries the leader may be doing useful stuff (essentially
> > running the non-parallel part of the query) but AFAIK for VACUUM that's
> > not the case and the worker is not doing anything.
>
> I agree, and said the same thing in
> http://postgr.es/m/CA+Tgmob7JLrngeHz6i60_TqdvE1YBcvGYVoEQ6_xvP=vn7d...@mail.gmail.com
>
> I really don't know why we have that code.
>

We have removed that code from the main patch.  It is in a separate
patch and used mainly for development testing where we want to
debug/test the worker code.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-03 Thread Robert Haas
On Sun, Dec 29, 2019 at 4:23 PM Tomas Vondra
 wrote:
> IMO there's not much reason for the leader not to participate. For
> regular queries the leader may be doing useful stuff (essentially
> running the non-parallel part of the query) but AFAIK for VACUUM that's
> not the case and the worker is not doing anything.

I agree, and said the same thing in
http://postgr.es/m/CA+Tgmob7JLrngeHz6i60_TqdvE1YBcvGYVoEQ6_xvP=vn7d...@mail.gmail.com

I really don't know why we have that code.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Block level parallel vacuum

2020-01-02 Thread Dilip Kumar
On Thu, Jan 2, 2020 at 9:03 AM Amit Kapila  wrote:
>
> On Thu, Jan 2, 2020 at 8:29 AM Masahiko Sawada
>  wrote:
> >
> > On Tue, 31 Dec 2019 at 12:39, Amit Kapila  wrote:
> > >
> > > On Mon, Dec 30, 2019 at 6:46 PM Tomas Vondra
> > >  wrote:
> > > >
> > > > On Mon, Dec 30, 2019 at 08:25:28AM +0530, Amit Kapila wrote:
> > > > >On Mon, Dec 30, 2019 at 2:53 AM Tomas Vondra
> > > > > wrote:
> > > > >> I think there's another question we need to ask - why do we 
> > > > >> introduce a
> > > > >> bitmask, instead of using regular boolean struct members? Until now, 
> > > > >> the
> > > > >> IndexAmRoutine struct had simple boolean members describing 
> > > > >> capabilities
> > > > >> of the AM implementation. Why shouldn't this patch do the same thing,
> > > > >> i.e. add one boolean flag for each AM feature?
> > > > >>
> > > > >
> > > > >This structure member describes mostly one property of index which is
> > > > >about a parallel vacuum which I am not sure is true for other members.
> > > > >Now, we can use separate bool variables for it which we were initially
> > > > >using in the patch but that seems to be taking more space in a
> > > > >structure without any advantage.  Also, using one variable makes the
> > > > >code a bit better because otherwise, in many places we need to check and
> > > > >set four variables instead of one.  This is also the reason we used
> > > > >parallel in its name (we also use *parallel* for parallel index scan
> > > > >related things).  Having said that, we can remove parallel from its
> > > > >name if we want to extend/use it for something other than a parallel
> > > > >vacuum.  I think we might need to add a flag or two for parallelizing
> > > > >heap scan of vacuum when we enhance this feature, so keeping it for
> > > > >just a parallel vacuum is not completely insane.
> > > > >
> > > > >I think keeping amusemaintenanceworkmem separate from this variable
> > > > >seems to me like a better idea as it doesn't describe whether IndexAM
> > > > >can participate in a parallel vacuum or not.  You can see more
> > > > >discussion about that variable in the thread [1].
> > > > >
> > > >
> > > > I don't know, but IMHO it's somewhat easier to work with separate flags.
> > > > Bitmasks make sense when space usage matters a lot, e.g. for on-disk
> > > > representation, but that doesn't seem to be the case here I think (if it
> > > > was, we'd probably use bitmasks already).
> > > >
> > > > It seems like we're mixing two ways to design the struct unnecessarily,
> > > > but I'm not going to nag about this any further.
> > > >
> > >
> > > Fair enough.  I see your point and, as mentioned earlier, we
> > > started with the approach of separate booleans, but later found that
> > > this is a better way as it was easier to set and check the different
> > > parallel options for a parallel vacuum.   I think we can go back to
> > > the individual booleans if we want but I am not sure if that is a
> > > better approach for this usage.  Sawada-San, others, do you have any
> > > opinion here?
> >
> > If we go back to the individual booleans we would end up with having
> > three booleans: bulkdelete, cleanup and conditional cleanup. I think
> > making the bulkdelete option a boolean makes sense but having two
> > booleans for cleanup and conditional cleanup might be slightly odd
> > because these options are exclusive.
> >
>
> If we have only three booleans, then we need to check for all three to
> conclude that a parallel vacuum is not enabled for any index.
> Alternatively, we can have a fourth boolean to indicate that a
> parallel vacuum is not enabled.  And in the future, when we allow
> supporting multiple workers for an index, we might need another
> variable unless we can allow it for all types of indexes.  This was my
> point that having multiple variables for the purpose of a parallel
> vacuum (for indexes) doesn't sound like a better approach than having
> a single uint8 variable.
>
> > If we don't stick to having only
> > booleans, then having a ternary value for cleanup might be
> > understandable, but I'm not sure it's better to have it only for the
> > vacuum purpose.
> >
>
> If we want to keep the possibility of extending it for other purposes,
> then we can probably rename it to amoptions or something like that.
> What do you think?
I think it makes more sense to just keep it for the purpose of
enabling/disabling parallelism in different phases.  I am not sure
that adding more options (which are not related to enabling
parallelism in vacuum phases) to the same variable will make sense.
So I think the current name is good for its purpose.
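
For illustration, a minimal sketch of the kind of test this single
variable enables (the VACUUM_OPTION_* names and the conditional-cleanup
rule below are assumptions modeled on the patch, not its exact code):

#include <stdbool.h>
#include <stdint.h>

/* Assumed flag names/values, modeled on the patch. */
#define VACUUM_OPTION_PARALLEL_BULKDEL      (1 << 0)
#define VACUUM_OPTION_PARALLEL_COND_CLEANUP (1 << 1)
#define VACUUM_OPTION_PARALLEL_CLEANUP      (1 << 2)

/*
 * One mask test per vacuum phase instead of consulting several booleans.
 * "bulkdel_done" says whether a bulkdelete already ran on this index,
 * which matters only for the conditional-cleanup case.
 */
static bool
index_can_join_phase(uint8_t options, bool bulkdel_phase, bool bulkdel_done)
{
    if (bulkdel_phase)
        return (options & VACUUM_OPTION_PARALLEL_BULKDEL) != 0;

    if ((options & VACUUM_OPTION_PARALLEL_CLEANUP) != 0)
        return true;

    /* conditional cleanup: parallel only when no bulkdelete was done */
    return !bulkdel_done &&
        (options & VACUUM_OPTION_PARALLEL_COND_CLEANUP) != 0;
}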

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-02 Thread Dilip Kumar
On Tue, Dec 31, 2019 at 9:09 AM Amit Kapila  wrote:
>
> On Mon, Dec 30, 2019 at 6:46 PM Tomas Vondra
>  wrote:
> >
> > On Mon, Dec 30, 2019 at 08:25:28AM +0530, Amit Kapila wrote:
> > >On Mon, Dec 30, 2019 at 2:53 AM Tomas Vondra
> > > wrote:
> > >> I think there's another question we need to ask - why do we introduce a
> > >> bitmask, instead of using regular boolean struct members? Until now, the
> > >> IndexAmRoutine struct had simple boolean members describing capabilities
> > >> of the AM implementation. Why shouldn't this patch do the same thing,
> > >> i.e. add one boolean flag for each AM feature?
> > >>
> > >
> > >This structure member describes mostly one property of index which is
> > >about a parallel vacuum which I am not sure is true for other members.
> > >Now, we can use separate bool variables for it which we were initially
> > >using in the patch but that seems to be taking more space in a
> > >structure without any advantage.  Also, using one variable makes the
> > >code a bit better because otherwise, in many places we need to check and
> > >set four variables instead of one.  This is also the reason we used
> > >parallel in its name (we also use *parallel* for parallel index scan
> > >related things).  Having said that, we can remove parallel from its
> > >name if we want to extend/use it for something other than a parallel
> > >vacuum.  I think we might need to add a flag or two for parallelizing
> > >heap scan of vacuum when we enhance this feature, so keeping it for
> > >just a parallel vacuum is not completely insane.
> > >
> > >I think keeping amusemaintenanceworkmem separate from this variable
> > >seems to me like a better idea as it doesn't describe whether IndexAM
> > >can participate in a parallel vacuum or not.  You can see more
> > >discussion about that variable in the thread [1].
> > >
> >
> > I don't know, but IMHO it's somewhat easier to work with separate flags.
> > Bitmasks make sense when space usage matters a lot, e.g. for on-disk
> > representation, but that doesn't seem to be the case here I think (if it
> > was, we'd probably use bitmasks already).
> >
> > It seems like we're mixing two ways to design the struct unnecessarily,
> > but I'm not going to nag about this any further.
> >
>
> Fair enough.  I see your point and, as mentioned earlier, we
> started with the approach of separate booleans, but later found that
> this is a better way as it was easier to set and check the different
> parallel options for a parallel vacuum.   I think we can go back to
> the individual booleans if we want but I am not sure if that is a
> better approach for this usage.  Sawada-San, others, do you have any
> opinion here?
IMHO, having multiple bools will be confusing compared to what we have
now because these are all related to enabling parallelism for
different phases of the vacuum.  So it makes more sense to keep it as
a single variable with multiple options.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-01 Thread Amit Kapila
On Thu, Jan 2, 2020 at 8:29 AM Masahiko Sawada
 wrote:
>
> On Tue, 31 Dec 2019 at 12:39, Amit Kapila  wrote:
> >
> > On Mon, Dec 30, 2019 at 6:46 PM Tomas Vondra
> >  wrote:
> > >
> > > On Mon, Dec 30, 2019 at 08:25:28AM +0530, Amit Kapila wrote:
> > > >On Mon, Dec 30, 2019 at 2:53 AM Tomas Vondra
> > > > wrote:
> > > >> I think there's another question we need to ask - why do we introduce a
> > > >> bitmask, instead of using regular boolean struct members? Until now, 
> > > >> the
> > > >> IndexAmRoutine struct had simple boolean members describing 
> > > >> capabilities
> > > >> of the AM implementation. Why shouldn't this patch do the same thing,
> > > >> i.e. add one boolean flag for each AM feature?
> > > >>
> > > >
> > > >This structure member describes mostly one property of index which is
> > > >about a parallel vacuum which I am not sure is true for other members.
> > > >Now, we can use separate bool variables for it which we were initially
> > > >using in the patch but that seems to be taking more space in a
> > > >structure without any advantage.  Also, using one variable makes the
> > > >code a bit better because otherwise, in many places we need to check and
> > > >set four variables instead of one.  This is also the reason we used
> > > >parallel in its name (we also use *parallel* for parallel index scan
> > > >related things).  Having said that, we can remove parallel from its
> > > >name if we want to extend/use it for something other than a parallel
> > > >vacuum.  I think we might need to add a flag or two for parallelizing
> > > >heap scan of vacuum when we enhance this feature, so keeping it for
> > > >just a parallel vacuum is not completely insane.
> > > >
> > > >I think keeping amusemaintenanceworkmem separate from this variable
> > > >seems to me like a better idea as it doesn't describe whether IndexAM
> > > >can participate in a parallel vacuum or not.  You can see more
> > > >discussion about that variable in the thread [1].
> > > >
> > >
> > > I don't know, but IMHO it's somewhat easier to work with separate flags.
> > > Bitmasks make sense when space usage matters a lot, e.g. for on-disk
> > > representation, but that doesn't seem to be the case here I think (if it
> > > was, we'd probably use bitmasks already).
> > >
> > > It seems like we're mixing two ways to design the struct unnecessarily,
> > > but I'm not going to nag about this any further.
> > >
> >
> > Fair enough.  I see your point and, as mentioned earlier, we
> > started with the approach of separate booleans, but later found that
> > this is a better way as it was easier to set and check the different
> > parallel options for a parallel vacuum.   I think we can go back to
> > the individual booleans if we want but I am not sure if that is a
> > better approach for this usage.  Sawada-San, others, do you have any
> > opinion here?
>
> If we go back to the individual booleans we would end up with having
> three booleans: bulkdelete, cleanup and conditional cleanup. I think
> > making the bulkdelete option a boolean makes sense but having two
> booleans for cleanup and conditional cleanup might be slightly odd
> because these options are exclusive.
>

If we have only three booleans, then we need to check for all three to
conclude that a parallel vacuum is not enabled for any index.
Alternatively, we can have a fourth boolean to indicate that a
parallel vacuum is not enabled.  And in the future, when we allow
supporting multiple workers for an index, we might need another
variable unless we can allow it for all types of indexes.  This was my
point that having multiple variables for the purpose of a parallel
vacuum (for indexes) doesn't sound like a better approach than having
a single uint8 variable.

> If we don't stick to having only
> booleans, then having a ternary value for cleanup might be
> understandable, but I'm not sure it's better to have it only for the
> vacuum purpose.
>

If we want to keep the possibility of extending it for other purposes,
then we can probably rename it to amoptions or something like that.
What do you think?


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2020-01-01 Thread Masahiko Sawada
On Tue, 31 Dec 2019 at 12:39, Amit Kapila  wrote:
>
> On Mon, Dec 30, 2019 at 6:46 PM Tomas Vondra
>  wrote:
> >
> > On Mon, Dec 30, 2019 at 08:25:28AM +0530, Amit Kapila wrote:
> > >On Mon, Dec 30, 2019 at 2:53 AM Tomas Vondra
> > > wrote:
> > >> I think there's another question we need to ask - why do we introduce a
> > >> bitmask, instead of using regular boolean struct members? Until now, the
> > >> IndexAmRoutine struct had simple boolean members describing capabilities
> > >> of the AM implementation. Why shouldn't this patch do the same thing,
> > >> i.e. add one boolean flag for each AM feature?
> > >>
> > >
> > >This structure member describes mostly one property of index which is
> > >about a parallel vacuum which I am not sure is true for other members.
> > >Now, we can use separate bool variables for it which we were initially
> > >using in the patch but that seems to be taking more space in a
> > >structure without any advantage.  Also, using one variable makes the
> > >code a bit better because otherwise, in many places we need to check and
> > >set four variables instead of one.  This is also the reason we used
> > >parallel in its name (we also use *parallel* for parallel index scan
> > >related things).  Having said that, we can remove parallel from its
> > >name if we want to extend/use it for something other than a parallel
> > >vacuum.  I think we might need to add a flag or two for parallelizing
> > >heap scan of vacuum when we enhance this feature, so keeping it for
> > >just a parallel vacuum is not completely insane.
> > >
> > >I think keeping amusemaintenanceworkmem separate from this variable
> > >seems to me like a better idea as it doesn't describe whether IndexAM
> > >can participate in a parallel vacuum or not.  You can see more
> > >discussion about that variable in the thread [1].
> > >
> >
> > I don't know, but IMHO it's somewhat easier to work with separate flags.
> > Bitmasks make sense when space usage matters a lot, e.g. for on-disk
> > representation, but that doesn't seem to be the case here I think (if it
> > was, we'd probably use bitmasks already).
> >
> > It seems like we're mixing two ways to design the struct unnecessarily,
> > but I'm not going to nag about this any further.
> >
>
> Fair enough.  I see your point and, as mentioned earlier, we
> started with the approach of separate booleans, but later found that
> this is a better way as it was easier to set and check the different
> parallel options for a parallel vacuum.   I think we can go back to
> the individual booleans if we want but I am not sure if that is a
> better approach for this usage.  Sawada-San, others, do you have any
> opinion here?

If we go back to the individual booleans we would end up with having
three booleans: bulkdelete, cleanup and conditional cleanup. I think
making the bulkdelete option a boolean makes sense but having two
booleans for cleanup and conditional cleanup might be slightly odd
because these options are exclusive. If we don't stick to having only
booleans, then having a ternary value for cleanup might be
understandable, but I'm not sure it's better to have it only for the
vacuum purpose.

Regards,

-- 
Masahiko Sawada  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Block level parallel vacuum

2019-12-30 Thread Amit Kapila
On Mon, Dec 30, 2019 at 6:37 PM Tomas Vondra
 wrote:
>
> On Mon, Dec 30, 2019 at 10:40:39AM +0530, Amit Kapila wrote:
> >On Mon, Dec 30, 2019 at 2:53 AM Tomas Vondra
> > wrote:
> >>
> >
> >+1.  It is already a separate patch and I think we can even discuss
> >more on it in a new thread once the main patch is committed or do you
> >think we should have a conclusion about it now itself?  To me, this
> >option appears to be an extension to the main feature which can be
> >useful for some users and people might like to have a separate option,
> >so we can discuss it and get broader feedback after the main patch is
> >committed.
> >
>
> I don't think it's an extension of the main feature - it does not depend
> on it, it could be committed before or after the parallel vacuum (with
> some conflicts, but the feature itself is not affected).
>
> My point was that by moving it into a separate thread we're more likely
> to get feedback on it, e.g. from people who don't feel like reviewing
> the parallel vacuum feature and/or feel intimidated by the 100+ messages in
> this thread.
>

I agree with this point.

> >> >>
> >> >> The same thing applies to the PARALLEL flag, added in 0002, BTW. Why do
> >> >> we need a separate VACUUM option, instead of just using the existing
> >> >> max_parallel_maintenance_workers GUC?
> >> >>
> >
> >How will the user specify the parallel degree?  The parallel degree is helpful
> >because in some cases users can decide how many workers should be
> >launched based on size and type of indexes.
> >
>
> By setting max_parallel_maintenance_workers.
>
> >> >> It's good enough for CREATE INDEX
> >> >> so why not here?
> >> >
> >
> >That is a different feature and I think here users can make a better
> >judgment based on the size of indexes.  Moreover, users have an option
> >to control a parallel degree for 'Create Index' via Alter Table
> ><table_name> Set (parallel_workers = <value>) which I am not sure is a good
> >idea for parallel vacuum as the parallelism is more derived from size
> >and type of indexes.  Now, we can think of a similar parameter at the
> >table/index level for parallel vacuum, but I don't see it equally
> >useful in this case.
> >
>
> I'm a bit skeptical about users being able to pick good parallel degree.
> If we (i.e. experienced developers/hackers with quite a bit of
> knowledge) can't come up with a reasonable heuristic, how likely is it
> that a regular user will come up with something better?
>

In this case, it is highly dependent on the number of indexes (as for
each index, we can spawn one worker).   So, it is a bit easier for the
users to specify it.  Now, we can internally also identify the same
and we do that in case the user doesn't specify it, however, that can
easily lead to more resource (CPU, I/O) usage than the user would like
for a particular vacuum.  So, giving an option to the user
sounds quite reasonable to me.  Anyway, in case the user doesn't specify
the parallel_degree, we are going to select one internally.

> Not sure I understand why "parallel_workers" would not be suitable for
> parallel vacuum? I mean, even for CREATE INDEX the size/type of indexes
> certainly matters, no?
>

The difference here is that in parallel vacuum each worker can scan a
separate index whereas parallel_workers is more of an option for
scanning heap in parallel.  So, if the size of the heap is bigger,
then increasing that value helps, whereas here, if there is a larger number
of indexes on the table, increasing the corresponding value for parallel
vacuum can help.
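
To make that concrete, a minimal sketch of how the degree could be
derived from the number of indexes (function and variable names here are
illustrative, not the patch's exact code):

#include <stdio.h>

#define Min(a, b) ((a) < (b) ? (a) : (b))

/*
 * Sketch: at most one worker per index that supports parallel vacuum,
 * honoring an explicit user request if given, capped by the GUC.
 */
static int
compute_vacuum_workers_sketch(int nindexes_parallel, int nrequested,
                              int max_parallel_maintenance_workers)
{
    int nworkers;

    if (nindexes_parallel == 0)
        return 0;               /* nothing to parallelize */

    nworkers = (nrequested > 0)
        ? Min(nrequested, nindexes_parallel)
        : nindexes_parallel;

    return Min(nworkers, max_parallel_maintenance_workers);
}

int
main(void)
{
    /* 8 parallel-capable indexes, no explicit request, GUC set to 2 */
    printf("%d\n", compute_vacuum_workers_sketch(8, 0, 2));    /* -> 2 */
    /* the user asks for 4 workers and the GUC allows it */
    printf("%d\n", compute_vacuum_workers_sketch(8, 4, 8));    /* -> 4 */
    return 0;
}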

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2019-12-30 Thread Amit Kapila
On Mon, Dec 30, 2019 at 6:46 PM Tomas Vondra
 wrote:
>
> On Mon, Dec 30, 2019 at 08:25:28AM +0530, Amit Kapila wrote:
> >On Mon, Dec 30, 2019 at 2:53 AM Tomas Vondra
> > wrote:
> >> I think there's another question we need to ask - why do we introduce a
> >> bitmask, instead of using regular boolean struct members? Until now, the
> >> IndexAmRoutine struct had simple boolean members describing capabilities
> >> of the AM implementation. Why shouldn't this patch do the same thing,
> >> i.e. add one boolean flag for each AM feature?
> >>
> >
> >This structure member describes mostly one property of index which is
> >about a parallel vacuum which I am not sure is true for other members.
> >Now, we can use separate bool variables for it which we were initially
> >using in the patch but that seems to be taking more space in a
> >structure without any advantage.  Also, using one variable makes the
> >code a bit better because otherwise, in many places we need to check and
> >set four variables instead of one.  This is also the reason we used
> >parallel in its name (we also use *parallel* for parallel index scan
> >related things).  Having said that, we can remove parallel from its
> >name if we want to extend/use it for something other than a parallel
> >vacuum.  I think we might need to add a flag or two for parallelizing
> >heap scan of vacuum when we enhance this feature, so keeping it for
> >just a parallel vacuum is not completely insane.
> >
> >I think keeping amusemaintenanceworkmem separate from this variable
> >seems to me like a better idea as it doesn't describe whether IndexAM
> >can participate in a parallel vacuum or not.  You can see more
> >discussion about that variable in the thread [1].
> >
>
> I don't know, but IMHO it's somewhat easier to work with separate flags.
> Bitmasks make sense when space usage matters a lot, e.g. for on-disk
> representation, but that doesn't seem to be the case here I think (if it
> was, we'd probably use bitmasks already).
>
> It seems like we're mixing two ways to design the struct unnecessarily,
> but I'm not going to nag about this any further.
>

Fair enough.  I see your point and, as mentioned earlier, we
started with the approach of separate booleans, but later found that
this is a better way as it was easier to set and check the different
parallel options for a parallel vacuum.   I think we can go back to
the individual booleans if we want but I am not sure if that is a
better approach for this usage.  Sawada-San, others, do you have any
opinion here?

> >> >>
> >> >>
> >> >> v40-0004-Add-ability-to-disable-leader-participation-in-p.patch
> >> >> ---
> >> >>
> >> >> IMHO this should be simply merged into 0002.
> >> >
> >> >We discussed it's still unclear whether we really want to commit this
> >> >code and therefore it's separated from the main part. Please see more
> >> >details here[2].
> >> >
> >>
> >> IMO there's not much reason for the leader not to participate.
> >>
> >
> >The only reason for this is just a debugging/testing aid because
> >during the development of other parallel features we required such a
> >knob.  The other way could be to have something similar to
> >force_parallel_mode and there is some discussion about that as well on
> >this thread but we haven't concluded which is better.  So, we decided
> >to keep it as a separate patch which we can use to test this feature
> >during development and decide later whether we really need to commit
> >it.  BTW, we have found a few bugs by using this knob in the patch.
> >
>
> OK, understood. Then why not just use force_parallel_mode?
>

Because we are not sure what its behavior should be under different
modes, especially what we should do when the user sets force_parallel_mode =
on.  We can even consider introducing a new GUC specific to this, but
as of now, I am not convinced that is required.  See some more
discussion around this parameter in emails [1][2].  I think we can
decide on this later (probably once the main patch is committed) as we
already have one way to test the patch.

[1] - 
https://www.postgresql.org/message-id/CAFiTN-sUuLASVXm2qOjufVH3tBZHPLdujMJ0RHr47Tnctjk9YA%40mail.gmail.com
[2] - 
https://www.postgresql.org/message-id/CA%2Bfd4k6VgA_DG%3D8%3Dui7UvHhqx9VbQ-%2B72X%3D_GdTzh%3DJ_xN%2BVEg%40mail.gmail.com

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2019-12-30 Thread Tomas Vondra

On Mon, Dec 30, 2019 at 08:25:28AM +0530, Amit Kapila wrote:

On Mon, Dec 30, 2019 at 2:53 AM Tomas Vondra
 wrote:


On Sun, Dec 29, 2019 at 10:06:23PM +0900, Masahiko Sawada wrote:
>> v40-0001-Introduce-IndexAM-fields-for-parallel-vacuum.patch
>> ---
>>
>> I wonder if 'amparallelvacuumoptions' is unnecessarily specific. Maybe
>> it should be called just 'amvacuumoptions' or something like that? The
>> 'parallel' part is actually encoded in names of the options.
>>
>
>amvacuumoptions seems good to me.
>
>> Also, why do we need a separate amusemaintenanceworkmem option? Why
>> don't we simply track it using a separate flag in 'amvacuumoptions'
>> (or whatever we end up calling it)?
>>
>
>It also seems like a good idea.
>

I think there's another question we need to ask - why do we introduce a
bitmask, instead of using regular boolean struct members? Until now, the
IndexAmRoutine struct had simple boolean members describing capabilities
of the AM implementation. Why shouldn't this patch do the same thing,
i.e. add one boolean flag for each AM feature?



This structure member describes mostly one property of index which is
about a parallel vacuum which I am not sure is true for other members.
Now, we can use separate bool variables for it which we were initially
using in the patch but that seems to be taking more space in a
structure without any advantage.  Also, using one variable makes the
code a bit better because otherwise, in many places we need to check and
set four variables instead of one.  This is also the reason we used
parallel in its name (we also use *parallel* for parallel index scan
related things).  Having said that, we can remove parallel from its
name if we want to extend/use it for something other than a parallel
vacuum.  I think we might need to add a flag or two for parallelizing
heap scan of vacuum when we enhance this feature, so keeping it for
just a parallel vacuum is not completely insane.

I think keeping amusemaintenanceworkmem separate from this variable
seems to me like a better idea as it doesn't describe whether IndexAM
can participate in a parallel vacuum or not.  You can see more
discussion about that variable in the thread [1].



I don't know, but IMHO it's somewhat easier to work with separate flags.
Bitmasks make sense when space usage matters a lot, e.g. for on-disk
representation, but that doesn't seem to be the case here I think (if it
was, we'd probably use bitmasks already).

It seems like we're mixing two ways to design the struct unnecessarily,
but I'm not going to nag about this any further.


>>
>>
>> v40-0004-Add-ability-to-disable-leader-participation-in-p.patch
>> ---
>>
>> IMHO this should be simply merged into 0002.
>
>We discussed it's still unclear whether we really want to commit this
>code and therefore it's separated from the main part. Please see more
>details here[2].
>

IMO there's not much reason for the leader not to participate.



The only reason for this is just a debugging/testing aid because
during the development of other parallel features we required such a
knob.  The other way could be to have something similar to
force_parallel_mode and there is some discussion about that as well on
this thread but we haven't concluded which is better.  So, we decided
to keep it as a separate patch which we can use to test this feature
during development and decide later whether we really need to commit
it.  BTW, we have found a few bugs by using this knob in the patch.



OK, understood. Then why not just use force_parallel_mode?


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Block level parallel vacuum

2019-12-30 Thread Tomas Vondra

On Mon, Dec 30, 2019 at 10:40:39AM +0530, Amit Kapila wrote:

On Mon, Dec 30, 2019 at 2:53 AM Tomas Vondra
 wrote:


On Sun, Dec 29, 2019 at 10:06:23PM +0900, Masahiko Sawada wrote:
>>
>> v40-0003-Add-FAST-option-to-vacuum-command.patch
>> 
>>
>> I do have a bit of an issue with this part - I'm not quite convinced we
>> actually need a FAST option, and I actually suspect we'll come to regret
>> it sooner than later. AFAIK it pretty much does exactly the same thing
>> as setting vacuum_cost_delay to 0, and IMO it's confusing to provide
>> multiple ways to do the same thing - I do expect reports from confused
>> users on pgsql-bugs etc. Why is setting vacuum_cost_delay directly not a
>> sufficient solution?
>
>I think the motivation of this option is similar to FREEZE. I think
>it's sometimes a good idea to have a shortcut of popular usage and
>make it have a name corresponding to its job. From that perspective I
>think having a FAST option would make sense but maybe we need more
>discussion on the combination of parallel vacuum and vacuum delay.
>

OK. I think it's a mostly independent piece, so maybe we should move it to
a separate patch. It's more likely to get attention/feedback when not
buried in this thread.



+1.  It is already a separate patch and I think we can even discuss
more on it in a new thread once the main patch is committed or do you
think we should have a conclusion about it now itself?  To me, this
option appears to be an extension to the main feature which can be
useful for some users and people might like to have a separate option,
so we can discuss it and get broader feedback after the main patch is
committed.



I don't think it's an extension of the main feature - it does not depend
on it, it could be committed before or after the parallel vacuum (with
some conflicts, but the feature itself is not affected).

My point was that by moving it into a separate thread we're more likely
to get feedback on it, e.g. from people who don't feel like reviewing
the parallel vacuum feature and/or feel intimidated by the 100+ messages in
this thread.


>>
>> The same thing applies to the PARALLEL flag, added in 0002, BTW. Why do
>> we need a separate VACUUM option, instead of just using the existing
>> max_parallel_maintenance_workers GUC?
>>


How will the user specify the parallel degree?  The parallel degree is helpful
because in some cases users can decide how many workers should be
launched based on size and type of indexes.



By setting max_parallel_maintenance_workers.


>> It's good enough for CREATE INDEX
>> so why not here?
>


That is a different feature and I think here users can make a better
judgment based on the size of indexes.  Moreover, users have an option
to control a parallel degree for 'Create Index' via Alter Table
<table_name> Set (parallel_workers = <value>) which I am not sure is a good
idea for parallel vacuum as the parallelism is more derived from size
and type of indexes.  Now, we can think of a similar parameter at the
table/index level for parallel vacuum, but I don't see it equally
useful in this case.



I'm a bit skeptical about users being able to pick good parallel degree.
If we (i.e. experienced developers/hackers with quite a bit of
knowledge) can't come up with a reasonable heuristic, how likely is it
that a regular user will come up with something better?

Not sure I understand why "parallel_workers" would not be suitable for
parallel vacuum? I mean, even for CREATE INDEX the size/type of indexes
certainly matters, no?

I may be wrong in both cases, of course.


>AFAIR there was no such discussion so far, but I think one reason could
>be that parallel vacuum should be disabled by default. If the parallel
>vacuum uses max_parallel_maintenance_workers (2 by default) rather
>than having the option the parallel vacuum would work with default
>setting but I think that it could have a big impact on users because
>the disk access could become random reads and writes when some indexes
>are on the same tablespace.
>

I'm not quite convinced VACUUM should have parallelism disabled by
default. I know some people argued we should do that because making
vacuum faster may put pressure on other parts of the system. Which is
true, but I don't think the solution is to make vacuum slower by
default. IMHO we should do the opposite - have it parallel by default
(as driven by max_parallel_maintenance_workers), and have an option
to disable parallelism.



I think driving parallelism for vacuum by
max_parallel_maintenance_workers might not be sufficient.  We need to
give finer control as it depends a lot on the size of indexes. Also,
unlike Create Index, Vacuum can be performed on an entire database and
it is quite possible that some tables/indexes are relatively smaller
and forcing parallelism on them by default might slow down the
operation.



Why wouldn't it be sufficient? Why couldn't this use similar logic to
what we have in plan_create_index_workers for 

Re: [HACKERS] Block level parallel vacuum

2019-12-29 Thread Amit Kapila
On Mon, Dec 30, 2019 at 2:53 AM Tomas Vondra
 wrote:
>
> On Sun, Dec 29, 2019 at 10:06:23PM +0900, Masahiko Sawada wrote:
> >>
> >> v40-0003-Add-FAST-option-to-vacuum-command.patch
> >> 
> >>
> >> I do have a bit of an issue with this part - I'm not quite convinced we
> >> actually need a FAST option, and I actually suspect we'll come to regret
> >> it sooner than later. AFAIK it pretty much does exactly the same thing
> >> as setting vacuum_cost_delay to 0, and IMO it's confusing to provide
> >> multiple ways to do the same thing - I do expect reports from confused
> >> users on pgsql-bugs etc. Why is setting vacuum_cost_delay directly not a
> >> sufficient solution?
> >
> >I think the motivation of this option is similar to FREEZE. I think
> >it's sometimes a good idea to have a shortcut of popular usage and
> >make it have a name corresponding to its job. From that perspective I
> >think having a FAST option would make sense but maybe we need more
> >discussion on the combination of parallel vacuum and vacuum delay.
> >
>
> OK. I think it's a mostly independent piece, so maybe we should move it to
> a separate patch. It's more likely to get attention/feedback when not
> buried in this thread.
>

+1.  It is already a separate patch and I think we can even discuss
more on it in a new thread once the main patch is committed or do you
think we should have a conclusion about it now itself?  To me, this
option appears to be an extension to the main feature which can be
useful for some users and people might like to have a separate option,
so we can discuss it and get broader feedback after the main patch is
committed.

> >>
> >> The same thing applies to the PARALLEL flag, added in 0002, BTW. Why do
> >> we need a separate VACUUM option, instead of just using the existing
> >> max_parallel_maintenance_workers GUC?
> >>

How will the user specify the parallel degree?  The parallel degree is helpful
because in some cases users can decide how many workers should be
launched based on size and type of indexes.

> >> It's good enough for CREATE INDEX
> >> so why not here?
> >

That is a different feature and I think here users can make a better
judgment based on the size of indexes.  Moreover, users have an option
to control a parallel degree for 'Create Index' via Alter Table
<table_name> Set (parallel_workers = <value>) which I am not sure is a good
idea for parallel vacuum as the parallelism is more derived from size
and type of indexes.  Now, we can think of a similar parameter at the
table/index level for parallel vacuum, but I don't see it equally
useful in this case.

> >AFAIR there was no such discussion so far, but I think one reason could
> >be that parallel vacuum should be disabled by default. If the parallel
> >vacuum uses max_parallel_maintenance_workers (2 by default) rather
> >than having the option the parallel vacuum would work with default
> >setting but I think that it could have a big impact on users because
> >the disk access could become random reads and writes when some indexes
> >are on the same tablespace.
> >
>
> I'm not quite convinced VACUUM should have parallelism disabled by
> default. I know some people argued we should do that because making
> vacuum faster may put pressure on other parts of the system. Which is
> true, but I don't think the solution is to make vacuum slower by
> default. IMHO we should do the opposite - have it parallel by default
> (as driven by max_parallel_maintenance_workers), and have an option
> to disable parallelism.
>

I think driving parallelism for vacuum by
max_parallel_maintenance_workers might not be sufficient.  We need to
give finer control as it depends a lot on the size of indexes. Also,
unlike Create Index, Vacuum can be performed on an entire database and
it is quite possible that some tables/indexes are relatively smaller
and forcing parallelism on them by default might slow down the
operation.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2019-12-29 Thread Amit Kapila
On Mon, Dec 30, 2019 at 2:53 AM Tomas Vondra
 wrote:
>
> On Sun, Dec 29, 2019 at 10:06:23PM +0900, Masahiko Sawada wrote:
> >> v40-0001-Introduce-IndexAM-fields-for-parallel-vacuum.patch
> >> ---
> >>
> >> I wonder if 'amparallelvacuumoptions' is unnecessarily specific. Maybe
> >> it should be called just 'amvacuumoptions' or something like that? The
> >> 'parallel' part is actually encoded in names of the options.
> >>
> >
> >amvacuumoptions seems good to me.
> >
> >> Also, why do we need a separate amusemaintenanceworkmem option? Why
> >> don't we simply track it using a separate flag in 'amvacuumoptions'
> >> (or whatever we end up calling it)?
> >>
> >
> >It also seems like a good idea.
> >
>
> I think there's another question we need to ask - why do we introduce a
> bitmask, instead of using regular boolean struct members? Until now, the
> IndexAmRoutine struct had simple boolean members describing capabilities
> of the AM implementation. Why shouldn't this patch do the same thing,
> i.e. add one boolean flag for each AM feature?
>

This structure member describes mostly one property of index which is
about a parallel vacuum which I am not sure is true for other members.
Now, we can use separate bool variables for it which we were initially
using in the patch but that seems to be taking more space in a
structure without any advantage.  Also, using one variable makes the
code a bit better because otherwise, in many places we need to check and
set four variables instead of one.  This is also the reason we used
parallel in its name (we also use *parallel* for parallel index scan
related things).  Having said that, we can remove parallel from its
name if we want to extend/use it for something other than a parallel
vacuum.  I think we might need to add a flag or two for parallelizing
heap scan of vacuum when we enhance this feature, so keeping it for
just a parallel vacuum is not completely insane.

I think keeping amusemaintenanceworkmem separate from this variable
seems to me like a better idea as it doesn't describe whether IndexAM
can participate in a parallel vacuum or not.  You can see more
discussion about that variable in the thread [1].
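
For illustration, a rough sketch of the single-variable approach
described above (the flag names and the struct shape are assumptions
modeled on the patch, not its exact definitions):

#include <stdbool.h>
#include <stdint.h>

/* Assumed flag names/values, modeled on the patch. */
#define VACUUM_OPTION_NO_PARALLEL           0
#define VACUUM_OPTION_PARALLEL_BULKDEL      (1 << 0)
#define VACUUM_OPTION_PARALLEL_COND_CLEANUP (1 << 1)
#define VACUUM_OPTION_PARALLEL_CLEANUP      (1 << 2)

typedef struct IndexAmRoutineSketch
{
    uint8_t amparallelvacuumoptions;    /* one variable, several options */
    bool    amusemaintenanceworkmem;    /* kept separate, as argued above */
} IndexAmRoutineSketch;

/* "Does this AM participate in a parallel vacuum at all?" is one test. */
static bool
am_supports_parallel_vacuum(const IndexAmRoutineSketch *am)
{
    return am->amparallelvacuumoptions != VACUUM_OPTION_NO_PARALLEL;
}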

> >>
> >>
> >> v40-0004-Add-ability-to-disable-leader-participation-in-p.patch
> >> ---
> >>
> >> IMHO this should be simply merged into 0002.
> >
> >We discussed it's still unclear whether we really want to commit this
> >code and therefore it's separated from the main part. Please see more
> >details here[2].
> >
>
> IMO there's not much reason for the leader not to participate.
>

The only reason for this is just a debugging/testing aid because
during the development of other parallel features we required such a
knob.  The other way could be to have something similar to
force_parallel_mode and there is some discussion about that as well on
this thread but we haven't concluded which is better.  So, we decided
to keep it as a separate patch which we can use to test this feature
during development and decide later whether we really need to commit
it.  BTW, we have found a few bugs by using this knob in the patch.

[1] - 
https://www.postgresql.org/message-id/caa4ek1lmcd5apogzwim5nn58ki+74a6edghx4wd8haskvha...@mail.gmail.com

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2019-12-29 Thread Tomas Vondra

On Sun, Dec 29, 2019 at 10:06:23PM +0900, Masahiko Sawada wrote:

On Fri, 27 Dec 2019 at 11:24, Tomas Vondra  wrote:


Hi,

On Wed, Dec 25, 2019 at 09:17:16PM +0900, Masahiko Sawada wrote:
>On Tue, 24 Dec 2019 at 15:46, Masahiko Sawada
> wrote:
>>
>> On Tue, 24 Dec 2019 at 15:44, Amit Kapila  wrote:
>> >
>> > On Tue, Dec 24, 2019 at 12:08 PM Masahiko Sawada
>> >  wrote:
>> > >
>> > >
>> > > The first patches look good to me. I'm reviewing other patches and
>> > > will post comments if there are any.
>> > >
>>
>> Oops I meant first "two" patches look good to me.
>>
>> >
>> > Okay, feel free to address the few comments raised by Mahendra along with
>> > whatever you find.
>>
>> Thanks!
>>
>
>I've attached an updated patch set as the previous version patch set
>conflicts with the current HEAD. This patch set incorporates the review
>comments, a few fixes, and the patch for
>PARALLEL_VACUUM_DISABLE_LEADER_PARTICIPATION. The 0001 patch is the same
>as the previous version.
>

I've been reviewing the updated patches over the past couple of days, so
let me share some initial review comments. I initially started to read
the thread, but then I realized it's futile - the thread is massive, and
the patch changed so much that re-reading the whole thread is a waste of time.


Thank you for reviewing this patch!



It might be useful to write a summary of the current design, but AFAICS the
original plan to parallelize the heap scan is abandoned and we now do
just the steps that vacuum indexes in parallel. Which is fine, but it
means the subject "block level parallel vacuum" is a bit misleading.



Yeah I should have renamed it. I'll summarize the current design.



OK


Anyway, most of the logic is implemented in part 0002, which actually
does all the parallel worker stuff. The remaining parts 0001, 0003 and
0004 are either preparing infrastructure or not directly related to the
primary feature.


v40-0001-Introduce-IndexAM-fields-for-parallel-vacuum.patch
---

I wonder if 'amparallelvacuumoptions' is unnecessarily specific. Maybe
it should be called just 'amvacuumoptions' or something like that? The
'parallel' part is actually encoded in names of the options.



amvacuumoptions seems good to me.


Also, why do we need a separate amusemaintenanceworkmem option? Why
don't we simply track it using a separate flag in 'amvacuumoptions'
(or whatever we end up calling it)?



It also seems like a good idea.



I think there's another question we need to ask - why do we introduce a
bitmask, instead of using regular boolean struct members? Until now, the
IndexAmRoutine struct had simple boolean members describing capabilities
of the AM implementation. Why shouldn't this patch do the same thing,
i.e. add one boolean flag for each AM feature?


Would it make sense to track m_w_m usage separately for the two index
cleanup phases? Or is that unnecessary / pointless?


We could do that but currently the only index AM that uses this option
is gin. And gin indexes could use maintenance_work_mem both during
bulkdelete and cleanup. So it might be unnecessary at least as of now.



OK




v40-0002-Add-a-parallel-option-to-the-VACUUM-command.patch
--

I haven't found any issues yet, but I've only started with the code
review. I'll continue with the review. It seems in a fairly good shape
though, I think, I only have two minor comments at the moment:

- The SizeOfLVDeadTuples macro seems rather weird. It does include space
   for one ItemPointerData, but we really need an array of them. But then
   all the places where the macro is used explicitly add space for the
   pointers, so the sizeof(ItemPointerData) seems unnecessary. So it
   should be either

#define SizeOfLVDeadTuples (offsetof(LVDeadTuples, itemptrs))

   or

#define SizeOfLVDeadTuples(cnt) \
   (offsetof(LVDeadTuples, itemptrs) + (cnt) * sizeof(ItemPointerData))

   in which case the callers can be simplified.


Fixed it to the former.



Hmmm, I'd actually suggest using the latter variant, because it allows
simplifying the callers. Just translating it to offsetof() is not saving
much code, I think.
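
A sketch of that simplification, with stand-in type definitions (only
the macro shape matters here; the real definitions come from the patch):

#include <stddef.h>

/* Stand-ins for the real PostgreSQL definitions. */
typedef struct
{
    unsigned short ip_blkid[2];
    unsigned short ip_posid;
} ItemPointerData;

typedef struct LVDeadTuples
{
    int             max_tuples;
    int             num_tuples;
    ItemPointerData itemptrs[];         /* flexible array member */
} LVDeadTuples;

/* Parameterized variant: the element count is folded into the macro. */
#define SizeOfLVDeadTuples(cnt) \
    (offsetof(LVDeadTuples, itemptrs) + (cnt) * sizeof(ItemPointerData))

/*
 * A caller then shrinks from
 *     size = SizeOfLVDeadTuples + maxtuples * sizeof(ItemPointerData);
 * to
 *     size = SizeOfLVDeadTuples(maxtuples);
 */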



- It's not quite clear to me why we need the new nworkers_to_launch
   field in ParallelContext.


The motivation of nworkers_to_launch is to specify the number of
workers to actually launch when we use the same parallel context
several times while changing the number of workers to launch. Since
an index AM can choose to participate in bulkdelete and/or cleanup,
the number of workers required for each vacuum phase can be
different. I originally changed LaunchParallelWorkers to take the
number of workers to launch so that it launches a different number of
workers for each vacuum phase, but Robert suggested changing the
routine that reinitializes the parallel context[1]. It would be less
confusing and would involve modifying code in a lot fewer places. So with
this patch we specify the number of workers 
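
In other words, the leader-side flow looks roughly like the sketch below
(PostgreSQL backend code, so not standalone; ReinitializeParallelWorkers
is the routine this patch set adds, and the LVParallelState field names
follow the patch):

/* Sketch only: run both index-vacuum phases over one parallel context. */
static void
do_both_phases_sketch(LVParallelState *lps)
{
    for (int phase = 0; phase < 2; phase++) /* 0 = bulkdelete, 1 = cleanup */
    {
        int nworkers = (phase == 0)
            ? lps->nindexes_parallel_bulkdel
            : lps->nindexes_parallel_cleanup;

        if (nworkers <= 0)
            continue;                       /* do this phase serially */

        ReinitializeParallelDSM(lps->pcxt);
        ReinitializeParallelWorkers(lps->pcxt, nworkers);
        LaunchParallelWorkers(lps->pcxt);
        /* ... the leader processes indexes too, then ... */
        WaitForParallelWorkersToFinish(lps->pcxt);
    }
}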

Re: [HACKERS] Block level parallel vacuum

2019-12-26 Thread Tomas Vondra

Hi,

On Wed, Dec 25, 2019 at 09:17:16PM +0900, Masahiko Sawada wrote:

On Tue, 24 Dec 2019 at 15:46, Masahiko Sawada
 wrote:


On Tue, 24 Dec 2019 at 15:44, Amit Kapila  wrote:
>
> On Tue, Dec 24, 2019 at 12:08 PM Masahiko Sawada
>  wrote:
> >
> >
> > The first patches look good to me. I'm reviewing other patches and
> > will post comments if there are any.
> >

Oops I meant first "two" patches look good to me.

>
> Okay, feel free to address the few comments raised by Mahendra along with
> whatever you find.

Thanks!



I've attached an updated patch set as the previous version patch set
conflicts with the current HEAD. This patch set incorporates the review
comments, a few fixes, and the patch for
PARALLEL_VACUUM_DISABLE_LEADER_PARTICIPATION. The 0001 patch is the same
as the previous version.



I've been reviewing the updated patches over the past couple of days, so
let me share some initial review comments. I initially started to read
the thread, but then I realized it's futile - the thread is massive, and
the patch changed so much that re-reading the whole thread is a waste of time.

It might be useful to write a summary of the current design, but AFAICS the
original plan to parallelize the heap scan is abandoned and we now do
just the steps that vacuum indexes in parallel. Which is fine, but it
means the subject "block level parallel vacuum" is a bit misleading.

Anyway, most of the logic is implemented in part 0002, which actually
does all the parallel worker stuff. The remaining parts 0001, 0003 and
0004 are either preparing infrastructure or not directly related to the
primary feature.


v40-0001-Introduce-IndexAM-fields-for-parallel-vacuum.patch
---

I wonder if 'amparallelvacuumoptions' is unnecessarily specific. Maybe
it should be called just 'amvacuumoptions' or something like that? The
'parallel' part is actually encoded in names of the options.

Also, why do we need a separate amusemaintenanceworkmem option? Why
don't we simply track it using a separate flag in 'amvacuumoptions'
(or whatever we end up calling it)?

Would it make sense to track m_w_m usage separately for the two index
cleanup phases? Or is that unnecessary / pointless?


v40-0002-Add-a-parallel-option-to-the-VACUUM-command.patch
--

I haven't found any issues yet, but I've only started with the code
review. I'll continue with the review. It seems in a fairly good shape
though, I think, I only have two minor comments at the moment:

- The SizeOfLVDeadTuples macro seems rather weird. It does include space
  for one ItemPointerData, but we really need an array of them. But then
  all the places where the macro is used explicitly add space for the
  pointers, so the sizeof(ItemPointerData) seems unnecessary. So it
  should be either

#define SizeOfLVDeadTuples (offsetof(LVDeadTuples, itemptrs))

  or 


#define SizeOfLVDeadTuples(cnt) \
  (offsetof(LVDeadTuples, itemptrs) + (cnt) * sizeof(ItemPointerData))

  in which case the callers can be simplified.

- It's not quite clear to me why we need the new nworkers_to_launch
  field in ParallelContext.


v40-0003-Add-FAST-option-to-vacuum-command.patch


I do have a bit of an issue with this part - I'm not quite convinced we
actually need a FAST option, and I actually suspect we'll come to regret
it sooner than later. AFAIK it pretty much does exactly the same thing
as setting vacuum_cost_delay to 0, and IMO it's confusing to provide
multiple ways to do the same thing - I do expect reports from confused
users on pgsql-bugs etc. Why is setting vacuum_cost_delay directly not a
sufficient solution?

The same thing applies to the PARALLEL flag, added in 0002, BTW. Why do
we need a separate VACUUM option, instead of just using the existing
max_parallel_maintenance_workers GUC? It's good enough for CREATE INDEX
so why not here?

Maybe it's explained somewhere deep in the thread, of course ...


v40-0004-Add-ability-to-disable-leader-participation-in-p.patch
---

IMHO this should be simply merged into 0002.


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Block level parallel vacuum

2019-12-26 Thread Mahendra Singh
On Wed, 25 Dec 2019 at 17:47, Masahiko Sawada
 wrote:
>
> On Tue, 24 Dec 2019 at 15:46, Masahiko Sawada
>  wrote:
> >
> > On Tue, 24 Dec 2019 at 15:44, Amit Kapila  wrote:
> > >
> > > On Tue, Dec 24, 2019 at 12:08 PM Masahiko Sawada
> > >  wrote:
> > > >
> > > >
> > > > The first patches look good to me. I'm reviewing other patches and
> > > > will post comments if there are any.
> > > >
> >
> > Oops I meant first "two" patches look good to me.
> >
> > >
> > > Okay, feel free to address the few comments raised by Mahendra along with
> > > whatever you find.
> >
> > Thanks!
> >
>
> I've attached an updated patch set as the previous version patch set
> conflicts with the current HEAD. This patch set incorporates the review
> comments, a few fixes, and the patch for
> PARALLEL_VACUUM_DISABLE_LEADER_PARTICIPATION. The 0001 patch is the same
> as the previous version.

I verified all my review comments in the v40 patch set. All are fixed.

v40-0002-Add-a-parallel-option-to-the-VACUUM-command.patch doesn't
apply on HEAD. Please send a rebased patch.

Thanks and Regards
Mahendra Singh Thalor
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2019-12-25 Thread Masahiko Sawada
On Tue, 24 Dec 2019 at 15:46, Masahiko Sawada
 wrote:
>
> On Tue, 24 Dec 2019 at 15:44, Amit Kapila  wrote:
> >
> > On Tue, Dec 24, 2019 at 12:08 PM Masahiko Sawada
> >  wrote:
> > >
> > >
> > > The first patches look good to me. I'm reviewing other patches and
> > > will post comments if there are any.
> > >
>
> Oops I meant first "two" patches look good to me.
>
> >
> > Okay, feel free to address the few comments raised by Mahendra along with
> > whatever you find.
>
> Thanks!
>

I've attached an updated patch set as the previous version patch set
conflicts with the current HEAD. This patch set incorporates the review
comments, a few fixes, and the patch for
PARALLEL_VACUUM_DISABLE_LEADER_PARTICIPATION. The 0001 patch is the same
as the previous version.

Regards,

-- 
Masahiko Sawada  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


v40-0004-Add-ability-to-disable-leader-participation-in-p.patch
Description: Binary data


v40-0001-Introduce-IndexAM-fields-for-parallel-vacuum.patch
Description: Binary data


v40-0003-Add-FAST-option-to-vacuum-command.patch
Description: Binary data


v40-0002-Add-a-parallel-option-to-the-VACUUM-command.patch
Description: Binary data


Re: [HACKERS] Block level parallel vacuum

2019-12-23 Thread Masahiko Sawada
On Tue, 24 Dec 2019 at 15:44, Amit Kapila  wrote:
>
> On Tue, Dec 24, 2019 at 12:08 PM Masahiko Sawada
>  wrote:
> >
> >
> > The first patches look good to me. I'm reviewing other patches and
> > will post comments if there are any.
> >

Oops I meant first "two" patches look good to me.

>
> Okay, feel free to address the few comments raised by Mahendra along with
> whatever you find.

Thanks!

Regards,

-- 
Masahiko Sawada  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Block level parallel vacuum

2019-12-23 Thread Amit Kapila
On Tue, Dec 24, 2019 at 12:08 PM Masahiko Sawada
 wrote:
>
>
> The first patches look good to me. I'm reviewing other patches and
> will post comments if there are any.
>

Okay, feel free to address the few comments raised by Mahendra along with
whatever you find.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2019-12-23 Thread Masahiko Sawada
On Mon, 23 Dec 2019 at 19:41, Amit Kapila  wrote:
>
> On Fri, Dec 20, 2019 at 12:13 PM Masahiko Sawada
>  wrote:
> >
> > I've attached the updated version patch that incorporated all the
> > review comments I got so far.
> >
>
> I have further edited the first two patches posted by you.  The
> changes include (a) changed tests to reset the guc, (b) removing some
> stuff which is not required in this version, (c) moving some variables
> around to make them in better order, (d) changed comments and few
> other cosmetic things and (e) commit messages for first two patches.
>
> I think the first two patches attached in this email are in good shape
> and we can commit those unless you or someone has more comments on
> them; the main parallel vacuum patch can still be improved by some
> more test/polish/review.  I am planning to push the first two patches
> next week after another pass.  The first two patches are explained in
> brief as below:
>
> 1. v4-0001-Delete-empty-pages-in-each-pass-during-GIST-VACUUM:  It
> allows us to delete empty pages in each pass during GIST VACUUM.
> Earlier, we use to postpone deleting empty pages till the second stage
> of vacuum to amortize the cost of scanning internal pages.  However,
> that can sometimes (say vacuum is canceled or errored between first
> and second stage) delay the pages to be recycled.  Another thing is
> that to facilitate deleting empty pages in the second stage, we need
> to share the information of internal and empty pages between different
> stages of vacuum.  It will be quite tricky to share this information
> via DSM which is required for the main parallel vacuum patch.  Also,
> it will bring the logic to reclaim deleted pages closer to nbtree
> where we delete empty pages in each pass.  Overall, the advantages of
> deleting empty pages in each pass outweigh the advantages of
> postponing the same.  This patch is discussed in detail in a separate
> thread [1].
>
> 2. v39-0001-Introduce-IndexAM-fields-for-parallel-vacuum.patch:
> Introduce new fields amusemaintenanceworkmem and
> amparallelvacuumoptions in IndexAmRoutine for parallel vacuum.  The
> amusemaintenanceworkmem tells whether a particular IndexAM uses
> maintenance_work_mem or not.  This will help in controlling the memory
> used by individual workers as otherwise, each worker can consume
> memory equal to maintenance_work_mem.  This has been discussed in
> detail in a separate thread as well [2]. The amparallelvacuumoptions
> tell whether a particular IndexAM participates in a parallel vacuum
> and if so in which phase (bulkdelete, vacuumcleanup) of vacuum.
>
>

Thank you for updating the patches!

The first patches look good to me. I'm reviewing other patches and
will post comments if there are any.

Regards,

-- 
Masahiko Sawada  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Block level parallel vacuum

2019-12-23 Thread Amit Kapila
On Mon, Dec 23, 2019 at 11:02 PM Mahendra Singh  wrote:
>
> 5. I am not sure that I am right, but I can see that we are not consistent
> in how we end single-line comments.
>
> I think that if a single-line comment starts with an upper-case letter, we
> should not put a period (dot) at the end of the comment, but if it starts
> with a lower-case letter, we should put a period (dot) at the end.
>
> a)
> + /* parallel vacuum must be active */
> I think we should end the above comment with a dot, or we should make the
> "p" of "parallel" an upper-case letter.
>
> b)
> + /* At least count itself */
> I think the above is correct.
>

I have checked a few files in this context and I don't see any
consistency, so I would suggest keeping things consistent with the
nearby code.  Do you have any reason for the above conclusion?


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2019-12-23 Thread Mahendra Singh
On Mon, 23 Dec 2019 at 16:11, Amit Kapila 
wrote:
>
> On Fri, Dec 20, 2019 at 12:13 PM Masahiko Sawada
>  wrote:
> >
> > I've attached the updated version patch that incorporates all the
> > review comments I got so far.
> >
>
> I have further edited the first two patches posted by you.  The
> changes include (a) changed tests to reset the guc, (b) removing some
> stuff which is not required in this version, (c) moving some variables
> around to make them in better order, (d) changed comments and few
> other cosmetic things and (e) commit messages for first two patches.
>
> I think the first two patches attached in this email are in good shape
> and we can commit those unless you or someone has more comments on
> them, the main parallel vacuum patch can still be improved by some
> more test/polish/review.  I am planning to push the first two patches
> next week after another pass.  The first two patches are explained in
> brief as below:
>
> 1. v4-0001-Delete-empty-pages-in-each-pass-during-GIST-VACUUM:  It
> allows us to delete empty pages in each pass during GIST VACUUM.
> Earlier, we used to postpone deleting empty pages till the second stage
> of vacuum to amortize the cost of scanning internal pages.  However,
> that can sometimes (say vacuum is canceled or errored between first
> and second stage) delay the pages to be recycled.  Another thing is
> that to facilitate deleting empty pages in the second stage, we need
> to share the information of internal and empty pages between different
> stages of vacuum.  It will be quite tricky to share this information
> via DSM which is required for the main parallel vacuum patch.  Also,
> it will bring the logic to reclaim deleted pages closer to nbtree
> where we delete empty pages in each pass.  Overall, the advantages of
> deleting empty pages in each pass outweigh the advantages of
> postponing the same.  This patch is discussed in detail in a separate
> thread [1].
>
> 2. v39-0001-Introduce-IndexAM-fields-for-parallel-vacuum.patch:
> Introduce new fields amusemaintenanceworkmem and
> amparallelvacuumoptions in IndexAmRoutine for parallel vacuum.  The
> amusemaintenanceworkmem tells whether a particular IndexAM uses
> maintenance_work_mem or not.  This will help in controlling the memory
> used by individual workers as otherwise, each worker can consume
> memory equal to maintenance_work_mem.  This has been discussed in
> detail in a separate thread as well [2]. The amparallelvacuumoptions
> tell whether a particular IndexAM participates in a parallel vacuum
> and if so in which phase (bulkdelete, vacuumcleanup) of vacuum.
>
>
> [1] -
https://www.postgresql.org/message-id/CAA4eK1LGr%2BMN0xHZpJ2dfS8QNQ1a_aROKowZB%2BMPNep8FVtwAA%40mail.gmail.com
> [2] -
https://www.postgresql.org/message-id/caa4ek1lmcd5apogzwim5nn58ki+74a6edghx4wd8haskvha...@mail.gmail.com
>

Hi,
I reviewed the v39 patch set. Below are some minor review comments:

1.
+ * memory equal to maitenance_work_mem, the new maitenance_work_mem for

maitenance_work_mem should be replaced by maintenance_work_mem.

2.
+ * The number of workers can vary between and bulkdelete and cleanup

I think the above sentence is grammatically incorrect; the first "and" is
extra.

3.
+ /*
+ * Open table.  The lock mode is the same as the leader process.  It's
+ * okay because The lockmode does not conflict among the parallel workers.
+ */

I think, "lock mode" and "lockmode", both should be same.(means extra space
should be removed from "lock mode"). In "The", "T" should be small case
letter.

4.
+ /* We don't support parallel vacuum for autovacuum for now */

I think the above sentence should be like "As of now, we don't support
parallel vacuum for autovacuum".

5. I am not sure that I am right, but I can see that we are not consistent
in how we end single-line comments.

I think that if a single-line comment starts with an upper-case letter, we
should not put a period (dot) at the end of the comment, but if it starts
with a lower-case letter, we should put a period (dot) at the end.

a)
+ /* parallel vacuum must be active */
I think we should end the above comment with a dot, or we should make the
"p" of "parallel" an upper-case letter.

b)
+ /* At least count itself */
I think the above is correct.

If my understanding is correct, then please let me know so that I can make
these changes on top of the v39 patch set.

6.
+    bool    amusemaintenanceworkmem;

I think we haven't run pgindent.

Thanks and Regards
Mahendra Thalor
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Block level parallel vacuum

2019-12-23 Thread Masahiko Sawada
On Mon, 23 Dec 2019 at 16:24, Mahendra Singh  wrote:
>
> On Fri, 20 Dec 2019 at 17:17, Prabhat Sahu
>  wrote:
> >
> > Hi,
> >
> > While testing this feature with parallel vacuum on "TEMPORARY TABLE", I got 
> > a server crash on PG Head+V36_patch.
> > Changed configuration parameters and Stack trace are as below:
> >
> > autovacuum = on
> > max_worker_processes = 4
> > shared_buffers = 10MB
> > max_parallel_workers = 8
> > max_parallel_maintenance_workers = 8
> > vacuum_cost_limit = 2000
> > vacuum_cost_delay = 10
> > min_parallel_table_scan_size = 8MB
> > min_parallel_index_scan_size = 0
> >
> > -- Stack trace:
> > [centos@parallel-vacuum-testing bin]$ gdb -q -c data/core.1399 postgres
> > Reading symbols from 
> > /home/centos/BLP_Vacuum/postgresql/inst/bin/postgres...done.
> > [New LWP 1399]
> > [Thread debugging using libthread_db enabled]
> > Using host libthread_db library "/lib64/libthread_db.so.1".
> > Core was generated by `postgres: autovacuum worker   postgres   
> >'.
> > Program terminated with signal 6, Aborted.
> > #0  0x7f4517d80337 in raise () from /lib64/libc.so.6
> > Missing separate debuginfos, use: debuginfo-install 
> > glibc-2.17-292.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 
> > krb5-libs-1.15.1-37.el7_7.2.x86_64 libcom_err-1.42.9-16.el7.x86_64 
> > libgcc-4.8.5-39.el7.x86_64 libselinux-2.5-14.1.el7.x86_64 
> > openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64 
> > zlib-1.2.7-18.el7.x86_64
> > (gdb) bt
> > #0  0x7f4517d80337 in raise () from /lib64/libc.so.6
> > #1  0x7f4517d81a28 in abort () from /lib64/libc.so.6
> > #2  0x00a96341 in ExceptionalCondition (conditionName=0xd18efb 
> > "strvalue != NULL", errorType=0xd18eeb "FailedAssertion",
> > fileName=0xd18ee0 "snprintf.c", lineNumber=442) at assert.c:67
> > #3  0x00b02522 in dopr (target=0x7ffdb0e38450, format=0xc5fa95 
> > ".%s\"", args=0x7ffdb0e38538) at snprintf.c:442
> > #4  0x00b01ea6 in pg_vsnprintf (str=0x256df50 "autovacuum: dropping 
> > orphan temp table \"postgres.", '\177' ..., count=1024,
> > fmt=0xc5fa68 "autovacuum: dropping orphan temp table \"%s.%s.%s\"", 
> > args=0x7ffdb0e38538) at snprintf.c:195
> > #5  0x00afbadf in pvsnprintf (buf=0x256df50 "autovacuum: dropping 
> > orphan temp table \"postgres.", '\177' ..., len=1024,
> > fmt=0xc5fa68 "autovacuum: dropping orphan temp table \"%s.%s.%s\"", 
> > args=0x7ffdb0e38538) at psprintf.c:110
> > #6  0x00afd34b in appendStringInfoVA (str=0x7ffdb0e38550, 
> > fmt=0xc5fa68 "autovacuum: dropping orphan temp table \"%s.%s.%s\"", 
> > args=0x7ffdb0e38538)
> > at stringinfo.c:149
> > #7  0x00a970fd in errmsg (fmt=0xc5fa68 "autovacuum: dropping orphan 
> > temp table \"%s.%s.%s\"") at elog.c:832
> > #8  0x008588d2 in do_autovacuum () at autovacuum.c:2249
> > #9  0x00857b29 in AutoVacWorkerMain (argc=0, argv=0x0) at 
> > autovacuum.c:1689
> > #10 0x0085772f in StartAutoVacWorker () at autovacuum.c:1483
> > #11 0x0086e64f in StartAutovacuumWorker () at postmaster.c:5562
> > #12 0x0086e106 in sigusr1_handler (postgres_signal_arg=10) at 
> > postmaster.c:5279
> > #13 <signal handler called>
> > #14 0x7f4517e3f933 in __select_nocancel () from /lib64/libc.so.6
> > #15 0x00869838 in ServerLoop () at postmaster.c:1691
> > #16 0x00869212 in PostmasterMain (argc=3, argv=0x256bd70) at 
> > postmaster.c:1400
> > #17 0x0077855d in main (argc=3, argv=0x256bd70) at main.c:210
> > (gdb)
> >
> > I have tried to reproduce it with all previously executed queries, but
> > now I am not able to reproduce it.
>
> Thanks Prabhat for reporting this issue.
>
> I am able to reproduce this issue at my end. I tested and verified
> that this issue is not related to the parallel vacuum patch. I am able to
> reproduce this issue on HEAD without the parallel vacuum patch (v37).
>
> I will report this issue in a new thread with a reproducible test case.

Thank you so much!

Regards,

-- 
Masahiko Sawada    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Block level parallel vacuum

2019-12-22 Thread Mahendra Singh
On Fri, 20 Dec 2019 at 17:17, Prabhat Sahu
 wrote:
>
> Hi,
>
> While testing this feature with parallel vacuum on "TEMPORARY TABLE", I got a 
> server crash on PG Head+V36_patch.
> Changed configuration parameters and Stack trace are as below:
>
> autovacuum = on
> max_worker_processes = 4
> shared_buffers = 10MB
> max_parallel_workers = 8
> max_parallel_maintenance_workers = 8
> vacuum_cost_limit = 2000
> vacuum_cost_delay = 10
> min_parallel_table_scan_size = 8MB
> min_parallel_index_scan_size = 0
>
> -- Stack trace:
> [centos@parallel-vacuum-testing bin]$ gdb -q -c data/core.1399 postgres
> Reading symbols from 
> /home/centos/BLP_Vacuum/postgresql/inst/bin/postgres...done.
> [New LWP 1399]
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
> Core was generated by `postgres: autovacuum worker   postgres 
>  '.
> Program terminated with signal 6, Aborted.
> #0  0x7f4517d80337 in raise () from /lib64/libc.so.6
> Missing separate debuginfos, use: debuginfo-install glibc-2.17-292.el7.x86_64 
> keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-37.el7_7.2.x86_64 
> libcom_err-1.42.9-16.el7.x86_64 libgcc-4.8.5-39.el7.x86_64 
> libselinux-2.5-14.1.el7.x86_64 openssl-libs-1.0.2k-19.el7.x86_64 
> pcre-8.32-17.el7.x86_64 zlib-1.2.7-18.el7.x86_64
> (gdb) bt
> #0  0x7f4517d80337 in raise () from /lib64/libc.so.6
> #1  0x7f4517d81a28 in abort () from /lib64/libc.so.6
> #2  0x00a96341 in ExceptionalCondition (conditionName=0xd18efb 
> "strvalue != NULL", errorType=0xd18eeb "FailedAssertion",
> fileName=0xd18ee0 "snprintf.c", lineNumber=442) at assert.c:67
> #3  0x00b02522 in dopr (target=0x7ffdb0e38450, format=0xc5fa95 
> ".%s\"", args=0x7ffdb0e38538) at snprintf.c:442
> #4  0x00b01ea6 in pg_vsnprintf (str=0x256df50 "autovacuum: dropping 
> orphan temp table \"postgres.", '\177' ..., count=1024,
> fmt=0xc5fa68 "autovacuum: dropping orphan temp table \"%s.%s.%s\"", 
> args=0x7ffdb0e38538) at snprintf.c:195
> #5  0x00afbadf in pvsnprintf (buf=0x256df50 "autovacuum: dropping 
> orphan temp table \"postgres.", '\177' ..., len=1024,
> fmt=0xc5fa68 "autovacuum: dropping orphan temp table \"%s.%s.%s\"", 
> args=0x7ffdb0e38538) at psprintf.c:110
> #6  0x00afd34b in appendStringInfoVA (str=0x7ffdb0e38550, 
> fmt=0xc5fa68 "autovacuum: dropping orphan temp table \"%s.%s.%s\"", 
> args=0x7ffdb0e38538)
> at stringinfo.c:149
> #7  0x00a970fd in errmsg (fmt=0xc5fa68 "autovacuum: dropping orphan 
> temp table \"%s.%s.%s\"") at elog.c:832
> #8  0x008588d2 in do_autovacuum () at autovacuum.c:2249
> #9  0x00857b29 in AutoVacWorkerMain (argc=0, argv=0x0) at 
> autovacuum.c:1689
> #10 0x0085772f in StartAutoVacWorker () at autovacuum.c:1483
> #11 0x0086e64f in StartAutovacuumWorker () at postmaster.c:5562
> #12 0x0086e106 in sigusr1_handler (postgres_signal_arg=10) at 
> postmaster.c:5279
> #13 <signal handler called>
> #14 0x7f4517e3f933 in __select_nocancel () from /lib64/libc.so.6
> #15 0x00869838 in ServerLoop () at postmaster.c:1691
> #16 0x00869212 in PostmasterMain (argc=3, argv=0x256bd70) at 
> postmaster.c:1400
> #17 0x0077855d in main (argc=3, argv=0x256bd70) at main.c:210
> (gdb)
>
> I have tried to reproduce it with all previously executed queries, but
> now I am not able to reproduce it.

Thanks Prabhat for reporting this issue.

I am able to reproduce this issue at my end. I tested and verified
that this issue is not related to the parallel vacuum patch. I am able to
reproduce this issue on HEAD without the parallel vacuum patch (v37).

I will report this issue in a new thread with a reproducible test case.

Thanks and Regards
Mahendra Thalor
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2019-12-21 Thread Amit Kapila
On Fri, Dec 20, 2019 at 5:17 PM Prabhat Sahu 
wrote:

> Hi,
>
> While testing this feature with parallel vacuum on "TEMPORARY TABLE", I
> got a server crash on PG Head+V36_patch.
>

From the call stack, it is not clear whether it is related to a patch at
all.  Have you checked your test with and without the patch?  The reason is
that the patch doesn't perform a parallel vacuum on temporary tables.


> Changed configuration parameters and Stack trace are as below:
>
> -- Stack trace:
> [centos@parallel-vacuum-testing bin]$ gdb -q -c data/core.1399 postgres
> Reading symbols from
> /home/centos/BLP_Vacuum/postgresql/inst/bin/postgres...done.
> [New LWP 1399]
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
> Core was generated by `postgres: autovacuum worker   postgres
>  '.
> Program terminated with signal 6, Aborted.
> #0  0x7f4517d80337 in raise () from /lib64/libc.so.6
> Missing separate debuginfos, use: debuginfo-install
> glibc-2.17-292.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64
> krb5-libs-1.15.1-37.el7_7.2.x86_64 libcom_err-1.42.9-16.el7.x86_64
> libgcc-4.8.5-39.el7.x86_64 libselinux-2.5-14.1.el7.x86_64
> openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64
> zlib-1.2.7-18.el7.x86_64
> (gdb) bt
> #0  0x7f4517d80337 in raise () from /lib64/libc.so.6
> #1  0x7f4517d81a28 in abort () from /lib64/libc.so.6
> #2  0x00a96341 in ExceptionalCondition (conditionName=0xd18efb
> "strvalue != NULL", errorType=0xd18eeb "FailedAssertion",
> fileName=0xd18ee0 "snprintf.c", lineNumber=442) at assert.c:67
> #3  0x00b02522 in dopr (target=0x7ffdb0e38450, format=0xc5fa95
> ".%s\"", args=0x7ffdb0e38538) at snprintf.c:442
> #4  0x00b01ea6 in pg_vsnprintf (str=0x256df50 "autovacuum:
> dropping orphan temp table \"postgres.", '\177' ...,
> count=1024,
> fmt=0xc5fa68 "autovacuum: dropping orphan temp table \"%s.%s.%s\"",
> args=0x7ffdb0e38538) at snprintf.c:195
> #5  0x00afbadf in pvsnprintf (buf=0x256df50 "autovacuum: dropping
> orphan temp table \"postgres.", '\177' ..., len=1024,
> fmt=0xc5fa68 "autovacuum: dropping orphan temp table \"%s.%s.%s\"",
> args=0x7ffdb0e38538) at psprintf.c:110
> #6  0x00afd34b in appendStringInfoVA (str=0x7ffdb0e38550,
> fmt=0xc5fa68 "autovacuum: dropping orphan temp table \"%s.%s.%s\"",
> args=0x7ffdb0e38538)
> at stringinfo.c:149
> #7  0x00a970fd in errmsg (fmt=0xc5fa68 "autovacuum: dropping
> orphan temp table \"%s.%s.%s\"") at elog.c:832
> #8  0x008588d2 in do_autovacuum () at autovacuum.c:2249
>

The call stack seems to indicate that the backend from which you were doing
the operations on temporary tables crashed somehow, and then autovacuum
tried to clean up that orphaned temporary table.  It crashes while printing
the message for dropping orphan tables.  Below is that message:

ereport(LOG,
        (errmsg("autovacuum: dropping orphan temp table \"%s.%s.%s\"",
                get_database_name(MyDatabaseId),
                get_namespace_name(classForm->relnamespace),
                NameStr(classForm->relname))));

Now, it can fail the assertion only if one of the three parameters (database
name, namespace, relname) is NULL, which I can't see any way to happen
unless you have manually removed the namespace or database.
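
As a minimal, standalone illustration of that failure mode (using plain
assert() in place of PostgreSQL's Assert, and a made-up helper function
rather than the real call path, where the check actually lives inside
snprintf.c):

#include <assert.h>
#include <stddef.h>
#include <stdio.h>

/* Mimics the "strvalue != NULL" check from the stack trace: with
 * assertions enabled, formatting a NULL string with %s aborts. */
static void log_drop_message(const char *dbname, const char *nspname,
                             const char *relname)
{
    assert(dbname != NULL && nspname != NULL && relname != NULL);
    printf("autovacuum: dropping orphan temp table \"%s.%s.%s\"\n",
           dbname, nspname, relname);
}

int main(void)
{
    log_drop_message("postgres", "pg_temp_3", "t1");    /* fine */
    log_drop_message("postgres", NULL, "t1");           /* aborts here */
    return 0;
}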

(gdb)
>
> I have tried to reproduce it with all previously executed queries,
> but now I am not able to reproduce it.
>
>
I am not sure how we can conclude from this whether there is any problem
with this patch, unless you have some steps to show us what you have done.
It could happen if you somehow corrupted the database by manually removing
stuff, or maybe there is some genuine bug, but it is not at all clear.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Block level parallel vacuum

2019-12-20 Thread Prabhat Sahu
Hi,

While testing this feature with parallel vacuum on "TEMPORARY TABLE", I got
a server crash on PG Head+V36_patch.
Changed configuration parameters and Stack trace are as below:

autovacuum = on
max_worker_processes = 4
shared_buffers = 10MB
max_parallel_workers = 8
max_parallel_maintenance_workers = 8
vacuum_cost_limit = 2000
vacuum_cost_delay = 10
min_parallel_table_scan_size = 8MB
min_parallel_index_scan_size = 0

-- Stack trace:
[centos@parallel-vacuum-testing bin]$ gdb -q -c data/core.1399 postgres
Reading symbols from
/home/centos/BLP_Vacuum/postgresql/inst/bin/postgres...done.
[New LWP 1399]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `postgres: autovacuum worker   postgres
   '.
Program terminated with signal 6, Aborted.
#0  0x7f4517d80337 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install
glibc-2.17-292.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64
krb5-libs-1.15.1-37.el7_7.2.x86_64 libcom_err-1.42.9-16.el7.x86_64
libgcc-4.8.5-39.el7.x86_64 libselinux-2.5-14.1.el7.x86_64
openssl-libs-1.0.2k-19.el7.x86_64 pcre-8.32-17.el7.x86_64
zlib-1.2.7-18.el7.x86_64
(gdb) bt
#0  0x7f4517d80337 in raise () from /lib64/libc.so.6
#1  0x7f4517d81a28 in abort () from /lib64/libc.so.6
#2  0x00a96341 in ExceptionalCondition (conditionName=0xd18efb
"strvalue != NULL", errorType=0xd18eeb "FailedAssertion",
fileName=0xd18ee0 "snprintf.c", lineNumber=442) at assert.c:67
#3  0x00b02522 in dopr (target=0x7ffdb0e38450, format=0xc5fa95
".%s\"", args=0x7ffdb0e38538) at snprintf.c:442
#4  0x00b01ea6 in pg_vsnprintf (str=0x256df50 "autovacuum: dropping
orphan temp table \"postgres.", '\177' ..., count=1024,
fmt=0xc5fa68 "autovacuum: dropping orphan temp table \"%s.%s.%s\"",
args=0x7ffdb0e38538) at snprintf.c:195
#5  0x00afbadf in pvsnprintf (buf=0x256df50 "autovacuum: dropping
orphan temp table \"postgres.", '\177' ..., len=1024,
fmt=0xc5fa68 "autovacuum: dropping orphan temp table \"%s.%s.%s\"",
args=0x7ffdb0e38538) at psprintf.c:110
#6  0x00afd34b in appendStringInfoVA (str=0x7ffdb0e38550,
fmt=0xc5fa68 "autovacuum: dropping orphan temp table \"%s.%s.%s\"",
args=0x7ffdb0e38538)
at stringinfo.c:149
#7  0x00a970fd in errmsg (fmt=0xc5fa68 "autovacuum: dropping orphan
temp table \"%s.%s.%s\"") at elog.c:832
#8  0x008588d2 in do_autovacuum () at autovacuum.c:2249
#9  0x00857b29 in AutoVacWorkerMain (argc=0, argv=0x0) at
autovacuum.c:1689
#10 0x0085772f in StartAutoVacWorker () at autovacuum.c:1483
#11 0x0086e64f in StartAutovacuumWorker () at postmaster.c:5562
#12 0x0086e106 in sigusr1_handler (postgres_signal_arg=10) at
postmaster.c:5279
#13 <signal handler called>
#14 0x7f4517e3f933 in __select_nocancel () from /lib64/libc.so.6
#15 0x00869838 in ServerLoop () at postmaster.c:1691
#16 0x00869212 in PostmasterMain (argc=3, argv=0x256bd70) at
postmaster.c:1400
#17 0x0077855d in main (argc=3, argv=0x256bd70) at main.c:210
(gdb)

I have tried to reproduce it with all previously executed queries, but
now I am not able to reproduce it.


On Thu, Dec 19, 2019 at 11:26 AM Mahendra Singh  wrote:

> On Wed, 18 Dec 2019 at 12:07, Amit Kapila  wrote:
> >
> > [please trim extra text before responding]
> >
> > On Wed, Dec 18, 2019 at 12:01 PM Mahendra Singh 
> wrote:
> > >
> > > On Tue, 10 Dec 2019 at 00:30, Mahendra Singh 
> wrote:
> > > >
> > > >
> > > > 3.
> > > > After v35 patch, vacuum.sql regression test is taking too much time
> due to large number of inserts so by reducing number of tuples, we can
> reduce that time.
> > > > +INSERT INTO pvactst SELECT i, array[1,2,3], point(i, i+1) FROM
> generate_series(1,10) i;
> > > >
> > > > here, instead of 10, we can make 1000 to reduce time of this
> test case because we only want to test code and functionality.
> > >
> > > As we added check of min_parallel_index_scan_size in v36 patch set to
> > > decide parallel vacuum, 1000 tuples are not enough to do parallel
> > > vacuum. I can see that we are not launching any workers in vacuum.sql
> > > test case and hence, code coverage also decreased. I am not sure
> > > how to fix this.
> > >
> >
> > Try by setting min_parallel_index_scan_size to 0 in test case.
>
> Thanks Amit for the fix.
>
> Yes, we can add "set min_parallel_index_scan_size = 0;" in vacuum.sql
> test case. I tested by setting min_parallel_index_scan_size=0 and it
> is working fine.
>
> @Masahiko san, please add above line in vacuum.sql test case.
>
> Thanks and Regards
> Mahendra Thalor
> EnterpriseDB: http://www.enterprisedb.com
>
>
>

-- 

With Regards,

Prabhat Kumar Sahu
Skype ID: prabhat.sahu1984
EnterpriseDB Software India Pvt. Ltd.

The Postgres Database Company


Re: [HACKERS] Block level parallel vacuum

2019-12-19 Thread Masahiko Sawada
On Thu, 19 Dec 2019 at 22:48, Robert Haas  wrote:
>
> On Thu, Dec 19, 2019 at 12:41 AM Masahiko Sawada
>  wrote:
> > Attached is the updated version of the patch. This version incorporates
> > the above comments and the comments from Mahendra. I also fixed one
> > bug around determining the indexes that are vacuumed in parallel based
> > on their option and size. Please review it.
>
> I'm not enthusiastic about the fact that 0003 calls the fast option
> 'disable_delay' in some places. I think it would be more clear to call
> it 'fast' everywhere.
>

Agreed.

I've attached the updated version patch that incorporates all the
review comments I got so far.

Regards,

--
Masahiko Sawada    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


v38-0003-Add-FAST-option-to-vacuum-command.patch
Description: Binary data


v38-0001-Add-index-AM-field-and-callback-for-parallel-ind.patch
Description: Binary data


v38-0002-Add-parallel-option-to-VACUUM-command.patch
Description: Binary data


Re: [HACKERS] Block level parallel vacuum

2019-12-19 Thread Robert Haas
On Thu, Dec 19, 2019 at 12:41 AM Masahiko Sawada
 wrote:
> Attached is the updated version of the patch. This version incorporates
> the above comments and the comments from Mahendra. I also fixed one
> bug around determining the indexes that are vacuumed in parallel based
> on their option and size. Please review it.

I'm not enthusiastic about the fact that 0003 calls the fast option
'disable_delay' in some places. I think it would be more clear to call
it 'fast' everywhere.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Block level parallel vacuum

2019-12-19 Thread Amit Kapila
On Thu, Dec 19, 2019 at 11:11 AM Masahiko Sawada
 wrote:
>
> On Wed, 18 Dec 2019 at 19:06, Amit Kapila  wrote:
> >
>
> - /* Cap by the worker we computed at the beginning of parallel lazy vacuum */
> - nworkers = Min(nworkers, lps->pcxt->nworkers);
> + /*
> + * The number of workers required for parallel vacuum phase must be less
> + * than the number of workers with which parallel context is initialized.
> + */
> + Assert(lps->pcxt->nworkers >= nworkers);
>
> Regarding the above change in your patch, I think we need to cap the
> number of workers by lps->pcxt->nworkers because the number of workers
> computed from lps->nindexes_parallel_XXX can be larger than the number
> determined when creating the parallel context, for example, when
> max_parallel_maintenance_workers is smaller than the number of indexes
> that can be vacuumed in parallel at the bulkdelete phase.
>

Oh, right, but then probably you can write a comment, as this is not so obvious.
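
To make the scenario concrete, here is a tiny standalone sketch (the
variable names echo the discussion above, but all numbers are hypothetical
and only the Min() cap mirrors the patch):

#include <stdio.h>

#define Min(x, y) ((x) < (y) ? (x) : (y))

int main(void)
{
    /* Hypothetical setup: 8 indexes support parallel bulkdelete, but the
     * parallel context was created with only 2 workers because
     * max_parallel_maintenance_workers = 2. */
    int nindexes_parallel_bulkdel = 8;
    int pcxt_nworkers = 2;

    /* One index is taken by the leader itself */
    int nworkers = nindexes_parallel_bulkdel - 1;

    /* Without this cap, Assert(lps->pcxt->nworkers >= nworkers) would
     * fail: 7 > 2. */
    nworkers = Min(nworkers, pcxt_nworkers);

    printf("workers to launch: %d\n", nworkers);    /* prints 2 */
    return 0;
}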

> >
> > Few other comments which I have not fixed:
> >
> > 4.
> > + if (Irel[i]->rd_indam->amusemaintenanceworkmem)
> > + nindexes_mwm++;
> > +
> > + /* Skip indexes that don't participate parallel index vacuum */
> > + if (vacoptions == VACUUM_OPTION_NO_PARALLEL ||
> > + RelationGetNumberOfBlocks(Irel[i]) < min_parallel_index_scan_size)
> > + continue;
> >
> > Won't we need to worry about the number of indexes that use
> > maintenance_work_mem only for indexes that can participate in a
> > parallel vacuum? If so, the above checks need to be reversed.
>
> You're right. Fixed.
>
> >
> > 5.
> > /*
> > + * Remember indexes that can participate parallel index vacuum and use
> > + * it for index statistics initialization on DSM because the index
> > + * size can get bigger during vacuum.
> > + */
> > + can_parallel_vacuum[i] = true;
> >
> > I am not able to understand the second part of the comment ("because
> > the index size can get bigger during vacuum.").  What is its
> > relevance?
>
> I meant that the indexes can get bigger even during vacuum. So we need
> to check the size of indexes and determine participation in parallel
> index vacuum in one place.
>

Okay, but that doesn't go with the earlier part of the comment.  We
can either remove it or explain it a bit more.
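
Coming back to point 4 above, here is a standalone sketch of the corrected
ordering (toy types and values; only the control flow of skipping
non-participants before counting mirrors the quoted hunks):

#include <stdbool.h>
#include <stdio.h>

#define VACUUM_OPTION_NO_PARALLEL 0     /* illustrative value */

typedef struct
{
    int  vacoptions;    /* parallel-vacuum phases the AM supports */
    bool use_mwm;       /* does the AM use maintenance_work_mem? */
    int  nblocks;       /* index size in blocks */
} IndexSketch;

int main(void)
{
    /* Toy data: the second index uses maintenance_work_mem but cannot
     * participate in a parallel vacuum, so it must not be counted. */
    IndexSketch idx[] = {
        {1, true, 100},
        {VACUUM_OPTION_NO_PARALLEL, true, 100},
        {1, false, 100},
    };
    int min_parallel_index_scan_size = 0;
    int nindexes_mwm = 0;

    for (int i = 0; i < 3; i++)
    {
        /* Skip indexes that don't participate in parallel vacuum... */
        if (idx[i].vacoptions == VACUUM_OPTION_NO_PARALLEL ||
            idx[i].nblocks < min_parallel_index_scan_size)
            continue;

        /* ...then count maintenance_work_mem users among the rest */
        if (idx[i].use_mwm)
            nindexes_mwm++;
    }

    printf("nindexes_mwm = %d\n", nindexes_mwm);    /* prints 1, not 2 */
    return 0;
}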

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] Block level parallel vacuum

2019-12-18 Thread Mahendra Singh
On Wed, 18 Dec 2019 at 12:07, Amit Kapila  wrote:
>
> [please trim extra text before responding]
>
> On Wed, Dec 18, 2019 at 12:01 PM Mahendra Singh  wrote:
> >
> > On Tue, 10 Dec 2019 at 00:30, Mahendra Singh  wrote:
> > >
> > >
> > > 3.
> > > After v35 patch, vacuum.sql regression test is taking too much time due 
> > > to large number of inserts so by reducing number of tuples, we can reduce 
> > > that time.
> > > +INSERT INTO pvactst SELECT i, array[1,2,3], point(i, i+1) FROM 
> > > generate_series(1,10) i;
> > >
> > > here, instead of 10, we can make 1000 to reduce time of this test 
> > > case because we only want to test code and functionality.
> >
> > As we added check of min_parallel_index_scan_size in v36 patch set to
> > decide parallel vacuum, 1000 tuples are not enough to do parallel
> > vacuum. I can see that we are not launching any workers in vacuum.sql
> > test case and hence, code coverage also decreased. I am not sure
> > how to fix this.
> >
>
> Try by setting min_parallel_index_scan_size to 0 in test case.

Thanks Amit for the fix.

Yes, we can add "set min_parallel_index_scan_size = 0;" in vacuum.sql
test case. I tested by setting min_parallel_index_scan_size=0 and it
is working fine.

@Masahiko san, please add above line in vacuum.sql test case.

Thanks and Regards
Mahendra Thalor
EnterpriseDB: http://www.enterprisedb.com



