Re: [HACKERS] ECPG: WHENEVER statement with DO CONTINUE action

2017-08-25 Thread Vinayak Pokale
On Aug 25, 2017 10:45 PM, "Michael Meskes" <mes...@postgresql.org> wrote:
>
> > The v3 patch looks good to me. I've changed this patch status to Ready
> > for Committer.
>
> Thank you all, committed.

Thank you very much.

Regards,
Vinayak Pokale


Re: [HACKERS] ECPG: WHENEVER statement with DO CONTINUE action

2017-08-25 Thread vinayak


On 2017/08/25 17:13, Masahiko Sawada wrote:

On Fri, Aug 25, 2017 at 4:27 PM, vinayak
<pokale_vinayak...@lab.ntt.co.jp> wrote:


On 2017/08/25 16:18, Masahiko Sawada wrote:

On Fri, Aug 25, 2017 at 2:57 PM, vinayak
<pokale_vinayak...@lab.ntt.co.jp> wrote:

Hi Sawada-san,


On 2017/08/25 11:07, Masahiko Sawada wrote:

On Fri, Aug 18, 2017 at 5:20 PM, vinayak
<pokale_vinayak...@lab.ntt.co.jp> wrote:

On 2017/06/20 17:35, vinayak wrote:

Hi Sawada-san,

On 2017/06/20 17:22, Masahiko Sawada wrote:

On Tue, Jun 20, 2017 at 1:51 PM, vinayak
<pokale_vinayak...@lab.ntt.co.jp> wrote:


On 2017/06/12 13:09, vinayak wrote:

Hi,

On 2017/06/10 12:23, Vinayak Pokale wrote:

Thank you for your reply

On Jun 9, 2017 5:39 PM, "Michael Meskes" <mes...@postgresql.org>
wrote:

Could you please add a "DO CONTINUE" case to one of the test cases? Or
add a new one? We would need a test case IMO.


Yes I will add test case and send updated patch.

I have added new test case for DO CONTINUE.
Please check the attached patch.

I have added this in Sept. CF
https://commitfest.postgresql.org/14/1173/


--
In whenever_do_continue.pgc file, the following line seems not to be
processed successfully by ecpg but should we fix that?

+
+   exec sql whenever sqlerror continue;
+

Also, you wrote the test case using "WHENEVER sqlerror DO CONTINUE"
action but that seems not to emit sqlerror, so "DO CONTINUE" is not
executed. I think the test case for DO CONTINUE should be a C code
that executes the "continue" clause.

Thank you for testing the patch.
I agreed with your comments. I will update the patch.

Please check the attached updated patch.


Thank you for updating.

The regression test failed after applying the latest patch with git am.

*** /tmp/pg/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.c  2017-08-24 20:01:10.023201132 -0700
--- /tmp/pg/src/interfaces/ecpg/test/results/preproc-whenever_do_continue.c   2017-08-24 20:22:54.308200853 -0700
***
*** 140,147 
   printf("%s %7.2f %9.2f\n", emp.ename, emp.sal, emp.comm);
   }

!   /* This 'CONTINUE' shuts off the 'DO CONTINUE' and allow the program to
!proceed if any further errors do occur. */
   /* exec sql whenever sqlerror  continue ; */
 #line 53 "whenever_do_continue.pgc"

--- 140,147 
   printf("%s %7.2f %9.2f\n", emp.ename, emp.sal, emp.comm);
   }

!   /* This 'CONTINUE' shuts off the 'DO CONTINUE' and allow the program to
!   proceed if any further errors do occur. */
   /* exec sql whenever sqlerror  continue ; */
 #line 53 "whenever_do_continue.pgc"

==

+   /* This 'CONTINUE' shuts off the 'DO CONTINUE' and allow the program to
+   proceed if any further errors do occur. */

I think this comment should obey the coding style guide.

Thank you for testing.

I have updated the patch.
PFA.


Thank you for updating the patch. It seems not to incorporate my
second review comment, though. I've attached an updated patch that also
fixes the comment style in the whenever_do_continue.pgc file. Please
find it attached.

Sorry, I missed it.
Thank you for fixing the comment style.


The v3 patch looks good to me. I've changed this patch status to Ready
for Committer.

Thank you for updating the status in the CF.
We can wait for committers feedback.

Regards,
Vinayak Pokale
NTT Open Source Software Center


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ECPG: WHENEVER statement with DO CONTINUE action

2017-08-25 Thread vinayak



On 2017/08/25 16:18, Masahiko Sawada wrote:

On Fri, Aug 25, 2017 at 2:57 PM, vinayak
<pokale_vinayak...@lab.ntt.co.jp> wrote:

Hi Sawada-san,


On 2017/08/25 11:07, Masahiko Sawada wrote:

On Fri, Aug 18, 2017 at 5:20 PM, vinayak
<pokale_vinayak...@lab.ntt.co.jp> wrote:

On 2017/06/20 17:35, vinayak wrote:

Hi Sawada-san,

On 2017/06/20 17:22, Masahiko Sawada wrote:

On Tue, Jun 20, 2017 at 1:51 PM, vinayak
<pokale_vinayak...@lab.ntt.co.jp> wrote:


On 2017/06/12 13:09, vinayak wrote:

Hi,

On 2017/06/10 12:23, Vinayak Pokale wrote:

Thank you for your reply

On Jun 9, 2017 5:39 PM, "Michael Meskes" <mes...@postgresql.org>
wrote:

Could you please add a "DO CONTINUE" case to one of the test cases? Or
add a new one? We would need a test case IMO.


Yes I will add test case and send updated patch.

I have added new test case for DO CONTINUE.
Please check the attached patch.

I have added this in Sept. CF
https://commitfest.postgresql.org/14/1173/


--
In whenever_do_continue.pgc file, the following line seems not to be
processed successfully by ecpg but should we fix that?

+
+   exec sql whenever sqlerror continue;
+

Also, you wrote the test case using "WHENEVER sqlerror DO CONTINUE"
action but that seems not to emit sqlerror, so "DO CONTINUE" is not
executed. I think the test case for DO CONTINUE should be a C code
that executes the "continue" clause.

Thank you for testing the patch.
I agreed with your comments. I will update the patch.

Please check the attached updated patch.


Thank you for updating.

The regression test failed after applying the latest patch with git am.

*** /tmp/pg/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.c  2017-08-24 20:01:10.023201132 -0700
--- /tmp/pg/src/interfaces/ecpg/test/results/preproc-whenever_do_continue.c   2017-08-24 20:22:54.308200853 -0700
***
*** 140,147 
  printf("%s %7.2f %9.2f\n", emp.ename, emp.sal, emp.comm);
  }

!   /* This 'CONTINUE' shuts off the 'DO CONTINUE' and allow the program to
!proceed if any further errors do occur. */
  /* exec sql whenever sqlerror  continue ; */
#line 53 "whenever_do_continue.pgc"

--- 140,147 
  printf("%s %7.2f %9.2f\n", emp.ename, emp.sal, emp.comm);
  }

!   /* This 'CONTINUE' shuts off the 'DO CONTINUE' and allow the program to
!   proceed if any further errors do occur. */
  /* exec sql whenever sqlerror  continue ; */
#line 53 "whenever_do_continue.pgc"

==

+   /* This 'CONTINUE' shuts off the 'DO CONTINUE' and allow the program to
+   proceed if any further errors do occur. */

I think this comment should obey the coding style guide.

Thank you for testing.

I have updated the patch.
PFA.


Thank you for updating the patch. It seems not to incorporate my
second review comment, though. I've attached an updated patch that also
fixes the comment style in the whenever_do_continue.pgc file. Please
find it attached.

Sorry, I missed it.
Thank you for fixing the comment style.

Regards,
Vinayak Pokale
NTT Open Source Software Center


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ECPG: WHENEVER statement with DO CONTINUE action

2017-08-25 Thread vinayak

Hi Sawada-san,

On 2017/08/25 11:07, Masahiko Sawada wrote:

On Fri, Aug 18, 2017 at 5:20 PM, vinayak
<pokale_vinayak...@lab.ntt.co.jp> wrote:

On 2017/06/20 17:35, vinayak wrote:

Hi Sawada-san,

On 2017/06/20 17:22, Masahiko Sawada wrote:

On Tue, Jun 20, 2017 at 1:51 PM, vinayak
<pokale_vinayak...@lab.ntt.co.jp> wrote:


On 2017/06/12 13:09, vinayak wrote:

Hi,

On 2017/06/10 12:23, Vinayak Pokale wrote:

Thank you for your reply

On Jun 9, 2017 5:39 PM, "Michael Meskes" <mes...@postgresql.org> wrote:

Could you please add a "DO CONTINUE" case to one of the test cases? Or
add a new one? We would need a test case IMO.


Yes I will add test case and send updated patch.

I have added new test case for DO CONTINUE.
Please check the attached patch.

I have added this in Sept. CF
https://commitfest.postgresql.org/14/1173/


--
In whenever_do_continue.pgc file, the following line seems not to be
processed successfully by ecpg but should we fix that?

+
+   exec sql whenever sqlerror continue;
+

Also, you wrote the test case using "WHENEVER sqlerror DO CONTINUE"
action but that seems not to emit sqlerror, so "DO CONTINUE" is not
executed. I think the test case for DO CONTINUE should be a C code
that executes the "continue" clause.

Thank you for testing the patch.
I agreed with your comments. I will update the patch.

Please check the attached updated patch.


Thank you for updating.

The regression test failed after applying the latest patch with git am.

*** /tmp/pg/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.c  2017-08-24 20:01:10.023201132 -0700
--- /tmp/pg/src/interfaces/ecpg/test/results/preproc-whenever_do_continue.c   2017-08-24 20:22:54.308200853 -0700
***
*** 140,147 
 printf("%s %7.2f %9.2f\n", emp.ename, emp.sal, emp.comm);
 }

!   /* This 'CONTINUE' shuts off the 'DO CONTINUE' and allow the program to
!proceed if any further errors do occur. */
 /* exec sql whenever sqlerror  continue ; */
   #line 53 "whenever_do_continue.pgc"

--- 140,147 
 printf("%s %7.2f %9.2f\n", emp.ename, emp.sal, emp.comm);
 }

!   /* This 'CONTINUE' shuts off the 'DO CONTINUE' and allow the program to
!   proceed if any further errors do occur. */
 /* exec sql whenever sqlerror  continue ; */
   #line 53 "whenever_do_continue.pgc"

==

+   /* This 'CONTINUE' shuts off the 'DO CONTINUE' and allow the program to
+   proceed if any further errors do occur. */

I think this comment should obey the coding style guide.

Thank you for testing.

I have updated the patch.
PFA.

Regards,
Vinayak Pokale
NTT Open Source Software Center
>From cd71bf7229a8566cadfde3d0e89b1b445baf1fee Mon Sep 17 00:00:00 2001
From: Vinayak Pokale <vinpok...@gmail.com>
Date: Thu, 22 Jun 2017 11:08:38 +0900
Subject: [PATCH] WHENEVER statement DO CONTINUE support

---
 doc/src/sgml/ecpg.sgml |   12 ++
 src/interfaces/ecpg/preproc/ecpg.trailer   |6 +
 src/interfaces/ecpg/preproc/output.c   |3 +
 src/interfaces/ecpg/test/ecpg_schedule |1 +
 .../test/expected/preproc-whenever_do_continue.c   |  159 
 .../expected/preproc-whenever_do_continue.stderr   |  112 ++
 .../expected/preproc-whenever_do_continue.stdout   |2 +
 src/interfaces/ecpg/test/preproc/Makefile  |1 +
 .../ecpg/test/preproc/whenever_do_continue.pgc |   61 
 9 files changed, 357 insertions(+), 0 deletions(-)
 create mode 100644 src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.c
 create mode 100644 src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.stderr
 create mode 100644 src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.stdout
 create mode 100644 src/interfaces/ecpg/test/preproc/whenever_do_continue.pgc

diff --git a/doc/src/sgml/ecpg.sgml b/doc/src/sgml/ecpg.sgml
index f13a0e9..3cb4001 100644
--- a/doc/src/sgml/ecpg.sgml
+++ b/doc/src/sgml/ecpg.sgml
@@ -4763,6 +4763,17 @@ EXEC SQL WHENEVER condition action
 
  
+  DO CONTINUE
+  
+   
+Execute the C statement continue.  This should
+only be used in loop statements.  If executed, it will cause the flow
+of control to return to the top of the loop.
+   
+  
+ 
+
+ 
   CALL name (args)
   DO name (args)
   
@@ -7799,6 +7810,7 @@ WHENEVER { NOT FOUND | SQLERROR | SQLWARNING } ac
 
 EXEC SQL WHENEVER NOT FOUND CONTINUE;
 EXEC SQL WHENEVER NOT FOUND DO BREAK;
+EXEC SQL WHENEVER NOT FOUND DO CONTINUE;
 EXEC SQL WHENEVER SQLWARNING SQLPRINT;
 EXEC SQL WHENEVER SQLWARNING DO warn();
 EXEC SQL WHENEVER SQLERROR sqlprint;
diff --git a/src/interfaces/ecpg/preproc/ecpg.trailer b/src/interfaces/ecpg/prepro

Re: [HACKERS] ECPG: WHENEVER statement with DO CONTINUE action

2017-08-18 Thread vinayak


On 2017/06/20 17:35, vinayak wrote:

Hi Sawada-san,

On 2017/06/20 17:22, Masahiko Sawada wrote:

On Tue, Jun 20, 2017 at 1:51 PM, vinayak
<pokale_vinayak...@lab.ntt.co.jp> wrote:


On 2017/06/12 13:09, vinayak wrote:

Hi,

On 2017/06/10 12:23, Vinayak Pokale wrote:

Thank you for your reply

On Jun 9, 2017 5:39 PM, "Michael Meskes" <mes...@postgresql.org> wrote:

Could you please add a "DO CONTINUE" case to one of the test cases? Or
add a new one? We would need a test case IMO.


Yes I will add test case and send updated patch.

I have added new test case for DO CONTINUE.
Please check the attached patch.

I have added this in Sept. CF
https://commitfest.postgresql.org/14/1173/


--
In whenever_do_continue.pgc file, the following line seems not to be
processed successfully by ecpg but should we fix that?

+
+   exec sql whenever sqlerror continue;
+

Also, you wrote the test case using "WHENEVER sqlerror DO CONTINUE"
action but that seems not to emit sqlerror, so "DO CONTINUE" is not
executed. I think the test case for DO CONTINUE should be a C code
that executes the "continue" clause.

Thank you for testing the patch.
I agreed with your comments. I will update the patch.

Please check the attached updated patch.

Regards,
Vinayak Pokale
NTT Open Source Software Center
>From cd71bf7229a8566cadfde3d0e89b1b445baf1fee Mon Sep 17 00:00:00 2001
From: Vinayak Pokale <vinpok...@gmail.com>
Date: Thu, 22 Jun 2017 11:08:38 +0900
Subject: [PATCH] WHENEVER statement DO CONTINUE support

---
 doc/src/sgml/ecpg.sgml |   12 ++
 src/interfaces/ecpg/preproc/ecpg.trailer   |6 +
 src/interfaces/ecpg/preproc/output.c   |3 +
 src/interfaces/ecpg/test/ecpg_schedule |1 +
 .../test/expected/preproc-whenever_do_continue.c   |  159 
 .../expected/preproc-whenever_do_continue.stderr   |  112 ++
 .../expected/preproc-whenever_do_continue.stdout   |2 +
 src/interfaces/ecpg/test/preproc/Makefile  |1 +
 .../ecpg/test/preproc/whenever_do_continue.pgc |   61 
 9 files changed, 357 insertions(+), 0 deletions(-)
 create mode 100644 src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.c
 create mode 100644 src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.stderr
 create mode 100644 src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.stdout
 create mode 100644 src/interfaces/ecpg/test/preproc/whenever_do_continue.pgc

diff --git a/doc/src/sgml/ecpg.sgml b/doc/src/sgml/ecpg.sgml
index f13a0e9..3cb4001 100644
--- a/doc/src/sgml/ecpg.sgml
+++ b/doc/src/sgml/ecpg.sgml
@@ -4763,6 +4763,17 @@ EXEC SQL WHENEVER condition action
 
  
+  DO CONTINUE
+  
+   
+Execute the C statement continue.  This should
+only be used in loop statements.  If executed, it will cause the flow
+of control to return to the top of the loop.
+   
+  
+ 
+
+ 
   CALL name (args)
   DO name (args)
   
@@ -7799,6 +7810,7 @@ WHENEVER { NOT FOUND | SQLERROR | SQLWARNING } ac
 
 EXEC SQL WHENEVER NOT FOUND CONTINUE;
 EXEC SQL WHENEVER NOT FOUND DO BREAK;
+EXEC SQL WHENEVER NOT FOUND DO CONTINUE;
 EXEC SQL WHENEVER SQLWARNING SQLPRINT;
 EXEC SQL WHENEVER SQLWARNING DO warn();
 EXEC SQL WHENEVER SQLERROR sqlprint;
diff --git a/src/interfaces/ecpg/preproc/ecpg.trailer b/src/interfaces/ecpg/preproc/ecpg.trailer
index 1c10879..b42bca4 100644
--- a/src/interfaces/ecpg/preproc/ecpg.trailer
+++ b/src/interfaces/ecpg/preproc/ecpg.trailer
@@ -1454,6 +1454,12 @@ action : CONTINUE_P
 			$$.command = NULL;
 			$$.str = mm_strdup("break");
 		}
+		| DO CONTINUE_P
+		{
+			$$.code = W_CONTINUE;
+			$$.command = NULL;
+			$$.str = mm_strdup("continue");
+		}
 		| SQL_CALL name '(' c_args ')'
 		{
 			$$.code = W_DO;
diff --git a/src/interfaces/ecpg/preproc/output.c b/src/interfaces/ecpg/preproc/output.c
index 59d5d30..14d7066 100644
--- a/src/interfaces/ecpg/preproc/output.c
+++ b/src/interfaces/ecpg/preproc/output.c
@@ -51,6 +51,9 @@ print_action(struct when * w)
 		case W_BREAK:
 			fprintf(base_yyout, "break;");
 			break;
+		case W_CONTINUE:
+			fprintf(base_yyout, "continue;");
+			break;
 		default:
 			fprintf(base_yyout, "{/* %d not implemented yet */}", w->code);
 			break;
diff --git a/src/interfaces/ecpg/test/ecpg_schedule b/src/interfaces/ecpg/test/ecpg_schedule
index c3ec125..cff4eeb 100644
--- a/src/interfaces/ecpg/test/ecpg_schedule
+++ b/src/interfaces/ecpg/test/ecpg_schedule
@@ -28,6 +28,7 @@ test: preproc/type
 test: preproc/variable
 test: preproc/outofscope
 test: preproc/whenever
+test: preproc/whenever_do_continue
 test: sql/array
 test: sql/binary
 test: sql/code100
diff --git a/src/interfaces/ecpg/test/expected/preproc-whenever_do_continue.c b/src/interfaces/ecpg/test/expected/preproc-when

Re: [HACKERS] ECPG: WHENEVER statement with DO CONTINUE action

2017-06-20 Thread vinayak

Hi Sawada-san,

On 2017/06/20 17:22, Masahiko Sawada wrote:

On Tue, Jun 20, 2017 at 1:51 PM, vinayak
<pokale_vinayak...@lab.ntt.co.jp> wrote:


On 2017/06/12 13:09, vinayak wrote:

Hi,

On 2017/06/10 12:23, Vinayak Pokale wrote:

Thank you for your reply

On Jun 9, 2017 5:39 PM, "Michael Meskes" <mes...@postgresql.org> wrote:

Could you please add a "DO CONTINUE" case to one of the test cases? Or
add a new one? We would need a test case IMO.


Yes I will add test case and send updated patch.

I have added new test case for DO CONTINUE.
Please check the attached patch.

I have added this in Sept. CF
https://commitfest.postgresql.org/14/1173/


I got the following warning by git show --check. I think you should
remove unnecessary whitespace. Also the code indent of
whenever_do_continue.pgc seems to need to be adjusted.

$ git show --check
commit a854aa0130589b7bd43b2c6c1c86651be91b1f59
Author: Vinayak Pokale <vinpok...@gmail.com>
Date:   Mon Jun 12 13:03:21 2017 +0900

 WHENEVER statement DO CONTINUE support

src/interfaces/ecpg/test/preproc/whenever_do_continue.pgc:16: trailing
whitespace.
+
src/interfaces/ecpg/test/preproc/whenever_do_continue.pgc:21: trailing
whitespace.
+
src/interfaces/ecpg/test/preproc/whenever_do_continue.pgc:24: trailing
whitespace.
+
src/interfaces/ecpg/test/preproc/whenever_do_continue.pgc:27: trailing
whitespace.
+
src/interfaces/ecpg/test/preproc/whenever_do_continue.pgc:35: trailing
whitespace.
+
src/interfaces/ecpg/test/preproc/whenever_do_continue.pgc:37: trailing
whitespace.
+
src/interfaces/ecpg/test/preproc/whenever_do_continue.pgc:39: trailing
whitespace.
+
src/interfaces/ecpg/test/preproc/whenever_do_continue.pgc:41: trailing
whitespace.
+
src/interfaces/ecpg/test/preproc/whenever_do_continue.pgc:47: trailing
whitespace.
+
src/interfaces/ecpg/test/preproc/whenever_do_continue.pgc:49: trailing
whitespace.
+
src/interfaces/ecpg/test/preproc/whenever_do_continue.pgc:52: trailing
whitespace.
+
src/interfaces/ecpg/test/preproc/whenever_do_continue.pgc:54: trailing
whitespace.
+
src/interfaces/ecpg/test/preproc/whenever_do_continue.pgc:1: new blank
line at EOF.

--
In whenever_do_continue.pgc file, the following line seems not to be
processed successfully by ecpg but should we fix that?

+
+   exec sql whenever sqlerror continue;
+

Also, you wrote the test case using "WHENEVER sqlerror DO CONTINUE"
action but that seems not to emit sqlerror, so "DO CONTINUE" is not
executed. I think the test case for DO CONTINUE should be a C code
that executes the "continue" clause.

Thank you for testing the patch.
I agreed with your comments. I will update the patch.

Regards,
Vinayak Pokale
NTT Open Source Software Center


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ECPG: WHENEVER statement with DO CONTINUE action

2017-06-19 Thread vinayak



On 2017/06/12 13:09, vinayak wrote:


Hi,

On 2017/06/10 12:23, Vinayak Pokale wrote:


Thank you for your reply

On Jun 9, 2017 5:39 PM, "Michael Meskes" <mes...@postgresql.org> wrote:

>
> Could you please add a "DO CONTINUE" case to one of the test cases? Or
> add a new one? We would need a test case IMO.
>
Yes I will add test case and send updated patch.


I have added new test case for DO CONTINUE.
Please check the attached patch.


I have added this in Sept. CF
https://commitfest.postgresql.org/14/1173/

Regards,
Vinayak Pokale
NTT Open Source Software Center



Re: [HACKERS] ECPG: WHENEVER statement with DO CONTINUE action

2017-06-11 Thread vinayak

Hi,

On 2017/06/10 12:23, Vinayak Pokale wrote:


Thank you for your reply

On Jun 9, 2017 5:39 PM, "Michael Meskes" <mes...@postgresql.org> wrote:

>
> Could you please add a "DO CONTINUE" case to one of the test cases? Or
> add a new one? We would need a test case IMO.
>
Yes I will add test case and send updated patch.


I have added new test case for DO CONTINUE.
Please check the attached patch.

Regards,
Vinayak Pokale
NTT Open Source Software Center


WHENEVER-statement-DO-CONTINUE-support.patch
Description: binary/octet-stream

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ECPG: WHENEVER statement with DO CONTINUE action

2017-06-09 Thread Vinayak Pokale
Thank you for your reply

On Jun 9, 2017 5:39 PM, "Michael Meskes" <mes...@postgresql.org> wrote:
>
> Hi,
>
> > To develop the ECPG application more efficiently and improve
> > portability,
> > I would like to suggest one minor improvement "WHENEVER condition DO
> > CONTINUE" support in ECPG.
> > Oracle Pro*C supports WHENEVER statement with DO CONTINUE action.[1]
> >
> > EXEC SQL WHENEVER SQLERROR CONTINUE;
> > is not same as
> > EXEC SQL WHENEVER SQLERROR DO CONTINUE;
> >
> > The CONTINUE action instructs the client application to proceed to
> > the next statement, whereas the DO CONTINUE action instructs the
> > client application to emit a C continue statement, returning the flow
> > of control to the beginning of the enclosing loop.
>
> This did actually escape me. Thanks for bringing it to our attention and
> fixing this missing functionality.
>
> > I have tried to implement it. Please check the attached patch.
> > Please give me feedback.
> > ...
>
> Could you please add a "DO CONTINUE" case to one of the test cases? Or
> add a new one? We would need a test case IMO.
>
Yes I will add test case and send updated patch.

Regards,
Vinayak Pokale


[HACKERS] ECPG: WHENEVER statement with DO CONTINUE action

2017-06-09 Thread vinayak

Hello,

To develop the ECPG application more efficiently and improve portability,
I would like to suggest one minor improvement: "WHENEVER condition DO
CONTINUE" support in ECPG.

Oracle Pro*C supports WHENEVER statement with DO CONTINUE action.[1]

EXEC SQL WHENEVER SQLERROR CONTINUE;
is not same as
EXEC SQL WHENEVER SQLERROR DO CONTINUE;

The CONTINUE action instructs the client application to proceed to the
next statement, whereas the DO CONTINUE action instructs the client
application to emit a C continue statement, returning the flow of
control to the beginning of the enclosing loop.
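
To illustrate, here is a minimal sketch of the intended usage. The
connection target "testdb", the table "emp", and the deliberately failing
division are hypothetical; they only show how DO CONTINUE skips a bad row
and how the action is shut off again after the loop.

#include <stdio.h>

EXEC SQL INCLUDE sqlca;

int
main(void)
{
	EXEC SQL BEGIN DECLARE SECTION;
	char	ename[20];
	double	sal;
	double	comm;
	double	ratio;
	EXEC SQL END DECLARE SECTION;

	EXEC SQL CONNECT TO testdb;

	EXEC SQL DECLARE cur CURSOR FOR SELECT ename, sal, comm FROM emp;
	EXEC SQL OPEN cur;

	EXEC SQL WHENEVER NOT FOUND DO BREAK;
	while (1)
	{
		/* Any SQLERROR in the statements below makes the generated code
		 * execute a C "continue", restarting this while loop. */
		EXEC SQL WHENEVER SQLERROR DO CONTINUE;
		EXEC SQL FETCH cur INTO :ename, :sal, :comm;

		/* Raises SQLERROR (division by zero) when comm is 0, so such
		 * rows are simply skipped instead of aborting the loop. */
		EXEC SQL SELECT :sal / :comm INTO :ratio;

		printf("%s %7.2f\n", ename, ratio);
	}

	/* Shut the actions off again so statements after the loop do not
	 * emit "break" or "continue" outside of any loop. */
	EXEC SQL WHENEVER NOT FOUND CONTINUE;
	EXEC SQL WHENEVER SQLERROR CONTINUE;
	EXEC SQL CLOSE cur;
	EXEC SQL DISCONNECT ALL;
	return 0;
}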


I have tried to implement it. Please check the attached patch.

Please give me feedback.


[1]https://docs.oracle.com/cd/B28359_01/appdev.111/b28427/pc_09err.htm#i12340


Regards,

Vinayak Pokale
NTT Open Source Software Center



whenever-do-continue.patch
Description: binary/octet-stream

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] ECPG: pg_type.h file is not synced

2017-05-23 Thread vinayak

Hello,

I am looking at the ECPG source code. In the "ecpg/ecpglib/pg_type.h" file I
have seen the following comment:


 * keep this in sync with src/include/catalog/pg_type.h

But I think the "ecpg/ecpglib/pg_type.h" file is currently not synced 
with the above file.


I have added the remaining types in the attached patch.

I would like to know whether we can add remaining types in the 
"ecpg/ecpglib/pg_type.h" file or not?


Regards,

Vinayak Pokale

NTT Open Source Software Center



ECPG_pg_type_h.patch
Description: binary/octet-stream

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] New command to monitor progression of long running queries

2017-05-05 Thread Vinayak Pokale
On Mon, Apr 17, 2017 at 9:09 PM, Remi Colinet <remi.coli...@gmail.com>
wrote:

> Hello,
>
> I've implemented a new command named PROGRESS to monitor progression of
> long running SQL queries in a backend process.
>
Thank you for the patch.

I am testing your patch, but after applying it some other regression tests
failed.

$ make installcheck
13 of 178 tests failed.

Regards,
Vinayak


Re: [HACKERS] ANALYZE command progress checker

2017-04-03 Thread vinayak


On 2017/03/30 17:39, Masahiko Sawada wrote:

On Wed, Mar 29, 2017 at 5:38 PM, vinayak
<pokale_vinayak...@lab.ntt.co.jp> wrote:

On 2017/03/25 4:30, Robert Haas wrote:

On Fri, Mar 24, 2017 at 3:41 AM, vinayak
<pokale_vinayak...@lab.ntt.co.jp> wrote:

I have updated the patch.

You can't change the definition of AcquireSampleRowsFunc without
updating the documentation in fdwhandler.sgml, but I think I don't
immediately understand why that's a thing we want to do at all if
neither file_fdw nor postgres_fdw are going to use the additional
argument.  It seems to be entirely pointless because the new value
being passed down is always zero and even if the callee were to update
it, do_analyze_rel() would just ignore the change on return.  Am I
missing something here?  Also, the whitespace-only changes to fdwapi.h
related to AcquireSampleRowsFunc going to get undone by pgindent, so
even if there's some reason to change that you should leave the lines
that don't need changing untouched.

I reviewed v7 patch.

When I ran ANALYZE command to the table having 5 millions rows with 3
child tables, the progress report I got shows the strange result. The
following result was output after sampled all target rows from child
tables (i.g, after finished acquire_inherited_sample_rows). I think
the progress information should report 100% complete at this point.

#= select * from pg_stat_progress_analyze ;
  pid  | datid | datname  | relid |              phase               | num_target_sample_rows | num_rows_sampled
-------+-------+----------+-------+----------------------------------+------------------------+------------------
 81719 | 13215 | postgres | 16440 | collecting inherited sample rows |                    300 |              180
(1 row)

I guess the cause of this is that you always count the number of
sampled rows in acquire_sample_rows starting from 0 for each child
table. num_rows_sampled is reset whenever starting collecting sample
rows.
Also, even if table has child table, the total number of sampling row
is not changed. I think that main differences between analyzing on
normal table and on partitioned table is where we fetch the sample row
from. So I'm not sure if adding "computing inherited statistics" phase
is desirable. If you want to report progress of collecting sample rows
on child table I think that you might want to add the information
showing which child table is being analyzed.

--
pg_stat_progress_analyze.num_target_sample_rows is set without regard to
the number of rows the target table actually has. So if the table has
fewer rows than the target number of rows, num_rows_sampled never
reaches num_target_sample_rows.

--
@@ -1180,6 +1213,9 @@ acquire_sample_rows(Relation onerel, int elevel,
 }

 samplerows += 1;
+
+   /* Report number of rows sampled so far */
+
pgstat_progress_update_param(PROGRESS_ANALYZE_NUM_ROWS_SAMPLED,
numrows);
 }
 }

I think it's wrong to use the numrows variable to report the progress
of the number of rows sampled. As the following comment mentions, we
first save rows into rows[] and increment numrows until numrows reaches
targrows. After that we may still replace stored sample rows with newly
sampled rows. That is, after collecting "numrows" rows, analyze can
still take a long time to fetch and replace sample rows. To compute the
progress of collecting sample rows, IMO reporting the number of blocks
we have scanned so far is better than the number of sample rows.
Thoughts?


 /*
  * The first targrows sample rows are simply copied into the
  * reservoir. Then we start replacing tuples in the sample
  * until we reach the end of the relation.  This algorithm is
  * from Jeff Vitter's paper (see full citation below). It
  * works by repeatedly computing the number of tuples to skip
  * before selecting a tuple, which replaces a randomly chosen
  * element of the reservoir (current set of tuples).  At all
  * times the reservoir is a true random sample of the tuples
  * we've passed over so far, so when we fall off the end of
  * the relation we're done.
  */

Looks good to me.
In the attached patch I have reported number of blocks scanned so far 
instead of number of sample rows.


In the 'collecting inherited sample rows' phase, child_relid is reported
as a separate column.
child_relid is populated only in the 'collecting inherited sample rows'
phase; for a single relation it returns 0.

I am not sure whether this column would be helpful or not.
Any suggestions are welcome.

+/* Reset rows processed to zero for the next column */
+
pgstat_progress_update_param(PROGRESS_ANALYZE_NUM_ROWS_PROCESSED,
0);

This seems like a bad design.  If you see a value other than zero in
this field, you still won't know how much progress analyze has made,
because you don't know which column you're processing.

Re: [HACKERS] ANALYZE command progress checker

2017-03-30 Thread vinayak


On 2017/03/30 17:39, Masahiko Sawada wrote:

On Wed, Mar 29, 2017 at 5:38 PM, vinayak
<pokale_vinayak...@lab.ntt.co.jp> wrote:

On 2017/03/25 4:30, Robert Haas wrote:

On Fri, Mar 24, 2017 at 3:41 AM, vinayak
<pokale_vinayak...@lab.ntt.co.jp> wrote:

I have updated the patch.

You can't change the definition of AcquireSampleRowsFunc without
updating the documentation in fdwhandler.sgml, but I think I don't
immediately understand why that's a thing we want to do at all if
neither file_fdw nor postgres_fdw are going to use the additional
argument.  It seems to be entirely pointless because the new value
being passed down is always zero and even if the callee were to update
it, do_analyze_rel() would just ignore the change on return.  Am I
missing something here?  Also, the whitespace-only changes to fdwapi.h
related to AcquireSampleRowsFunc going to get undone by pgindent, so
even if there's some reason to change that you should leave the lines
that don't need changing untouched.

I reviewed v7 patch.

Thank you for reviewing the patch.


When I ran ANALYZE command to the table having 5 millions rows with 3
child tables, the progress report I got shows the strange result. The
following result was output after sampled all target rows from child
tables (i.g, after finished acquire_inherited_sample_rows). I think
the progress information should report 100% complete at this point.

#= select * from pg_stat_progress_analyze ;
  pid  | datid | datname  | relid |              phase               | num_target_sample_rows | num_rows_sampled
-------+-------+----------+-------+----------------------------------+------------------------+------------------
 81719 | 13215 | postgres | 16440 | collecting inherited sample rows |                    300 |              180
(1 row)

I guess the cause of this is that you always count the number of
sampled rows in acquire_sample_rows starting from 0 for each child
table. num_rows_sampled is reset whenever starting collecting sample
rows.
Also, even if table has child table, the total number of sampling row
is not changed. I think that main differences between analyzing on
normal table and on partitioned table is where we fetch the sample row
from. So I'm not sure if adding "computing inherited statistics" phase
is desirable. If you want to report progress of collecting sample rows
on child table I think that you might want to add the information
showing which child table is being analyzed.

Yes. I think showing child table information would be good to user/DBA.

--
pg_stat_progress_analyze.num_target_sample_rows is set without regard to
the number of rows the target table actually has. So if the table has
fewer rows than the target number of rows, num_rows_sampled never
reaches num_target_sample_rows.

--
@@ -1180,6 +1213,9 @@ acquire_sample_rows(Relation onerel, int elevel,
 }

 samplerows += 1;
+
+   /* Report number of rows sampled so far */
+
pgstat_progress_update_param(PROGRESS_ANALYZE_NUM_ROWS_SAMPLED,
numrows);
 }
 }

I think it's wrong to use the numrows variable to report the progress
of the number of rows sampled. As the following comment mentions, we
first save rows into rows[] and increment numrows until numrows reaches
targrows. After that we may still replace stored sample rows with newly
sampled rows. That is, after collecting "numrows" rows, analyze can
still take a long time to fetch and replace sample rows. To compute the
progress of collecting sample rows, IMO reporting the number of blocks
we have scanned so far is better than the number of sample rows.
Thoughts?


 /*
  * The first targrows sample rows are simply copied into the
  * reservoir. Then we start replacing tuples in the sample
  * until we reach the end of the relation.  This algorithm is
  * from Jeff Vitter's paper (see full citation below). It
  * works by repeatedly computing the number of tuples to skip
  * before selecting a tuple, which replaces a randomly chosen
  * element of the reservoir (current set of tuples).  At all
  * times the reservoir is a true random sample of the tuples
  * we've passed over so far, so when we fall off the end of
  * the relation we're done.
  */
I think we can either report number of blocks scanned so far or number 
of sample rows.
But I agree with you that reporting the number of blocks scanned so far 
would be better than reporting number of sample rows.

I understood that we could not change the definition of
AcquireSampleRowsFunc without updating the documentation in fdwhandler.sgml.
In the last patch I modified the definition of AcquireSampleRowsFunc to
handle the inheritance case.
If the table being analyzed has one or more children, ANALYZE will gather
statistics twice:
once on the rows of the parent table only, and a second time on the rows of
the parent table with all of its children.

Re: [HACKERS] ANALYZE command progress checker

2017-03-29 Thread vinayak


On 2017/03/25 4:30, Robert Haas wrote:

On Fri, Mar 24, 2017 at 3:41 AM, vinayak
<pokale_vinayak...@lab.ntt.co.jp> wrote:

I have updated the patch.

You can't change the definition of AcquireSampleRowsFunc without
updating the documentation in fdwhandler.sgml, but I think I don't
immediately understand why that's a thing we want to do at all if
neither file_fdw nor postgres_fdw are going to use the additional
argument.  It seems to be entirely pointless because the new value
being passed down is always zero and even if the callee were to update
it, do_analyze_rel() would just ignore the change on return.  Am I
missing something here?  Also, the whitespace-only changes to fdwapi.h
related to AcquireSampleRowsFunc going to get undone by pgindent, so
even if there's some reason to change that you should leave the lines
that don't need changing untouched.
I understood that we could not change the definition of
AcquireSampleRowsFunc without updating the documentation in fdwhandler.sgml.
In the last patch I modified the definition of
AcquireSampleRowsFunc to handle the inheritance case.
If the table being analyzed has one or more children, ANALYZE will
gather statistics twice:
once on the rows of the parent table only, and a second time on the rows of
the parent table with all of its children.
So while reporting progress, the number of target sample rows and the
number of rows sampled do not match.

For single relation it works fine.

In the attached patch I have not change the definition of 
AcquireSampleRowsFunc.

I have updated inheritance case in the the documentation.



+/* Reset rows processed to zero for the next column */
+pgstat_progress_update_param(PROGRESS_ANALYZE_NUM_ROWS_PROCESSED,
0);

This seems like a bad design.  If you see a value other than zero in
this field, you still won't know how much progress analyze has made,
because you don't know which column you're processing.  I also feel
like you're not paying enough attention to a key point here that I
made in my last review, which is that you need to look at where
ANALYZE actually spends most of the time.  If the process of computing
the per-row statistics consumes significant CPU time, then maybe
there's some point in trying to instrument it, but does it?  Unless
you know the answer to that question, you don't know whether there's
even a point to trying to instrument this.

Understood. The computing statistics phase takes long time so I am 
looking at the code.


Regards,
Vinayak Pokale
NTT Open Source Software Center


pg_stat_progress_analyze_v7.patch
Description: binary/octet-stream

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ANALYZE command progress checker

2017-03-24 Thread vinayak


On 2017/03/23 15:04, Haribabu Kommi wrote:



On Wed, Mar 22, 2017 at 8:11 PM, vinayak <pokale_vinayak...@lab.ntt.co.jp> wrote:



On 2017/03/21 21:25, Haribabu Kommi wrote:



On Tue, Mar 21, 2017 at 3:41 PM, vinayak <pokale_vinayak...@lab.ntt.co.jp> wrote:

Thank you for testing the patch on Windows platform.


Thanks for the updated patch.

It works well for a normal relation.  But for a relation that
contains child tables,
the PROGRESS_ANALYZE_NUM_ROWS_SAMPLED produces wrong results.


Thank you for reviewing the patch.
The attached patch implements a way to report sample rows count
from acquire_sample_rows() even if called for child tables.

How about adding another phase called
PROGRESS_ANALYZE_PHASE_COLLECT_INHERIT_SAMPLE_ROWS
and set this phase only when it is an inheritance analyze
operation. And adding
some explanation of ROWS_SAMPLED phase about inheritance tables
how these sampled rows are calculated will provide good analyze
progress of
relation that contains child relations also.

I have added the phase called
PROGRESS_ANALYZE_PHASE_COLLECT_INH_SAMPLE_ROWS.
I have also updated the documentation.


Thanks for the updated patch. I will check it.

The ANALYZE command takes a long time in the computing statistics
phase, so I think we can add some column or phase so that the user can
easily understand the progress.
How about adding a new column like "num_rows_processed" showing how many
rows have been processed while computing the statistics of the specified
column?


I prefer a column with rows processed instead of a phase.
Because we already set the phase of compute stats and showing
the progress there will number of rows processed will be good.

How about separating the computing "inheritance statistics" phase
from computing the regular "single table" statistics?
Comments?


Yes, it will be good to show both states, collecting inherited sample rows
and computing inherited stats, so that the current status is clearly
visible to the user.

I have updated the patch.
I have added column "num_rows_processed" and phase "computing inherited 
statistics" in the view.


=# \d+ pg_stat_progress_analyze
                       View "pg_catalog.pg_stat_progress_analyze"
         Column         |  Type   | Collation | Nullable | Default | Storage  | Description
------------------------+---------+-----------+----------+---------+----------+-------------
 pid                    | integer |           |          |         | plain    |
 datid                  | oid     |           |          |         | plain    |
 datname                | name    |           |          |         | plain    |
 relid                  | oid     |           |          |         | plain    |
 phase                  | text    |           |          |         | extended |
 num_target_sample_rows | bigint  |           |          |         | plain    |
 num_rows_sampled       | bigint  |           |          |         | plain    |
 num_rows_processed     | bigint  |           |          |         | plain    |

View definition:
 SELECT s.pid,
    s.datid,
    d.datname,
    s.relid,
        CASE s.param1
            WHEN 0 THEN 'initializing'::text
            WHEN 1 THEN 'collecting sample rows'::text
            WHEN 2 THEN 'collecting inherited sample rows'::text
            WHEN 3 THEN 'computing statistics'::text
            WHEN 4 THEN 'computing inherited statistics'::text
            ELSE NULL::text
        END AS phase,
    s.param2 AS num_target_sample_rows,
    s.param3 AS num_rows_sampled,
    s.param4 AS num_rows_processed
   FROM pg_stat_get_progress_info('ANALYZE'::text) s(pid, datid, relid, param1, param2, param3, param4, param5, param6, param7, param8, param9, param10)
     LEFT JOIN pg_database d ON s.datid = d.oid;
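
For reference, a rough way to watch this proposed view from psql (the
percentage is only meaningful within the sampling phase, and the column
names are the ones defined above):

SELECT pid,
       phase,
       num_rows_sampled,
       num_target_sample_rows,
       round(100.0 * num_rows_sampled
                   / nullif(num_target_sample_rows, 0), 1) AS pct_sampled,
       num_rows_processed
  FROM pg_stat_progress_analyze;

Repeating it with psql's \watch makes the phase transitions easy to follow.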

Regards,
Vinayak Pokale
NTT Open Source Software Center


pg_stat_progress_analyze_v6.patch
Description: binary/octet-stream

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ANALYZE command progress checker

2017-03-22 Thread vinayak


On 2017/03/21 21:25, Haribabu Kommi wrote:



On Tue, Mar 21, 2017 at 3:41 PM, vinayak <pokale_vinayak...@lab.ntt.co.jp> wrote:


Thank you for testing the patch on Windows platform.


Thanks for the updated patch.

It works well for a normal relation.  But for a relation that contains
child tables,

the PROGRESS_ANALYZE_NUM_ROWS_SAMPLED produces wrong results.


Thank you for reviewing the patch.
The attached patch implements a way to report sample rows count from 
acquire_sample_rows() even if called for child tables.
How about adding another phase called 
PROGRESS_ANALYZE_PHASE_COLLECT_INHERIT_SAMPLE_ROWS
and set this phase only when it is an inheritance analyze operation. 
And adding

some explanation of ROWS_SAMPLED phase about inheritance tables
how these sampled rows are calculated will provide good analyze 
progress of

relation that contains child relations also.
I have added the phase called 
PROGRESS_ANALYZE_PHASE_COLLECT_INH_SAMPLE_ROWS.

I have also updated the documentation.

The ANALYZE command takes a long time in the computing statistics phase, so I
think we can add some column or phase so that the user can easily understand
the progress.
How about adding a new column like "num_rows_processed" showing how many rows
have been processed while computing the statistics of the specified column?
How about separating the computing "inheritance statistics" phase from
computing the regular "single table" statistics?

Comments?

Regards,
Vinayak Pokale
NTT Open Source Software Center


pg_stat_progress_analyze_v5.patch
Description: binary/octet-stream

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ANALYZE command progress checker

2017-03-20 Thread vinayak

Hi Ashutosh,

On 2017/03/19 17:56, Ashutosh Sharma wrote:

Hi,

I didn't find any major issues with the patch. It works as expected.
However, I have few minor comments that I would like to share,

+ 
+   Total number of sample rows. The sample it reads is taken randomly.
+   Its size depends on the default_statistics_target
parameter value.
+ 

1) I think it would be better if you could specify reference link to
the guc variable 'default_statistics_target'. Something like this,

.

If you go through monitoring.sgml, you would find that there is
reference link to all guc variables being used.

+1. Updated in the attached patch.


2) I feel that the 'computing_statistics' phase could have been
split into 'computing_column_stats' and 'computing_index_stats'.
Please let me know your thoughts on this.

Initially I had split this phase into 'computing heap stats' and
'computing index stats', but
I agreed with Robert's suggestion to merge the two phases into one as
'computing statistics'.

+   certain commands during command execution.  Currently, the command
+   which supports progress reporting is VACUUM and
ANALYZE.  This may be
 expanded in the future.

3) I think above needs to be rephrased. Something like...Currently,
the supported progress reporting commands are 'VACUUM' and
'ANALYZE'.

+1. Updated in the attached patch.

Moreover, I also ran your patch on Windows platform and didn't find
any issues. Thanks.

Thank you for testing the patch on Windows platform.

Regards,
Vinayak Pokale
NTT Open Source Software Center


pg_stat_progress_analyze_v4.patch
Description: binary/octet-stream

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ANALYZE command progress checker

2017-03-20 Thread vinayak

Hi Ashutosh,

Thank you for reviewing the patch.

On 2017/03/18 21:00, Ashutosh Sharma wrote:

Hi Vinayak,

Here are couple of review comments that may need your attention.

1. Firstly, I am seeing some trailing whitespace errors when trying to
apply your v3 patch using git apply command.

[ashu@localhost postgresql]$ git apply pg_stat_progress_analyze_v3.patch
pg_stat_progress_analyze_v3.patch:155: trailing whitespace.
CREATE VIEW pg_stat_progress_analyze AS
pg_stat_progress_analyze_v3.patch:156: trailing whitespace.
 SELECT
pg_stat_progress_analyze_v3.patch:157: trailing whitespace.
 S.pid AS pid, S.datid AS datid, D.datname AS datname,
pg_stat_progress_analyze_v3.patch:158: trailing whitespace.
 S.relid AS relid,
pg_stat_progress_analyze_v3.patch:159: trailing whitespace.
 CASE S.param1 WHEN 0 THEN 'initializing'
error: patch failed: doc/src/sgml/monitoring.sgml:521
I have tried to apply patch using "git apply" and it works fine in my 
environment.

I use below command to apply the patch.
patch -p1 < pg_stat_progress_analyze_v3.patch

2) The test case 'rules' is failing.  I think it's failing because in
rules.sql 'ORDER BY viewname' is used, which will put
'pg_stat_progress_analyze' ahead of 'pg_stat_progress_vacuum' in the
list of catalog views. You may need to correct rules.out file
accordingly. This is what i could see in rules.sql,

SELECT viewname, definition FROM pg_views WHERE schemaname <>
'information_schema' ORDER BY viewname;

I am still reviewing your patch and doing some testing. Will update if
i find any issues. Thanks.

Understood. I have fixed it.

Regards,
Vinayak Pokale
NTT Open Source Software Center


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ANALYZE command progress checker

2017-03-17 Thread vinayak


On 2017/03/17 10:38, Robert Haas wrote:

On Fri, Mar 10, 2017 at 2:46 AM, vinayak
<pokale_vinayak...@lab.ntt.co.jp> wrote:

Thank you for reviewing the patch.

The attached patch incorporated Michael and Amit comments also.

I reviewed this tonight.

Thank you for reviewing the patch.

+/* Report compute index stats phase */
+pgstat_progress_update_param(PROGRESS_ANALYZE_PHASE,
+
PROGRESS_ANALYZE_PHASE_COMPUTE_INDEX_STATS);

Hmm, is there really any point to this?  And is it even accurate?  It
doesn't look to me like we are computing any index statistics here;
we're only allocating a few in-memory data structures here, I think,
which is not a statistics computation and probably doesn't take any
significant time.

+/* Report compute heap stats phase */
+pgstat_progress_update_param(PROGRESS_ANALYZE_PHASE,
+PROGRESS_ANALYZE_PHASE_COMPUTE_HEAP_STATS);

The phase you label as computing heap statistics also includes the
computation of index statistics.  I wouldn't bother separating the
computation of heap and index statistics; I'd just have a "compute
statistics" phase that begins right after the comment that starts
"Compute the statistics."

Understood. Fixed in the attached patch.


+/* Report that we are now doing index cleanup */
+pgstat_progress_update_param(PROGRESS_ANALYZE_PHASE,
+PROGRESS_ANALYZE_PHASE_INDEX_CLEANUP);

OK, this doesn't make any sense either.  We are not cleaning up the
indexes here.  We are just closing them and releasing resources.  I
don't see why you need this phase at all; it can't last more than some
tiny fraction of a second.  It seems like you're trying to copy the
exact same phases that exist for vacuum instead of thinking about what
makes sense for ANALYZE.

Understood. I have removed this phase.


+/* Report number of heap blocks scaned so far*/
+pgstat_progress_update_param(PROGRESS_ANALYZE_HEAP_BLKS_SCANNED,
targblock);

While I don't think this is necessarily a bad thing to report, I find
it very surprising that you're not reporting what seems to me like the
most useful thing here - namely, the number of blocks that will be
sampled in total and the number of those that you've selected.
Instead, you're just reporting the length of the relation and the
last-sampled block, which is a less-accurate guide to total progress.
There are enough counters to report both things, so maybe we should.

+/* Report total number of sample rows*/
+pgstat_progress_update_param(PROGRESS_ANALYZE_TOTAL_SAMPLE_ROWS, numrows);

As an alternative to the previous paragraph, yet another thing you
could report is number of rows sampled out of total rows to be
sampled.  But this isn't the way to do it: you are reporting the
number of rows you sampled only once you've finished sampling.  The
point of progress reporting is to report progress -- that is, to
report this value as it's updated, not just to report it when ANALYZE
is practically done and the value has reached its maximum.

Understood.
I have reported the number of rows sampled out of the total rows to be sampled.
I have not reported the number of blocks that will be sampled in total.
So the current pg_stat_progress_analyze view looks like:

=# \d+ pg_stat_progress_analyze
                       View "pg_catalog.pg_stat_progress_analyze"
         Column         |  Type   | Collation | Nullable | Default | Storage  | Description
------------------------+---------+-----------+----------+---------+----------+-------------
 pid                    | integer |           |          |         | plain    |
 datid                  | oid     |           |          |         | plain    |
 datname                | name    |           |          |         | plain    |
 relid                  | oid     |           |          |         | plain    |
 phase                  | text    |           |          |         | extended |
 num_target_sample_rows | bigint  |           |          |         | plain    |
 num_rows_sampled       | bigint  |           |          |         | plain    |

View definition:
 SELECT s.pid,
    s.datid,
    d.datname,
    s.relid,
        CASE s.param1
            WHEN 0 THEN 'initializing'::text
            WHEN 1 THEN 'collecting sample rows'::text
            WHEN 2 THEN 'computing statistics'::text
            ELSE NULL::text
        END AS phase,
    s.param2 AS num_target_sample_rows,
    s.param3 AS num_rows_sampled
   FROM pg_stat_get_progress_info('ANALYZE'::text) s(pid, datid, relid, param1, param2, param3, param4, param5, param6, param7, param8, param9, param10)
     LEFT JOIN pg_database d ON s.datid = d.oid;

Comment?

The documentation for this patch isn't very good, either.  You forgot
to update the part of the documentation that says that VACUUM is the
only command that currently supports progress reporting, and the
descriptions of the ph

Re: [HACKERS] pg_stat_wal_write statistics view

2017-03-15 Thread vinayak


On 2017/03/16 14:46, Haribabu Kommi wrote:



On Thu, Mar 16, 2017 at 4:15 PM, vinayak <pokale_vinayak...@lab.ntt.co.jp> wrote:


On 2017/03/16 10:34, Haribabu Kommi wrote:


Updated patch attached.

The patch looks good to me.

Thanks for the review.

How about rename the view as "pg_stat_walwriter"?


With the name "walwriter" instead of "walwrites", the
user may think that this view displays walwriter process
statistics, whereas it actually shows
the WAL write activity in the instance. For this
reason, I went with the name "walwrites".

Understood. Thanks.


The columns of view :
backend_writes -> backend_wal_writes
writes-> background_wal_writes
write_blocks-> wal_write_blocks
write_time->wal_write_time
sync_time->wal_sync_time


As the view name already contains WAL, I am not sure
whether it is required to include WAL in every column.
I am fine with changing them if others have the same opinion of
adding WAL to the column names.


Ok.

Regards,
Vinayak Pokale
NTT Open Source Software Center


Re: [HACKERS] Transactions involving multiple postgres foreign servers

2017-03-15 Thread Vinayak Pokale
The following review has been posted through the commitfest application:
make installcheck-world:  tested, passed
Implements feature:   tested, passed
Spec compliant:   tested, passed
Documentation:tested, passed

I have tested the latest patch and it looks good to me,
so I marked it "Ready for committer".
Anyway, it would be great if anyone could also have a look at the patches and 
send comments.

The new status of this patch is: Ready for Committer

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_stat_wal_write statistics view

2017-03-15 Thread vinayak


On 2017/03/16 10:34, Haribabu Kommi wrote:


Updated patch attached.

The patch looks good to me.

How about rename the view as "pg_stat_walwriter"?
The columns of view :
backend_writes -> backend_wal_writes
writes-> background_wal_writes
write_blocks-> wal_write_blocks
write_time->wal_write_time
sync_time->wal_sync_time

Regards,
Vinayak Pokale
NTT Open Source Software Center



Re: [HACKERS] ANALYZE command progress checker

2017-03-09 Thread vinayak

Thank you for reviewing the patch.

The attached patch incorporated Michael and Amit comments also.

On 2017/03/07 15:45, Haribabu Kommi wrote:



On Tue, Mar 7, 2017 at 5:01 PM, Michael Paquier <michael.paqu...@gmail.com> wrote:



@@ -496,7 +499,6 @@ do_analyze_rel(Relation onerel, int options,
VacuumParams *params,
numrows = (*acquirefunc) (onerel, elevel,
  rows, targrows,
  &totalrows, &totaldeadrows);
-
/*
Useless diff.


Fixed.


+ 
+   ANALYZE is currently collecting the sample rows.
+   The sample it reads is taken randomly. Its size depends on
+   the default_statistics_target parameter value.
+ 
This should use a  markup for default_statistics_target.


Fixed.



@@ -203,6 +204,8 @@ analyze_rel(Oid relid, RangeVar *relation, int
options,
if (onerel->rd_rel->relkind == RELKIND_RELATION ||
onerel->rd_rel->relkind == RELKIND_MATVIEW)
{
+   pgstat_progress_start_command(PROGRESS_COMMAND_ANALYZE,
+  RelationGetRelid(onerel));
It seems to me that the report should begin in do_analyze_rel().


Fixed.


some more comments,

+/* Report compute heap stats phase */
+pgstat_progress_update_param(PROGRESS_ANALYZE_PHASE,
+PROGRESS_ANALYZE_PHASE_COMPUTE_HEAP_STATS);

The above analyze phase is updated inside a for loop; instead, just set
it once above the loop.

Fixed.


+ /* Report compute index stats phase */
+ pgstat_progress_update_param(PROGRESS_ANALYZE_PHASE,
+ PROGRESS_ANALYZE_PHASE_COMPUTE_INDEX_STATS);

Irrespective of whether an index exists on the table or not, the
above analyze phase
is set. It is better to set the above phase and the index cleanup phase
only when there
are indexes on the table.


Agreed. Fixed.

+/* Report total number of heap blocks and collectinf sample row phase*/

Typo? collecting?


Fixed.


+/* Report total number of heap blocks and collectinf sample row phase*/
+initprog_val[0] = PROGRESS_ANALYZE_PHASE_COLLECT_HEAP_SAMPLE_ROWS;
+initprog_val[1] = totalblocks;
+pgstat_progress_update_multi_param(2, initprog_index, initprog_val);
acquire_sample_rows function is called from acquire_inherited_sample_rows
function, so adding the phase in that function will provide wrong info.


I agree with you.


+#define PROGRESS_ANALYZE_PHASE_COLLECT_INH_SAMPLE_ROWS2

why there is no code added for the phase, any specific reason?

I am thinking how to report this phase. Do you have any suggestion?


+/* Phases of analyze */

Can be written as following for better understanding, and also
similar like vacuum.

/* Phases of analyze (as advertised via PROGRESS_ANALYZE_PHASE) */


Done.

Regards,
Vinayak Pokale
NTT Open Source Software Center


pg_stat_progress_analyze_v2.patch
Description: binary/octet-stream

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ANALYZE command progress checker

2017-03-06 Thread vinayak


On 2017/03/06 17:02, Amit Langote wrote:

Hi Vinayak,

On 2017/02/28 18:24, vinayak wrote:

The attached patch reports the different phases of analyze command.
Added this patch to CF 2017-03.

In the updated monitoring.sgml:

+
+ computing heap stats
+ 
+   VACUUM is currently computing heap stats.
+ 
+
+
+ computing index stats
+ 
+   VACUUM is currently computing index stats.
+ 
+
+
+ cleaning up indexes
+ 
+   ANALYZE is currently cleaning up indexes.
+ 
+
+   
+   
+  

The entries mentioning VACUUM should actually say ANALYZE.

Yes. Thank you.
I will fix it.

Regards,
Vinayak Pokale
NTT Open Source Software Center


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Transactions involving multiple postgres foreign servers

2017-03-01 Thread vinayak


On 2017/02/28 16:54, Masahiko Sawada wrote:


I've created a wiki page[1] describing the design and
functionality of this feature. It also has some example use cases,
so this page would be helpful even for testing. Please refer to it if
you're interested in testing this feature.

[1] 2PC on FDW
<https://wiki.postgresql.org/wiki/2PC_on_FDW>

Thank you for creating the wiki page.

In the "src/test/regress/pg_regress.c" file
-	 * xacts.  (Note: to reduce the probability of unexpected shmmax
-	 * failures, don't set max_prepared_transactions any higher than
-	 * actually needed by the prepared_xacts regression test.)
+	 * xacts. We also set max_fdw_transctions to enable testing of atomic
+	 * foreign transactions. (Note: to reduce the probability of unexpected
+	 * shmmax failures, don't set max_prepared_transactions or
+	 * max_prepared_foreign_transactions any higher than actually needed by the
+	 * corresponding regression tests.).

I think we are not setting "max_fdw_transctions" anywhere.
Is this correct?

In the "src/bin/pg_waldump/rmgrdesc.c" file following header file used 
two times.

+ #include "access/fdw_xact.h"
I think we need to remove one line.

Regards,
Vinayak Pokale



[HACKERS] ANALYZE command progress checker

2017-02-28 Thread vinayak

Hello Hackers,

Following is a proposal for reporting the progress of ANALYZE command:

It seems that the following could be the phases of ANALYZE processing:
1. Collecting sample rows
2. Collecting inherited sample rows
3. Computing heap stats
4. Computing index stats
5. Cleaning up indexes

The first phase is easy if there is no inheritance, but with inheritance we
need to sample blocks from multiple heaps.

Here the progress is counted against the total number of blocks processed.

The view provides the details of ANALYZE command progress as follows:

postgres=# \d pg_stat_progress_analyze
  View "pg_catalog.pg_stat_progress_analyze"
      Column       |  Type   | Collation | Nullable | Default
-------------------+---------+-----------+----------+---------
 pid               | integer |           |          |
 datid             | oid     |           |          |
 datname           | name    |           |          |
 relid             | oid     |           |          |
 phase             | text    |           |          |
 heap_blks_total   | bigint  |           |          |
 heap_blks_scanned | bigint  |           |          |
 total_sample_rows | bigint  |           |          |

I feel this view may be useful in checking the progress of a long-running
ANALYZE command.
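For illustration, a long-running ANALYZE could then be watched from another
session with a query like the one below. This is only a sketch against the
proposed column layout above; the join to pg_class and the percentage
calculation are just for readability and are not part of the patch.

SELECT p.pid,
       p.datname,
       c.relname,
       p.phase,
       p.heap_blks_scanned,
       p.heap_blks_total,
       round(100.0 * p.heap_blks_scanned / NULLIF(p.heap_blks_total, 0), 1)
           AS heap_pct_scanned
  FROM pg_stat_progress_analyze p
  LEFT JOIN pg_class c ON c.oid = p.relid;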



The attached patch reports the different phases of analyze command.
Added this patch to CF 2017-03.

Opinions?

Note: The "collecting inherited sample rows" phase is not reported yet in the
patch.


Regards,
Vinayak Pokale
NTT Open Source Software Center


pg_stat_progress_analyze_v1.patch
Description: binary/octet-stream



[HACKERS] Postgres_fdw behaves oddly

2017-02-03 Thread vinayak
29:18.857 JST [3081] DETAIL:  Key (val)=(1) already exists.
2017-02-03 15:29:18.857 JST [3081] STATEMENT:  COMMIT TRANSACTION
2017-02-03 15:29:18.858 JST [3075] ERROR:  duplicate key value violates 
unique constraint "lt_val_key"

2017-02-03 15:29:18.858 JST [3075] DETAIL:  Key (val)=(1) already exists.
2017-02-03 15:29:18.858 JST [3075] CONTEXT:  Remote SQL command: COMMIT 
TRANSACTION

2017-02-03 15:29:18.858 JST [3075] STATEMENT:  COMMIT;
2017-02-03 15:29:18.858 JST [3081] WARNING:  there is no transaction in 
progress

WARNING:  there is no transaction in progress
ERROR:  duplicate key value violates unique constraint "lt_val_key"
DETAIL:  Key (val)=(1) already exists.
CONTEXT:  Remote SQL command: COMMIT TRANSACTION
postgres=#
postgres=# SELECT * FROM lt;
 val
-
   1
   3
   4
(3 rows)
Test 5:
=======
In a transaction, insert two rows, one into each of the two foreign tables.
Both rows violate the constraint. Here an error is expected at COMMIT time,
but the transaction does not give any error and instead blocks on a lock,
waiting for another transaction to finish.

postgres=# BEGIN;
BEGIN
postgres=# INSERT INTO ft1_lt VALUES (3); -- Violates constraint
INSERT 0 1
postgres=# INSERT INTO ft2_lt VALUES (3); -- Violates constraint
INSERT 0 1
postgres=# COMMIT;
.
.
.

postgres=# select datid,datname,pid,wait_event_type,wait_event,query 
from pg_stat_activity;
-[ RECORD 1 ]---+--------------------
datid           | 13123
datname         | postgres
pid             | 3654
wait_event_type | Lock
wait_event      | transactionid
query           | COMMIT TRANSACTION

Note: Test 4 and Test 5 are the same, but in Test 5 both foreign servers try
to insert the same data.
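As a side note, one way to see which backend the hanging COMMIT is waiting on
is a query like the following (just a sketch; it assumes PostgreSQL 9.6 or
later so that pg_blocking_pids() is available):

postgres=# SELECT pid,
                  pg_blocking_pids(pid) AS blocked_by,
                  wait_event_type,
                  wait_event,
                  query
             FROM pg_stat_activity
            WHERE wait_event_type = 'Lock';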


Is this the expected behavior of postgres_fdw?

Regards,
Vinayak Pokale

NTT Open Source Software Center



Re: [HACKERS] Transactions involving multiple postgres foreign servers

2017-01-29 Thread vinayak


On 2017/01/29 0:11, Peter Eisentraut wrote:

On 1/26/17 4:49 AM, Masahiko Sawada wrote:

Sorry, I attached wrong version patch of pg_fdw_xact_resovler. Please
use attached patch.

So in some other thread we are talking about renaming "xlog", because
nobody knows what the "x" means.  In the spirit of that, let's find
better names for new functions as well.

+1

Regards,
Vinayak Pokale
NTT Open Source Software Center





Re: [HACKERS] Transactions involving multiple postgres foreign servers

2017-01-26 Thread vinayak

Hi Sawada-san,

On 2017/01/26 16:51, Masahiko Sawada wrote:

Thank you for reviewing!

I think this is a bug of pg_fdw_resolver contrib module. I had
forgotten to change the SQL executed by pg_fdw_resolver process.
Attached latest version 002 patch.


As previous version patch conflicts to current HEAD, attached updated
version patches. Also I fixed some bugs in pg_fdw_xact_resolver and
added some documentations.
Please review it.

Thank you updating the patches.

I have applied patches on Postgres HEAD.
I have created the postgres_fdw extension in PostgreSQL and then I got a
segmentation fault.

Details:
=# 2017-01-26 17:52:56.156 JST [3411] LOG:  worker process: foreign 
transaction resolver launcher (PID 3418) was terminated by signal 11: 
Segmentation fault
2017-01-26 17:52:56.156 JST [3411] LOG:  terminating any other active 
server processes
2017-01-26 17:52:56.156 JST [3425] WARNING:  terminating connection 
because of crash of another server process
2017-01-26 17:52:56.156 JST [3425] DETAIL:  The postmaster has commanded 
this server process to roll back the current transaction and exit, 
because another server process exited abnormally and possibly corrupted 
shared memory.
2017-01-26 17:52:56.156 JST [3425] HINT:  In a moment you should be able 
to reconnect to the database and repeat your command.


Is this a bug?

Regards,
Vinayak Pokale
NTT Open Source Software Center


Re: [HACKERS] New SQL counter statistics view (pg_stat_sql)

2017-01-19 Thread vinayak


On 2017/01/18 14:15, Haribabu Kommi wrote:



On Sat, Jan 14, 2017 at 2:58 AM, Dilip Kumar <dilipbal...@gmail.com 
<mailto:dilipbal...@gmail.com>> wrote:


On Wed, Jan 11, 2017 at 7:47 PM, Dilip Kumar
<dilipbal...@gmail.com <mailto:dilipbal...@gmail.com>> wrote:
> +void
> +pgstat_count_sqlstmt(const char *commandTag)
> +{
> + PgStat_SqlstmtEntry *htabent;
> + bool found;
> +
> + if (!pgstat_track_sql)
> + return
>
> Callers of pgstat_count_sqlstmt are always ensuring that
> pgstat_track_sql is true, seems it's redundant here.

I have done testing of the feature, it's mostly working as per the
expectation.


Thanks for the review and test.

The use case for this view is to find out the number of different SQL 
statements
that are getting executed successfully on an instance by the 
application/user.


I have few comments/questions.


If you execute the query with EXECUTE then commandTag will be EXECUTE
that way we will not show the actual query type, I mean all the
statements will get the common tag "EXECUTE".

You can try this.
PREPARE fooplan (int) AS INSERT INTO t VALUES($1);
EXECUTE fooplan(1);

--


Yes, that's correct. Currently the count is incremented based on the base
command; for this reason, the EXECUTE count is increased rather than the
count for the actual operation.
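For illustration, the sequence below shows the behavior described above
(hypothetical table t, assuming the patch is applied and track_sql is on):

SET track_sql TO on;
CREATE TABLE t (c1 int);
PREPARE fooplan (int) AS INSERT INTO t VALUES ($1);
EXECUTE fooplan(1);
SELECT tag, count FROM pg_stat_sql WHERE tag IN ('EXECUTE', 'INSERT');
-- per the discussion, only the EXECUTE row is incremented, not INSERT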


+ /* Destroy the SQL statement counter stats HashTable */
+ hash_destroy(pgstat_sql);
+
+ /* Create SQL statement counter Stats HashTable */

I think in the patch we have used a mix of "Stats HashTable" and
"stats HashTable"; it would be better to be consistent throughout the patch.
Check other similar instances.




Corrected.


@@ -1102,6 +1102,12 @@ exec_simple_query(const char *query_string)

  PortalDrop(portal, false);

+ /*
+ * Count SQL statement for pg_stat_sql view
+ */
+ if (pgstat_track_sql)
+ pgstat_count_sqlstmt(commandTag);

We are only counting the successful SQL statement, Is that
intentional?


Yes, I thought of counting the successful SQL operations that changed the
database over a period of time. I am not sure there would be many failed
operations on an instance worth counting.

--
I have noticed that pgstat_count_sqlstmt is called from
exec_simple_query and exec_execute_message. Don't we want to track the
statement executed from functions?
---

The view is currently designed to count user/application-initiated SQL
statements, but not the internal SQL queries that are executed from
functions, etc.


>>+void
>>+pgstat_count_sqlstmt(const char *commandTag)
>>+{
>>+ PgStat_SqlstmtEntry *htabent;
>>+ bool found;
>>+
>>+ if (!pgstat_track_sql)
>>+ return
>
>Callers of pgstat_count_sqlstmt are always ensuring that
>pgstat_track_sql is true, seems it's redundant here.

Removed the check in the pgstat_count_sqlstmt function and added it to the
exec_execute_message function, where the check was missing.

Updated patch attached.

I have reviewed the patch. All the test cases work fine.

Regards,
Vinayak Pokale
NTT Open Source Software Center


Re: [HACKERS] Transactions involving multiple postgres foreign servers

2017-01-18 Thread vinayak


On 2017/01/16 17:35, Masahiko Sawada wrote:

On Fri, Jan 13, 2017 at 3:48 PM, Masahiko Sawada <sawada.m...@gmail.com> wrote:

On Fri, Jan 13, 2017 at 3:20 PM, Ashutosh Bapat
<ashutosh.ba...@enterprisedb.com> wrote:

Long time passed since original patch proposed by Ashutosh, so I
explain again about current design and functionality of this feature.
If you have any question, please feel free to ask.

Thanks for the summary.


Parameters
==

[ snip ]


Cluster-wide atomic commit
===
Since the distributed transaction commit on foreign servers are
executed independently, the transaction that modified data on the
multiple foreign servers is not ensured that transaction did either
all of them commit or all of them rollback. The patch adds the
functionality that guarantees distributed transaction did either
commit or rollback on all foreign servers. IOW the goal of this patch
is achieving the cluster-wide atomic commit across foreign server that
is capable two phase commit protocol.

In [1], I proposed that we solve the problem of supporting PREPARED
transactions involving foreign servers and in subsequent mail Vinayak
agreed to that. But this goal has wider scope than that proposal. I am
fine widening the scope, but then it would again lead to the same
discussion we had about the big picture. May be you want to share
design (or point out the parts of this design that will help) for
solving smaller problem and tone down the patch for the same.


Sorry for confuse you. I'm still focusing on solving only that
problem. What I was trying to say is that I think that supporting
PREPARED transaction involving foreign server is the means, not the
end. So once we supports PREPARED transaction involving foreign
servers we can achieve cluster-wide atomic commit in a sense.


Attached updated patches. I fixed some bugs and add 003 patch that
adds TAP test for foreign transaction.
003 patch depends 000 and 001 patch.

Please give me feedback.


I have tested prepared transactions with foreign servers, but after preparing
the transaction the following error occurs repeatedly.
Test:
=
=#BEGIN;
=#INSERT INTO ft1_lt VALUES (10);
=#INSERT INTO ft2_lt VALUES (20);
=#PREPARE TRANSACTION 'prep_xact_with_fdw';

2017-01-18 15:09:48.378 JST [4312] ERROR:  function pg_fdw_resolve() 
does not exist at character 8
2017-01-18 15:09:48.378 JST [4312] HINT:  No function matches the given 
name and argument types. You might need to add explicit type casts.

2017-01-18 15:09:48.378 JST [4312] QUERY:  SELECT pg_fdw_resolve()
2017-01-18 15:09:48.378 JST [29224] LOG:  worker process: foreign 
transaction resolver (dbid 13119) (PID 4312) exited with exit code 1

.

If we check the status from another session, it shows the transactions as
prepared.

=# select * from pg_fdw_xacts;
 dbid  | transaction | serverid | userid |  status  |       identifier
-------+-------------+----------+--------+----------+------------------------
 13119 |        1688 |    16388 |     10 | prepared | px_2102366504_16388_10
 13119 |        1688 |    16389 |     10 | prepared | px_749056984_16389_10
(2 rows)

I think this is a bug.

Regards,
Vinayak Pokale
NTT Open Source Software Center





Re: [HACKERS] Transactions involving multiple postgres foreign servers

2016-12-08 Thread vinayak

On 2016/12/05 14:42, Ashutosh Bapat wrote:

On Mon, Dec 5, 2016 at 11:04 AM, Haribabu Kommi
<kommi.harib...@gmail.com> wrote:


On Fri, Nov 11, 2016 at 5:38 PM, Masahiko Sawada <sawada.m...@gmail.com>
wrote:


2PC is a basic building block to support the atomic commit and there
are some optimizations way in order to reduce disadvantage of 2PC. As
you mentioned, it's hard to support a single model that would suit
several type of FDWs. But even if it's not a purpose for sharding,
because many other database which could be connected to PostgreSQL via
FDW supports 2PC, 2PC for FDW would be useful for not only sharding
purpose. That's why I was focusing on implementing 2PC for FDW so far.


Moved to next CF with "needs review" status.

I think this should be changed to "returned with feedback.". The
design and approach itself needs to be discussed. I think, we should
let authors decide whether they want it to be added to the next
commitfest or not.

When I first started with this work, Tom had suggested me to try to
make PREPARE and COMMIT/ROLLBACK PREPARED involving foreign servers or
at least postgres_fdw servers work. I think, most of my work that
Vinayak and Sawada have rebased to the latest master will be required
for getting what Tom suggested done. We wouldn't need a lot of changes
to that design. PREPARE involving foreign servers errors out right
now. If we start supporting prepared transactions involving foreign
servers that will be a good improvement over the current status-quo.
Once we get that done, we can continue working on the larger problem
of supporting ACID transactions involving foreign servers.
In the PGConf.Asia developers meeting, Bruce Momjian and other developers
discussed FDW-based sharding [1]. The suggestion from other hackers was that
we need to discuss the big picture and the use cases of sharding. Bruce has
listed all the building blocks of built-in sharding on the wiki [2]. IIUC, a
transaction manager involving foreign servers is one part of sharding.
As per Bruce's wiki page, there are two use cases for transactions involving
multiple foreign servers:

1. Cross-node read-only queries on read/write shards:
This will require a global snapshot manager to make sure the shards 
return consistent data.

2. Cross-node read-write queries:
This will require a global snapshot manager and global transaction 
manager.


I agree with you that if we start supporting PREPARE and COMMIT/ROLLBACK
PREPARED involving foreign servers, that will be a good improvement.

[1] https://wiki.postgresql.org/wiki/PgConf.Asia_2016_Developer_Meeting
[2] https://wiki.postgresql.org/wiki/Built-in_Sharding

Regards,
Vinayak Pokale
NTT Open Source Software Center




[HACKERS] Minor typo in reorderbuffer.c

2016-10-25 Thread vinayak

Hi,

Attached patch fixes a typo in reorderbuffer.c

s/messsage/message/g

Regards,

Vinayak Pokale

NTT Open Source Software Center


typo-reorderbuffer-c.patch
Description: application/download



[HACKERS] Typo in pgstat.c

2016-10-20 Thread vinayak

Hi,

Attached patch fixes a typo in pgstat.c

s/addtions/additions/g

Regards,

Vinayak Pokale

NTT Open Source Software Center



typo-pgstat-c.patch
Description: application/download



[HACKERS] Typo in pgstat.h

2016-10-20 Thread vinayak

Hi,


- * st_progress_command_target, and st_progress_command[].

+ * st_progress_command_target, and st_progress_param[].


Attached patch fixed typo.


Regards,

Vinayak Pokale

NTT Open Source Software Center




typo-pgstat-h.patch
Description: application/download



Re: [HACKERS] New SQL counter statistics view (pg_stat_sql)

2016-10-16 Thread vinayak


On 2016/10/17 10:22, Haribabu Kommi wrote:



On Fri, Oct 14, 2016 at 7:48 PM, vinayak 
<pokale_vinayak...@lab.ntt.co.jp 
<mailto:pokale_vinayak...@lab.ntt.co.jp>> wrote:



On 2016/10/12 12:21, Haribabu Kommi wrote:

I tried changing the pg_stat_sql into row mode and ran the
regress suite to
add different type of SQL commands to the view and ran the
pgbench test
on my laptop to find out any performance impact with this patch.

HEAD  PATCH
pgbench - select  828  816

Here I attached the pg_stat_sql patch to keep track of all SQL
commands
based on the commandTag and their counts. I attached the result
of this
view that is run on the database after "make installcheck" just
for reference.

Some comments:
I think we can use pgstat_* instead of pgstat* for code consistency.

+static HTAB *pgStatBackendSql = NULL;
How about *pgstat_backend_sql

+HTAB   *pgStatSql = NULL;
How about *pgstat_sql


Changed.

+static void pgstat_recv_sqlstmt(PgStat_MsgSqlstmt * msg, int len);
How about PgStat_MsgSqlstmt *msg instead of PgStat_MsgSqlstmt * msg


Added the typedef into the typedefs.list file so this problem never occurs.

+typedef struct PgStatSqlstmtEntry
How about PgStat_SqlstmtEntry

+typedef struct PgStatMsgSqlstmt
How about PgStat_MsgSqlstmt



 changed.

Updated patch is attached.

The updated patch gives the following error.
Error:
../../../../src/include/pgstat.h:557: error: ‘PgStatSqlstmtEntry’ 
undeclared here (not in a function)


-((PGSTAT_MSG_PAYLOAD - sizeof(int)) / sizeof(PgStatSqlstmtEntry))
+((PGSTAT_MSG_PAYLOAD - sizeof(int)) / sizeof(PgStat_SqlstmtEntry))
The attached patch fixed the error.

Regards,
Vinayak Pokale


pg_stat_sql_row_mode_2.patch
Description: application/download



Re: [HACKERS] New SQL counter statistics view (pg_stat_sql)

2016-10-14 Thread vinayak


On 2016/10/12 12:21, Haribabu Kommi wrote:



On Thu, Sep 29, 2016 at 3:45 PM, Haribabu Kommi 
<kommi.harib...@gmail.com <mailto:kommi.harib...@gmail.com>> wrote:




On Thu, Sep 22, 2016 at 3:05 AM, Alvaro Herrera
<alvhe...@2ndquadrant.com <mailto:alvhe...@2ndquadrant.com>> wrote:

Peter Eisentraut wrote:

> How about having the tag not be a column name but a row
entry.  So you'd
> do something like
>
> SELECT * FROM pg_stat_sql WHERE tag = 'ALTER VIEW';
>
> That way, we don't have to keep updating (and re-debating)
this when new
> command types or subtypes are added.  And queries written
for future
> versions will not fail when run against old servers.

Yeah, good idea.


Yes, Having it as a row entry is good.

Let's also discuss the interface from the stats collector. 
Currently we

have some 20 new SQL functions, all alike, each loading the
whole data
and returning a single counter, and then the view invokes each
function
separately.  That doesn't seem great to me.  How about having
a single C
function that returns the whole thing as a SRF instead, and
the view is
just a single function invocation -- something like pg_lock_status
filling pg_locks in one go.

Another consideration is that the present patch lumps together
all ALTER
cases in a single counter.  This isn't great, but at the same
time we
don't want to bloat the stat files by having hundreds of
counters per
database, do we?


Currently, The SQL stats is a fixed size counter to track the all
the ALTER
cases as single counter. So while sending the stats from the
backend to
stats collector at the end of the transaction, the cost is same,
because of
it's fixed size. This approach adds overhead to send and read the
stats
is minimal.

With the following approach, I feel it is possible to support the
counter at
command tag level.

Add a Global and local Hash to keep track of the counters by using the
command tag as the key, this hash table increases dynamically whenever
a new type of SQL command gets executed. The Local Hash data is passed
to stats collector whenever the transaction gets committed.

The problem I am thinking is that, Sending data from Hash and
populating
the Hash from stats file for all the command tags adds some overhead.



I tried changing the pg_stat_sql into row mode and ran the regress 
suite to

add different type of SQL commands to the view and ran the pgbench test
on my laptop to find out any performance impact with this patch.

HEAD  PATCH
pgbench - select  828  816

Here I attached the pg_stat_sql patch to keep track of all SQL commands
based on the commandTag and their counts. I attached the result of this
view that is run on the database after "make installcheck" just for 
reference.

Some comments:
I think we can use pgstat_* instead of pgstat* for code consistency.

+static HTAB *pgStatBackendSql = NULL;
How about *pgstat_backend_sql

+HTAB   *pgStatSql = NULL;
How about *pgstat_sql

+static void pgstat_recv_sqlstmt(PgStat_MsgSqlstmt * msg, int len);
How about PgStat_MsgSqlstmt *msg instead of PgStat_MsgSqlstmt * msg

+typedef struct PgStatSqlstmtEntry
How about PgStat_SqlstmtEntry

+typedef struct PgStatMsgSqlstmt
How about PgStat_MsgSqlstmt

I have observed the following behavior.
I SET track_sql to ON and then executed a SELECT command, and it returned
0 rows.

It starts counting from the second command.
postgres=# SET track_sql TO ON;
SET
postgres=# SELECT * FROM pg_stat_sql where tag='SELECT';
 tag | count
-----+-------
(0 rows)

postgres=# SELECT * FROM pg_stat_sql where tag='SELECT';
  tag   | count
--------+-------
 SELECT |     1
(1 row)
Is this the correct behavior?

Regards,
Vinayak Pokale
NTT Open Source Software Center




Re: [HACKERS] New SQL counter statistics view (pg_stat_sql)

2016-10-11 Thread vinayak



On 2016/10/12 12:21, Haribabu Kommi wrote:



On Thu, Sep 29, 2016 at 3:45 PM, Haribabu Kommi 
<kommi.harib...@gmail.com <mailto:kommi.harib...@gmail.com>> wrote:




On Thu, Sep 22, 2016 at 3:05 AM, Alvaro Herrera
<alvhe...@2ndquadrant.com <mailto:alvhe...@2ndquadrant.com>> wrote:

Peter Eisentraut wrote:

> How about having the tag not be a column name but a row
entry.  So you'd
> do something like
>
> SELECT * FROM pg_stat_sql WHERE tag = 'ALTER VIEW';
>
> That way, we don't have to keep updating (and re-debating)
this when new
> command types or subtypes are added.  And queries written
for future
> versions will not fail when run against old servers.

Yeah, good idea.


Yes, Having it as a row entry is good.

Let's also discuss the interface from the stats collector. 
Currently we

have some 20 new SQL functions, all alike, each loading the
whole data
and returning a single counter, and then the view invokes each
function
separately.  That doesn't seem great to me.  How about having
a single C
function that returns the whole thing as a SRF instead, and
the view is
just a single function invocation -- something like pg_lock_status
filling pg_locks in one go.

Another consideration is that the present patch lumps together
all ALTER
cases in a single counter.  This isn't great, but at the same
time we
don't want to bloat the stat files by having hundreds of
counters per
database, do we?


Currently, The SQL stats is a fixed size counter to track the all
the ALTER
cases as single counter. So while sending the stats from the
backend to
stats collector at the end of the transaction, the cost is same,
because of
it's fixed size. This approach adds overhead to send and read the
stats
is minimal.

With the following approach, I feel it is possible to support the
counter at
command tag level.

Add a Global and local Hash to keep track of the counters by using the
command tag as the key, this hash table increases dynamically whenever
a new type of SQL command gets executed. The Local Hash data is passed
to stats collector whenever the transaction gets committed.

The problem I am thinking is that, Sending data from Hash and
populating
the Hash from stats file for all the command tags adds some overhead.



I tried changing the pg_stat_sql into row mode and ran the regress 
suite to

add different type of SQL commands to the view and ran the pgbench test
on my laptop to find out any performance impact with this patch.

HEAD  PATCH
pgbench - select  828  816

Here I attached the pg_stat_sql patch to keep track of all SQL commands
based on the commandTag and their counts. I attached the result of this
view that is run on the database after "make installcheck" just for 
reference.



Thank you for the patch.

Test: Commands with uppercase and lowercase

If tag='select' then it returns 0 rows, but the count is actually
incremented by 1.


tag='select' vs tag='SELECT'

postgres=# SET track_sql TO ON;
SET
postgres=# SELECT * FROM pg_stat_sql where tag='SELECT';
  tag   | count
--------+-------
 SELECT |    12
(1 row)

postgres=# SELECT * FROM pg_stat_sql where tag='SELECT';
  tag   | count
--------+-------
 SELECT |    13
(1 row)

postgres=# SELECT * FROM pg_stat_sql where tag='select';
 tag | count
-----+-------
(0 rows)

postgres=# SELECT * FROM pg_stat_sql where tag='SELECT';
  tag   | count
--------+-------
 SELECT |    15
(1 row)

I think all commands work the same as above.

Regards,
Vinayak Pokale
NTT Open Source Software Center


Re: [HACKERS] Transactions involving multiple postgres foreign servers

2016-09-26 Thread vinayak


On 2016/09/07 10:54, vinayak wrote:


Thanks for the clarification. I had added pg_fdw_xact_resolver() to 
resolve any transactions which can not be resolved immediately after 
they were prepared. There was a comment from Kevin (IIRC) that 
leaving transactions unresolved on the foreign server keeps the 
resources locked on those servers. That's not a very good situation. 
And nobody but the initiating server can resolve those. That 
functionality is important to make it a complete 2PC solution. So, 
please consider it to be included in your first set of patches.

The attached patch included pg_fdw_xact_resolver.


The attached patch includes the documentation.

Regards,
Vinayak Pokale
NTT Open Source Software Center


0001-Support-transaction-with-foreign-servers.patch
Description: application/download



Re: [HACKERS] Transactions involving multiple postgres foreign servers

2016-09-06 Thread vinayak



On 2016/08/26 15:13, Ashutosh Bapat wrote:



On Fri, Aug 26, 2016 at 11:37 AM, Masahiko Sawada 
<sawada.m...@gmail.com <mailto:sawada.m...@gmail.com>> wrote:


On Fri, Aug 26, 2016 at 3:03 PM, Ashutosh Bapat
<ashutosh.ba...@enterprisedb.com
<mailto:ashutosh.ba...@enterprisedb.com>> wrote:
>
>
> On Fri, Aug 26, 2016 at 11:22 AM, Masahiko Sawada
<sawada.m...@gmail.com <mailto:sawada.m...@gmail.com>>
> wrote:
>>
>> On Fri, Aug 26, 2016 at 1:32 PM, Vinayak Pokale
<vinpok...@gmail.com <mailto:vinpok...@gmail.com>>
>> wrote:
>> > Hi All,
>> >
>> > Ashutosh proposed the feature 2PC for FDW for achieving
atomic commits
>> > across multiple foreign servers.
>> > If a transaction make changes to more than two foreign
servers the
>> > current
>> > implementation in postgres_fdw doesn't make sure that either
all of them
>> > commit or all of them rollback their changes.
>> >
>> > We (Masahiko Sawada and me) reopen this thread and trying to
contribute
>> > in
>> > it.
>> >
>> > 2PC for FDW
>> > 
>> > The patch provides support for atomic commit for transactions
involving
>> > foreign servers. when the transaction makes changes to
foreign servers,
>> > either all the changes to all the foreign servers commit or
rollback.
>> >
>> > The new patch 2PC for FDW include the following things:
>> > 1. The patch 0001 introduces a generic feature. All kinds of
FDW that
>> > support 2PC such as oracle_fdw, mysql_fdw, postgres_fdw etc.
can involve
>> > in
>> > the transaction.
>> >
>> > Currently we can push some conditions down to shard nodes,
especially in
>> > 9.6
>> > the directly modify feature has
>> > been introduced. But such a transaction modifying data on
shard node is
>> > not
>> > executed surely.
>> > Using 0002 patch, that modify is executed with 2PC. It means
that we
>> > almost
>> > can provide sharding solution using
>> > multiple PostgreSQL server (one parent node and several
shared node).
>> >
>> > For multi master, we definitely need transaction manager but
transaction
>> > manager probably can use this 2PC for FDW feature to manage
distributed
>> > transaction.
>> >
>> > 2. 0002 patch makes postgres_fdw possible to use 2PC.
>> >
>> > 0002 patch makes postgres_fdw to use below APIs. These APIs
are generic
>> > features which can be used by all kinds of FDWs.
>> >
>> > a. Execute PREAPRE TRANSACTION and COMMIT/ABORT PREAPRED
instead of
>> > COMMIT/ABORT on foreign server which supports 2PC.
>> > b. Manage information of foreign prepared transactions
resolver
>> >
>> > Masahiko Sawada will post the patch.
>> >
>> >
>>
>
> Thanks Vinayak and Sawada-san for taking this forward and basing
your work
> on my patch.
>
>>
>> Still lot of work to do but attached latest patches.
>> These are based on the patch Ashutosh posted before, I revised
it and
>> divided into two patches.
>> Compare with original patch, patch of pg_fdw_xact_resolver and
>> documentation are lacked.
>
>
> I am not able to understand the last statement.

Sorry to confuse you.

> Do you mean to say that your patches do not have
pg_fdw_xact_resolver() and
> documentation that my patches had?

Yes.
I'm confirming them that your patches had.


Thanks for the clarification. I had added pg_fdw_xact_resolver() to 
resolve any transactions which can not be resolved immediately after 
they were prepared. There was a comment from Kevin (IIRC) that leaving 
transactions unresolved on the foreign server keeps the resources 
locked on those servers. That's not a very good situation. And nobody 
but the initiating server can resolve those. That functionality is 
important to make it a complete 2PC solution. So, please consider it 
to be included in your first set of patches.

The attached patch included pg_fdw_xact_resolver.

Regards,
Vinayak Pokale
NTT Open Source Software Center



0003-pg-fdw-xact-resolver.patch
Description: application/download



Re: [HACKERS] Transactions involving multiple postgres foreign servers

2016-08-25 Thread Vinayak Pokale
Hi All,

Ashutosh proposed the feature 2PC for FDW for achieving atomic commits
across multiple foreign servers.
If a transaction makes changes to more than two foreign servers, the current
implementation in postgres_fdw doesn't make sure that either all of them
commit or all of them roll back their changes.

We (Masahiko Sawada and I) are reopening this thread and trying to contribute
to it.

2PC for FDW

The patch provides support for atomic commit for transactions involving
foreign servers. When the transaction makes changes to foreign servers,
either all the changes to all the foreign servers commit or they all roll back.

The new 2PC for FDW patch includes the following things:
1. The 0001 patch introduces a generic feature. All kinds of FDWs that
support 2PC, such as oracle_fdw, mysql_fdw, postgres_fdw, etc., can be
involved in the transaction.

Currently we can push some conditions down to shard nodes; in particular, 9.6
introduced the direct-modify feature. But such a transaction modifying data on
a shard node is not guaranteed to be executed atomically.
Using the 0002 patch, that modification is executed with 2PC. It means that we
can almost provide a sharding solution using multiple PostgreSQL servers (one
parent node and several shard nodes).

For multi-master, we definitely need a transaction manager, but that
transaction manager can probably use this 2PC for FDW feature to manage
distributed transactions.

2. The 0002 patch makes it possible for postgres_fdw to use 2PC.

The 0002 patch makes postgres_fdw use the APIs below. These APIs are generic
features which can be used by all kinds of FDWs.

a. Execute PREPARE TRANSACTION and COMMIT/ABORT PREPARED instead of
COMMIT/ABORT on a foreign server which supports 2PC.
b. Manage information of foreign prepared transactions resolver
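For illustration, the intended end-user behavior is roughly the following.
This is only a sketch: the table and server names are hypothetical, and the
exact remote commands are determined by the patch, not by this example.

BEGIN;
INSERT INTO ft1_on_server_a VALUES (1);  -- postgres_fdw table on foreign server A
INSERT INTO ft2_on_server_b VALUES (1);  -- postgres_fdw table on foreign server B
COMMIT;
-- With the 0002 patch, each modified foreign server receives
-- PREPARE TRANSACTION at local commit, followed by COMMIT PREPARED
-- (or ROLLBACK PREPARED on failure) instead of a plain COMMIT/ABORT,
-- so the changes on both servers either all commit or all roll back.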

Masahiko Sawada will post the patch.

Suggestions and comments would be helpful for implementing this feature.

Regards,

Vinayak Pokale

On Mon, Feb 1, 2016 at 11:14 PM, Alvaro Herrera <alvhe...@2ndquadrant.com>
wrote:

> Alvaro Herrera wrote:
> > Ashutosh Bapat wrote:
> >
> > > Here's updated patch. I didn't use version numbers in file names in my
> > > previous patches. I am starting from this onwards.
> >
> > Um, I tried this patch and it doesn't apply at all.  There's a large
> > number of conflicts.  Please update it and resubmit to the next
> > commitfest.
>
> Also, please run "git show --check" of "git diff origin/master --check"
> and fix the whitespace problems that it shows.  It's an easy thing but
> there's a lot of red squares in my screen.
>
> --
> Álvaro Herrera                http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>


Re: [HACKERS] [PROPOSAL] VACUUM Progress Checker.

2016-02-26 Thread Vinayak Pokale
Hello,

On Fri, Feb 26, 2016 at 6:19 PM, Amit Langote <langote_amit...@lab.ntt.co.jp
> wrote:

>
> Hi Vinayak,
>
> Thanks for updating the patch! A quick comment:
>
> On 2016/02/26 17:28, poku...@pm.nttdata.co.jp wrote:
> >> CREATE VIEW pg_stat_vacuum_progress AS
> >>   SELECT S.s[1] as pid,
> >>  S.s[2] as relid,
> >>  CASE S.s[3]
> >>WHEN 1 THEN 'Scanning Heap'
> >>WHEN 2 THEN 'Vacuuming Index and Heap'
> >>ELSE 'Unknown phase'
> >>  END,
> >>
> >>   FROM pg_stat_get_command_progress(PROGRESS_COMMAND_VACUUM) as S;
> >>
> >> # The name of the function could be other than *_command_progress.
> > The name of function is updated as pg_stat_get_progress_info() and also
> updated the function.
> > Updated the pg_stat_vacuum_progress view as suggested.
>
> So, pg_stat_get_progress_info() now accepts a parameter to distinguish
> different commands.  I see the following in its definition:
>
> +   /*  Report values for only those backends which are
> running VACUUM
> command */
> +   if (cmdtype == COMMAND_LAZY_VACUUM)
> +   {
> +   /*Progress can only be viewed by role member.*/
> +   if (has_privs_of_role(GetUserId(),
> beentry->st_userid))
> +   {
> +   values[2] =
> UInt32GetDatum(beentry->st_progress_param[0]);
> +   values[3] =
> UInt32GetDatum(beentry->st_progress_param[1]);
> +   values[4] =
> UInt32GetDatum(beentry->st_progress_param[2]);
> +   values[5] =
> UInt32GetDatum(beentry->st_progress_param[3]);
> +   values[6] =
> UInt32GetDatum(beentry->st_progress_param[4]);
> +   values[7] =
> UInt32GetDatum(beentry->st_progress_param[5]);
> +   if (beentry->st_progress_param[1] != 0)
> +   values[8] =
> Float8GetDatum(beentry->st_progress_param[2] * 100 /
> beentry->st_progress_param[1]);
> +   else
> +   nulls[8] = true;
> +   }
> +   else
> +   {
> +   nulls[2] = true;
> +   nulls[3] = true;
> +   nulls[4] = true;
> +   nulls[5] = true;
> +   nulls[6] = true;
> +   nulls[7] = true;
> +   nulls[8] = true;
> +   }
> +   }
>
> How about doing this in a separate function which takes the command id as
> parameter and returns an array of values and the number of values (per
> command id). pg_stat_get_progress_info() then creates values[] and nulls[]
> arrays from that and returns that as result set.  It will be a cleaner
> separation of activities, perhaps.
>
> +1



Re: [HACKERS] [PROPOSAL] VACUUM Progress Checker.

2016-01-26 Thread Vinayak Pokale
Hi,

Please find attached updated patch with an updated interface.

On Jan 26, 2016 11:22 AM, "Vinayak Pokale" <vinpok...@gmail.com> wrote:
>
> Hi Amit,
>
> Thank you for reviewing the patch.
>
> On Jan 26, 2016 9:51 AM, "Amit Langote" <langote_amit...@lab.ntt.co.jp>
wrote:
> >
> >
> > Hi Vinayak,
> >
> > On 2016/01/25 20:58, Vinayak Pokale wrote:
> > > Hi,
> > >
> > > Please find attached updated patch with an updated interface.
> > >
> >
> > Thanks for updating the patch.
> >
> > > I added the below interface to update the
> > > scanned_heap_pages,scanned_index_pages and index_scan_count only.
> > > void pgstat_report_progress_scanned_pages(int num_of_int, uint32
> > > *progress_scanned_pages_param)
> >
> > I think it's still the same interface with the names changed. IIRC, what
> > was suggested was to provide a way to not have to pass the entire array
> > for the update of a single member of it. Just pass the index of the
> > updated member and its new value. Maybe, something like:
> >
> > void pgstat_progress_update_counter(int index, uint32 newval);
> >
> > The above function would presumably update the value of
> > beentry.st_progress_counter[index] or something like that.

Following interface functions are added:

/*
 * index: in the array of uint32 counters in the beentry
 * counter: new value of the (index)th counter
 */
void
pgstat_report_progress_update_counter(int index, uint32 counter)

/*
 * Called to update the VACUUM progress phase.
 * msg: new value of the (index)th message
 */
void
pgstat_report_progress_update_message(int index, char
msg[N_PROGRESS_PARAM][PROGRESS_MESSAGE_LENGTH])

Regards,
Vinayak


Vacuum_progress_checker_v10.patch
Description: Binary data



Re: [HACKERS] [PROPOSAL] VACUUM Progress Checker.

2016-01-25 Thread Vinayak Pokale
Hi,

Please find attached updated patch with an updated interface.

I added the below interface to update only the scanned_heap_pages,
scanned_index_pages, and index_scan_count:
void pgstat_report_progress_scanned_pages(int num_of_int, uint32
*progress_scanned_pages_param)

Other interface functions which are called at the beginning:
void pgstat_report_progress_set_command_target(Oid relid)

Regards,
Vinayak

On Wed, Jan 13, 2016 at 3:16 PM, Amit Langote <langote_amit...@lab.ntt.co.jp
> wrote:

> On 2016/01/12 11:28, Vinayak Pokale wrote:
> > On Jan 12, 2016 11:22 AM, "Amit Langote" <langote_amit...@lab.ntt.co.jp>
> > wrote:
> >>
> >> On 2016/01/12 10:30, Amit Langote wrote:
> >>> I'm slightly concerned that the latest patch doesn't incorporate any
> >>> revisions to the original pgstat interface per Robert's comments in
> [1].
> >>
> >> I meant to say "originally proposed pgstat interface on this thread".
> >
> > Yes.
> > Robert's comments related to pgstat interface needs to be address.
> > I will update it.
>
> So, I updated the patch status to "Waiting on Author".
>
> Thanks,
> Amit
>
>
>


Vacuum_progress_checker_v9.patch
Description: Binary data



Re: [HACKERS] [PROPOSAL] VACUUM Progress Checker.

2016-01-25 Thread Vinayak Pokale
Hi Amit,

Thank you for reviewing the patch.
On Jan 26, 2016 9:51 AM, "Amit Langote" <langote_amit...@lab.ntt.co.jp>
wrote:
>
>
> Hi Vinayak,
>
> On 2016/01/25 20:58, Vinayak Pokale wrote:
> > Hi,
> >
> > Please find attached updated patch with an updated interface.
> >
>
> Thanks for updating the patch.
>
> > I added the below interface to update the
> > scanned_heap_pages,scanned_index_pages and index_scan_count only.
> > void pgstat_report_progress_scanned_pages(int num_of_int, uint32
> > *progress_scanned_pages_param)
>
> I think it's still the same interface with the names changed. IIRC, what
> was suggested was to provide a way to not have to pass the entire array
> for the update of a single member of it. Just pass the index of the
> updated member and its new value. Maybe, something like:
>
> void pgstat_progress_update_counter(int index, uint32 newval);
>
> The above function would presumably update the value of
> beentry.st_progress_counter[index] or something like that.

Understood. I will update the patch.

Regards,
Vinayak


[HACKERS] Typo in sequence.c

2016-01-14 Thread Vinayak Pokale
Hi,

I found a typo in sequence.c
Please check the attached patch.

Regards,
Vinayak


typo-sequence-c.patch
Description: Binary data



Re: [HACKERS] [PROPOSAL] VACUUM Progress Checker.

2016-01-11 Thread Vinayak Pokale
On Jan 12, 2016 11:22 AM, "Amit Langote" <langote_amit...@lab.ntt.co.jp>
wrote:
>
> On 2016/01/12 10:30, Amit Langote wrote:
> > I'm slightly concerned that the latest patch doesn't incorporate any
> > revisions to the original pgstat interface per Robert's comments in [1].
>
> I meant to say "originally proposed pgstat interface on this thread".

Yes.
Robert's comments related to pgstat interface needs to be address.
I will update it.

Regards,
Vinayak Pokale


Re: [HACKERS] [PROPOSAL] VACUUM Progress Checker.

2016-01-11 Thread Vinayak Pokale
Hi Sudhir,

On Jan 7, 2016 3:02 AM, "Sudhir Lonkar-2" <sudhir.lon...@nttdata.com> wrote:
>
> Hello,
> >Please find attached patch addressing following comments
> I have tested this patch.
> It is showing empty (null) in phase column of pg_stat_vacuum_progress,
when
> I switched to a unprivileged user.
> In the previous patch, it is showing  in phase
> column.
Yes. I will update the patch.
Regards,
Vinayak Pokale


Re: [HACKERS] [PROPOSAL] VACUUM Progress Checker.

2015-12-25 Thread Vinayak Pokale
Hi,
Please find attached patch addressing following comments.

>The relation OID should be reported and not its name. In case of a
>relation rename that would not be cool for tracking, and most users
>are surely going to join with other system tables using it.
The relation OID is reported instead of relation name.
The following interface function is called at the beginning to report the
relation OID once.
void pgstat_report_command_target(Oid relid)

>Regarding pg_stat_get_vacuum_progress(): I think a backend can simply be
>skipped if (!has_privs_of_role(GetUserId(), beentry->st_userid)) (cannot
>put that in plain English, :))
Updated in the attached patch.

In the previous patch, ACTIVITY_IS_VACUUM was set unnecessarily for
VACOPT_FULL, which is not covered by lazy_scan_heap().
I have updated that in the attached patch and also renamed ACTIVITY_IS_VACUUM
to COMMAND_LAZY_VACUUM.

Added documentation for view.
Some more comments need to be addressed.

Regards,

Vinayak Pokale

On Sat, Dec 12, 2015 at 2:07 AM, Robert Haas <robertmh...@gmail.com> wrote:

> On Fri, Dec 11, 2015 at 1:25 AM, Michael Paquier
> <michael.paqu...@gmail.com> wrote:
> >> For another thing, there are definitely going to be
> >> some people that want the detailed information - and I can practically
> >> guarantee that if we don't make it available, at least one person will
> >> write a tool that tries to reverse-engineer the detailed progress
> >> information from whatever we do report.
> >
> > OK, so this justifies the fact of having detailed information, but
> > does it justify the fact of having precise and accurate data? ISTM
> > that having detailed information and precise information are two
> > different things. The level of details is defined depending on how
> > verbose we want the information to be, and the list you are giving
> > would fulfill this requirement nicely for VACUUM. The level of
> > precision/accuracy at which this information is provided though
> > depends at which frequency we want to send this information. For
> > long-running VACUUM it does not seem necessary to update the fields of
> > the progress tracker each time a counter needs to be incremented.
> > CLUSTER has been mentioned as well as a potential target for the
> > progress facility, but it seems that it enters as well in the category
> > of things that would need to be reported on a slower frequency pace
> > than "each-time-a-counter-is-incremented".
> >
> > My impression is just based on the needs of VACUUM and CLUSTER.
> > Perhaps I am lacking imagination regarding the potential use cases of
> > the progress facility though in cases where we'd want to provide
> > extremely precise progress information :)
> > It just seems to me that this is not a requirement for VACUUM or
> > CLUSTER. That's all.
>
> It's not a hard requirement, but it should be quite easy to do without
> adding any significant overhead.  All you need to do is something
> like:
>
> foo->changecount++;
> pg_write_barrier();
> foo->count_of_blocks++;
> pg_write_barrier();
> foo->changecount++;
>
> I suspect that's plenty cheap enough to do for every block.
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


Vacuum_progress_checker_v8.patch
Description: Binary data



Re: [HACKERS] [PROPOSAL] VACUUM Progress Checker.

2015-11-30 Thread Vinayak
Thanks for the v7.
Please check the comment below.
-Table name in the vacuum progress

+ snprintf(progress_message[0], PROGRESS_MESSAGE_LENGTH, "%s.%s",
schemaname,relname);

In the vacuum progress view, the table_name column shows only the first 30
characters of the table name.
postgres=# create table test_vacuum_progress_in_postgresql(c1 int,c2 text);
postgres=# select * from pg_stat_vacuum_progress ;
-[ RECORD 1 ]-------+-------------------------------
pid                 | 12284
table_name          | public.test_vacuum_progress_i
phase               | Scanning Heap
total_heap_pages    | 41667
scanned_heap_pages  | 25185
percent_complete    | 60
total_index_pages   | 0
scanned_index_pages | 0
index_scan_count    | 0




-
Regards,
Vinayak,

--
View this message in context: 
http://postgresql.nabble.com/PROPOSAL-VACUUM-Progress-Checker-tp5855849p5875614.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.

