Re: [HACKERS] PATCH: Batch/pipelining support for libpq

2017-10-04 Thread Vaishnavi Prabakaran
On Mon, Oct 2, 2017 at 8:31 PM, Daniel Gustafsson <dan...@yesql.se> wrote:

> > On 13 Sep 2017, at 07:44, Vaishnavi Prabakaran <
> vaishnaviprabaka...@gmail.com> wrote:
> >
> > On Wed, Sep 13, 2017 at 3:33 PM, Craig Ringer <cr...@2ndquadrant.com
> <mailto:cr...@2ndquadrant.com>> wrote:
> >
> > I really do not like calling it "commit" as that conflates with a
> database commit.
> >
> > A batch can embed multiple BEGINs and COMMITs. It's entirely possible
> for an earlier part of the batch to succeed and commit, then a later part
> to fail, if that's the case. So that name is IMO wrong.
> >
> > Ok, SendQueue seems ok to me as well. Will change it in next version.
> >
> > +"a"?
> >
> > Hmm, Can you explain the question please. I don't understand.
> >
> > s/of new query/of a new query/
> >
> > Thanks for explaining. Will change this too in next version.
>
> Based on the discussions in this thread, and that a new version hasn’t been
> submitted, I’m marking this Returned with Feedback.  Please re-submit the
> new
> version in an upcoming commitfest when ready.



Thanks for the suggestion. OK, I will create a new entry in the upcoming
commitfest; the attached patch addresses the above review comments.

Thanks & Regards,
Vaishnavi,
Fujitsu Australia.


0001-Pipelining-batch-support-for-libpq-code-v14.patch
Description: Binary data



Re: [HACKERS] Refactor handling of database attributes between pg_dump and pg_dumpall

2017-09-28 Thread Vaishnavi Prabakaran
Hi,


On Sat, Sep 9, 2017 at 1:29 PM, Haribabu Kommi 
wrote:

>
> Fixed patch is attached.
>


The patch applies, but with a lot of noise due to indenting with spaces.
I ran the regression tests located in the src/test/regress,
src/test/modules/test_pg_dump, src/bin/pg_dump and src/bin/pg_upgrade
folders, and no issues were observed.



+  --enable-pgdumpall-behaviour
+  
+   
+This option is for the use of pg_dumpall or
+pg_upgrade utility to dump the database objects
+by pg_dump for a complete dump.
+This option can only be used with -C/--create.
+Its use for other ourposes is not recommended or supported.
+The behavior of the option may change in future releases without
notice.
+   
+  
+ 
+
+ 

s/ourposes/purposes/


The option name "--enable-pgdumpall-behaviour" is very generic; it would be
better to rename it to something that reflects its functionality, such as
--skip-default-db-create or --no-default-db-create.

Thanks & Regards,
Vaishnavi,
Fujitsu Australia.


Re: [HACKERS] Simplify ACL handling for large objects and removal of superuser() checks

2017-09-25 Thread Vaishnavi Prabakaran
On Tue, Sep 26, 2017 at 11:45 AM, Michael Paquier <michael.paqu...@gmail.com
> wrote:

> On Tue, Sep 26, 2017 at 9:04 AM, Vaishnavi Prabakaran
> <vaishnaviprabaka...@gmail.com> wrote:
> > Yes, I realized that on further reading of the patch. What led to the
> > confusion is that in the third patch, the updated documentation (copied
> > below) still says that reading from a descriptor opened with INV_WRITE
> > is possible. I think we need a correction here to reflect the modified
> > code behavior.
> >
> > + or other transactions.  Reading from a descriptor opened with
> > + INV_WRITE or INV_READ |
> > + INV_WRITE returns data that reflects all writes of
> > + other committed transactions as well as writes of the current
> > + transaction.
>
> Indeed, you are right. There is an error here. This should read as
> "INV_READ | INV_WRITE" only. Using "INV_WRITE" implies that reads
> cannot happen.
>
>
Thanks for correcting.

I have moved the CF entry to "Ready for Committer". Though my vote is for
keeping the existing API behavior, with write implying read, I will let the
committer decide whether the following behavior change is OK or not:
"Reading from a large-object descriptor opened with INV_WRITE is NOT
possible."

Thanks & Regards,
Vaishnavi
Fujitsu Australia.


Re: [HACKERS] Replication status in logical replication

2017-09-25 Thread Vaishnavi Prabakaran
Hi,

On Wed, Sep 13, 2017 at 9:59 AM, Daniel Gustafsson  wrote:

>
> I’m not entirely sure why this was flagged as "Waiting for Author” by the
> automatic run, the patch applies for me and builds so resetting back to
> “Needs
> review”.
>
>
The patch applies and builds cleanly. I tested it with one publisher and one
subscriber, and can confirm that the replication state after restarting the
server is now "streaming" and not "catchup".
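For reference, a check along these lines on the publisher is enough to see
the state (only a sketch, assuming an open libpq connection "conn";
pg_stat_replication is the standard view):

PGresult   *res = PQexec(conn,
        "SELECT application_name, state FROM pg_stat_replication");

if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) > 0)
    printf("state = %s\n", PQgetvalue(res, 0, 1));  /* expect "streaming" */
PQclear(res);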

I don't find any issues with the code, and the patch looks ready for
committer to me; I have marked the CF entry accordingly.

Thanks & Regards,
Vaishnavi,
Fujitsu Australia.


Re: [HACKERS] Simplify ACL handling for large objects and removal of superuser() checks

2017-09-25 Thread Vaishnavi Prabakaran
Hi,

On Tue, Sep 19, 2017 at 5:12 PM, Michael Paquier 
wrote:

>
> >>@@ -163,22 +150,16 @@ lo_read(int fd, char *buf, int len)
> >> 
> >> + if ((lobj->flags & IFS_RDLOCK) == 0)
> >>+ ereport(ERROR,
> >>+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
> >>+ errmsg("large object descriptor %d was not opened for reading",
> >>+ fd)));
> >
> > Do we ever reach this error? Because per my understanding
>
> This error can be reached, and it is part of the regression tests. One
> query which passed previously is now failing:
> +SELECT loread(lo_open(1001, x'2'::int), 32);   -- fail, wrong mode
> +ERROR:  large object descriptor 0 was not opened for reading



Yes, I realized that on further reading of the patch. What led to the
confusion is that in the third patch, the updated documentation (copied
below) still says that reading from a descriptor opened with INV_WRITE is
possible. I think we need a correction here to reflect the modified code
behavior.

+ or other transactions.  Reading from a descriptor opened with
+ INV_WRITE or INV_READ |
+ INV_WRITE returns data that reflects all writes of
+ other committed transactions as well as writes of the current
+ transaction.


Thanks & Regards,
Vaishnavi,
Fujitsu Australia.


[HACKERS] [PATCH] Minor patch to correct symbol name in logs

2017-09-19 Thread Vaishnavi Prabakaran
Hi,

The backend's lo_* functions were renamed to avoid conflicting with the
libpq prototypes in commit 6fc547960dbe0b8bd6cefae5ab7ec3605a5c46fc.

The log messages inside those functions still refer to the old symbol names;
here is a small patch to update them.


Thanks & Regards,
Vaishnavi,
Fujitsu Australia.


0001-Debug-log-correction-to-reflect-changed-function-nam.patch
Description: Binary data



Re: [HACKERS] Simplify ACL handling for large objects and removal of superuser() checks

2017-09-18 Thread Vaishnavi Prabakaran
Hi,


On Mon, Aug 14, 2017, Michael Paquier  wrote:
>Attached is a set of 3 patches:

I reviewed the patch; firstly, it applies cleanly without any noise.


>- 0001 removes ALLOW_DANGEROUS_LO_FUNCTIONS

I think it is better to remove "ALLOW_DANGEROUS_LO_FUNCTIONS" from
pg_config_manual.h as well.


>- 0002 replaces the superuser checks with GRANT permissions
>- 0003 refactors the LO code to improve the ACL handling. Note that
>this patch introduces a more debatable change: now there is no
>distinction between INV_WRITE and INV_WRITE|INV_READ, as when
>INV_WRITE is used INV_READ is implied. The patch as proposed does a
>complete split of both permissions to give a system more natural and
>more flexible. The current behavior is documented.

>@@ -163,22 +150,16 @@ lo_read(int fd, char *buf, int len)
> 
> + if ((lobj->flags & IFS_RDLOCK) == 0)
>+ ereport(ERROR,
>+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
>+ errmsg("large object descriptor %d was not opened for reading",
>+ fd)));


Do we ever reach this error? Per my understanding:

1) If the application didn't call lo_open before lo_read, then the file
descriptor validation at the beginning of this function should catch it.

2) If the application called lo_open with write mode only, it should be able
to read as well, per the documentation.

OK, in inv_open(), IFS_RDLOCK is set only when the explicit READ mode is
requested, so that causes the lo_read failure when a large object is opened
in WRITE mode. I think the documented behavior is more appropriate: after
the ACL checks for SELECT and UPDATE permissions in inv_open(), the
IFS_RDLOCK flag should be set regardless of the mode specified (see the
sketch below).
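Something along these lines is what I have in mind. This is only a sketch of
the suggestion; the descflags variable name is an assumption about
inv_open()'s internals, not code from the actual patch:

/* after the SELECT/UPDATE ACL checks in inv_open() have passed */
if (flags & INV_WRITE)
    descflags |= IFS_WRLOCK | IFS_RDLOCK;   /* write implies read */
else if (flags & INV_READ)
    descflags |= IFS_RDLOCK;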

Thanks & Regards,
Vaishnavi,
Fujitsu Australia.


Re: [HACKERS] PATCH: Batch/pipelining support for libpq

2017-09-12 Thread Vaishnavi Prabakaran
On Wed, Sep 13, 2017 at 3:33 PM, Craig Ringer  wrote:

>
> I really do not like calling it "commit" as that conflates with a database
> commit.
>
> A batch can embed multiple BEGINs and COMMITs. It's entirely possible for
> an earlier part of the batch to succeed and commit, then a later part to
> fail, if that's the case. So that name is IMO wrong.
>

Ok, SendQueue seems ok to me as well. Will change it in next version.



>>> +"a"?
>>>
>>
>> Hmm, Can you explain the question please. I don't understand.
>>
>
> s/of new query/of a new query/
>
>
Thanks for explaining. Will change this too in next version.

Thanks & Regards,
Vaishnavi,
Fujitsu Australia.


Re: [HACKERS] PATCH: Batch/pipelining support for libpq

2017-09-12 Thread Vaishnavi Prabakaran
On Wed, Aug 23, 2017 at 7:40 PM, Andres Freund  wrote:

>
>
>
> > Am failing to see the benefit in allowing user to set
> > PQBatchAutoFlush(true|false) property? Is it really needed?
>
> I'm inclined not to introduce that for now. If somebody comes up with a
> convincing usecase and numbers, we can add it later. Libpq API is set in
> stone, so I'd rather not introduce unnecessary stuff...
>
>
Thanks for reviewing the patch, and yes, OK.


>
>
> > +   
> > +Much like asynchronous query mode, there is no performance
> disadvantage to
> > +using batching and pipelining. It increases client application
> complexity
> > +and extra caution is required to prevent client/server deadlocks but
> > +can sometimes offer considerable performance improvements.
> > +   
>
> That's not necessarily true, is it? Unless you count always doing
> batches of exactly size 1.
>

Client application complexity is increased in batch mode because the
application needs to remember the query queue status. Result processing can
be done at any time, so the application needs to know up to which query the
results have been consumed (see the sketch below).
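A minimal sketch of that bookkeeping, using the function names from the
current patch (PQbatchBegin, PQbatchSyncQueue, PQbatchQueueProcess,
PQexitBatchMode); error handling is trimmed:

#include <libpq-fe.h>

static void
run_batch(PGconn *conn, const char *const *queries, int nqueries)
{
    int         pending = 0;

    PQbatchBegin(conn);
    for (int i = 0; i < nqueries; i++)
        if (PQsendQueryParams(conn, queries[i], 0, NULL, NULL, NULL, NULL, 0))
            pending++;              /* remember what is still queued */
    PQbatchSyncQueue(conn);         /* marks the end of this batch */

    while (pending > 0 && PQbatchQueueProcess(conn))
    {
        PGresult   *res;

        /* the application must know which queued query this result is for */
        while ((res = PQgetResult(conn)) != NULL)
            PQclear(res);
        pending--;
    }
    PQexitBatchMode(conn);
}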


> +   
> > +Use batches when your application does lots of small
> > +INSERT, UPDATE and
> > +DELETE operations that can't easily be
> transformed into
> > +operations on sets or into a
> > +COPY
> operation.
> > +   
>
> Aren't SELECTs also a major beneficiarry of this?
>


Hmm, though SELECTs also benefit from batch mode, running multiple SELECTs
in batch mode will fill up memory rapidly and might not be as beneficial as
the other operations listed.



> > +   
> > +Batching is less useful when information from one operation is
> required by the
> > +client before it knows enough to send the next operation.
>
> s/less/not/
>
>
Corrected.


>
> > +   
> > +
> > + The batch API was introduced in PostgreSQL 10.0, but clients using
> PostgresSQL 10.0 version of libpq can
> > + use batches on server versions 8.4 and newer. Batching works on
> any server
> > + that supports the v3 extended query protocol.
> > +
> > +   
>
> Where's the 8.4 coming from?
>
>

I guess it is 7.4, where "PQsendQueryParams" was introduced, and not 8.4.
Corrected.


> +   
> > +
> > + It is best to use batch mode with libpq
> in
> > + non-blocking mode.
> If used in
> > + blocking mode it is possible for a client/server deadlock to
> occur. The
> > + client will block trying to send queries to the server, but the
> server will
> > + block trying to send results from queries it has already processed
> to the
> > + client. This only occurs when the client sends enough queries to
> fill its
> > + output buffer and the server's receive buffer before switching to
> > + processing input from the server, but it's hard to predict exactly
> when
> > + that'll happen so it's best to always use non-blocking mode.
> > +
> > +   
>
> Mention that nonblocking only actually helps if send/recv is done as
> required, and can essentially require unbound memory?  We probably
> should either document or implement some smarts about when to signal
> read/write readyness. Otherwise we e.g. might be receiving tons of
> result data without having sent the next query - or the other way round.
>
>

Added a cautionary statement to the documentation. Again, this is one of the
reasons why SELECT queries are not as beneficial in batch mode; a sketch of
the non-blocking send discipline follows below.
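A sketch of the send-side discipline this refers to, built only on the
existing non-blocking libpq calls (PQflush/PQconsumeInput); the batch entry
points themselves are unchanged:

/*
 * Push queued batch data without blocking.  Whenever the socket is
 * write-blocked, drain the server's output so it never stalls waiting for
 * us.  Returns 0 once everything has been sent, -1 on connection failure.
 */
static int
flush_batch_output(PGconn *conn)
{
    int         r;

    while ((r = PQflush(conn)) == 1)
    {
        if (PQconsumeInput(conn) == 0)
            return -1;
        /* a real client would also wait on PQsocket(conn) here */
    }
    return r;
}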



> Maybe note that multiple batches can be "in flight"?
> I.e. PQbatchSyncQueue() is about error handling, nothing else? Don't
> have a great idea, but we might want to rename...
>
>
This function not only does error handling but also sends the "Sync" message
to the backend. In batch mode the "Sync" message is not sent with every
query; it is sent only via this function, to mark the end of the implicit
transaction (see the sketch below). I have renamed it to PQbatchCommitQueue;
kindly let me know if you can think of a better name.
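On the "multiple batches in flight" point, a short sketch (using the
PQbatchSyncQueue name quoted above; "demo" is just a placeholder table): the
call only queues a Sync message ending the current implicit transaction and
does not wait for results, so the next batch can be queued immediately:

PQsendQueryParams(conn, "INSERT INTO demo VALUES (1)",
                  0, NULL, NULL, NULL, NULL, 0);
PQbatchSyncQueue(conn);     /* ends the first implicit transaction */

PQsendQueryParams(conn, "INSERT INTO demo VALUES (2)",
                  0, NULL, NULL, NULL, NULL, 0);
PQbatchSyncQueue(conn);     /* second batch queued while the first is in flight */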



> > +
> > + 
> > +  The client must not assume that work is committed when it
> > +  sends a COMMIT, only when
> the
> > +  corresponding result is received to confirm the commit is
> complete.
> > +  Because errors arrive asynchronously the application needs to be
> able to
> > +  restart from the last received committed
> change and
> > +  resend work done after that point if something goes wrong.
> > + 
> > +
>
> This seems fairly independent of batching.
>
>
Yes, and the reason it is explicitly specified for batch mode is that if
more than one explicit transaction is used in a single batch, the failure of
one transaction will lead to skipping the subsequent transactions until the
end of the current batch is reached. This behavior is specific to batch
mode, so I think adding a precautionary note here is needed.



> > +   
> > +
> > +   
> > +Interleaving result processing and query dispatch
> > +
> > +
> > + To avoid deadlocks on large batches the client should be
> 

Re: [HACKERS] PATCH: Batch/pipelining support for libpq

2017-08-09 Thread Vaishnavi Prabakaran
Andres Freund wrote:
> If you were to send a gigabyte of queries, it'd buffer them all up in
> memory... So some more intelligence is going to be needed.

Firstly, sorry for the delayed response as I got busy with other
activities.

To buffer up the queries before flushing them to the socket, I think
"conn->outCount >= 65536" is OK to use as the threshold, as 64kB is
considered safe on Windows per the comments in the pqSendSome() API.

Attached is a code patch that replaces the pqFlush calls with pqBatchFlush
in the asynchronous libpq call flow; pqFlush is still used in the
"PQbatchSyncQueue" and "PQexitBatchMode" functions. A rough sketch of the
pqBatchFlush idea is below.

I am failing to see the benefit of allowing the user to set a
PQBatchAutoFlush(true|false) property. Is it really needed?

Thanks & Regards,
Vaishnavi,
Fujitsu Australia.


0001-Pipelining-batch-support-for-libpq-code-v12.patch
Description: Binary data



Re: [HACKERS] PATCH: Batch/pipelining support for libpq

2017-06-19 Thread Vaishnavi Prabakaran
On Tue, Jun 20, 2017 at 8:49 AM, Andres Freund <and...@anarazel.de> wrote:

> On 2017-04-05 15:45:26 -0700, Andres Freund wrote:
> > Hi,
> >
> > On 2017-04-05 17:00:42 +1000, Vaishnavi Prabakaran wrote:
> > > Regarding test patch, I have corrected the test suite after David
> Steele's
> > > comments.
> > > Also, I would like to mention that a companion patch was submitted by
> David
> > > Steele up-thread.
> > >
> > > Attached the latest code and test patch.
> >
> > My impression is that this'll need a couple more rounds of review. Given
> > that this'll establish API we'll pretty much ever going to be able to
> > change/remove, I think it'd be a bad idea to rush this into v10.
> > Therefore I propose moving this to the next CF.
>
> Craig, Vaishnavi, everyone else: Are you planning to continue to work on
> this for v11?  I'm willing to do another round, but only if it's
> worthwhile.
>

Yes, I am willing to continue working on this patch for v11.


>
> FWIW, I still think this needs a pgbench or similar example integration,
> so we can actually properly measure the benefits.
>

I will investigate this further.


Thanks & Regards,
Vaishnavi,
Fujitsu Australia.


Re: [HACKERS] Error-like LOG when connecting with SSL for password authentication

2017-05-22 Thread Vaishnavi Prabakaran
On Mon, May 22, 2017 at 5:10 PM, Michael Paquier 
wrote:

> If the protocol version is SSL
> 3.0 or TLS 1.0, this result code is returned only if a closure alert
> has occurred in the protocol, i.e. if the connection has been closed
> cleanly. Note that in this case SSL_ERROR_ZERO_RETURN does not
> necessarily indicate that the underlying transport has been closed.
>

I guess this error code exists even for the SSLv2 protocol. In that case,
don't we need to keep the current code for this error code?

Regards,
Vaishnavi,
Fujitsu Australia.


Re: [HACKERS] Disallowing multiple queries per PQexec()

2017-05-17 Thread Vaishnavi Prabakaran
On Thu, May 18, 2017 at 2:56 AM, Surafel Temesgen 
wrote:

> Sorry for being very late. I also think guc version of the patch can be
> acceptable and useful.
>
> I modified the patch as such and added to commitfest 2017-07.
>
>
>
You need documentation changes in the "libpq - C Library" chapter's PQexec
section, where it is mentioned that the command string can contain multiple
SQL commands; explain there how the new GUC changes that behavior.

I think a GUC name like "multiple_query_execution", set to ON/OFF, would be
better. I expect others will also come up with suggestions here, as the
current name doesn't fit well with the other existing GUCs.

Also, consider adding this new GUC in postgresql.conf.sample file.

Regards,
Vaishnavi,
Fujitsu Australia.


Re: [HACKERS] COPY FROM STDIN behaviour on end-of-file

2017-05-16 Thread Vaishnavi Prabakaran
On Wed, May 17, 2017 at 3:52 AM, Alvaro Herrera 
wrote:

> Tom Lane wrote:
>
> > BTW, it would be a good idea for somebody to check this out on Windows,
> > assuming there's a way to generate a keyboard EOF signal there.
>
> I last used a Windows command line almost two decades ago now, but
> Ctrl-Z used to do it.
>

Ctrl-Z + Enter on Windows generates the EOF signal. I verified this issue,
and it is not reproducible on Windows.

Thanks & Regards,
Vaishnavi
Fujitsu Australia.


Re: [HACKERS] COPY FROM STDIN behaviour on end-of-file

2017-05-16 Thread Vaishnavi Prabakaran
On Tue, May 16, 2017 at 8:40 AM, Thomas Munro  wrote:

>
> If you hit ^d while COPY FROM STDIN is reading then subsequent COPY
> FROM STDIN commands return immediately.


Hi, I could not reproduce this issue. Even after Ctrl+D, subsequent COPY
FROM commands read the input properly. Is there any specific step you
followed, or can you share a sample test case?

Thanks & Regards,
Vaishnavi
Fujitsu Australia.


Re: [HACKERS] Removal of plaintext password type references

2017-05-10 Thread Vaishnavi Prabakaran
On Wed, May 10, 2017 at 5:58 PM, Heikki Linnakangas <hlinn...@iki.fi> wrote:

> On 05/10/2017 08:01 AM, Michael Paquier wrote:
>
>> On Wed, May 10, 2017 at 10:51 AM, Vaishnavi Prabakaran
>> <vaishnaviprabaka...@gmail.com> wrote:
>>
>>> Following recent removal of support to store password in plain text,
>>> modified the code to
>>>
>>> 1. Remove  "PASSWORD_TYPE_PLAINTEXT" macro
>>> 2. Instead of using "get_password_type" to retrieve the encryption method
>>> AND to check if the password is already encrypted or not, modified the
>>> code
>>> to
>>> a. Use "get_password_encryption_type" function to retrieve encryption
>>> method.
>>> b. Use "isPasswordEncrypted" function to check if the password is already
>>> encrypted or not.
>>>
>>> These changes are mainly to increase code readability and does not change
>>> underlying functionality.
>>>
>>
>> Actually, this patch reduces readability.
>>
>
> Yeah, I don't think this is an improvement. Vaishnavi, did you find the
> current code difficult? Perhaps some extra comments somewhere would help?
>

Thanks for being willing to add extra comments. The current code is not
difficult, but it kind of gives the impression that plaintext password
storage still exists, by keeping the "PASSWORD_TYPE_PLAINTEXT" macro and
having switch-case checks for that macro.

I see "get_password_type" function call is used for 2 purposes. One is to
check the actual password encryption type during authentication process and
another purpose is to verify if the password passed is encrypted or not -
used in create/alter role command before checking the strength of
password(via passwordcheck module). So, I thought my patch will make this
purpose clear.  Nevertheless, if people thinks this actually reduces their
readability then I don't mind to go with the flow.


Thanks & Regards,
Vaishnavi
Fujitsu Australia.


[HACKERS] Removal of plaintext password type references

2017-05-09 Thread Vaishnavi Prabakaran
Hi All,

Following the recent removal of support for storing passwords in plain
text, I have modified the code to:

1. Remove the "PASSWORD_TYPE_PLAINTEXT" macro.
2. Instead of using "get_password_type" both to retrieve the encryption
method and to check whether the password is already encrypted, modify the
code to:
a. Use the "get_password_encryption_type" function to retrieve the
encryption method.
b. Use the "isPasswordEncrypted" function to check whether the password is
already encrypted.

These changes are mainly to increase code readability and do not change the
underlying functionality.

Attached the patch for community's review.


Thanks & Regards,
Vaishnavi,
Fujitsu Australia.


0001-Removal-of-plaintext-password-reference.patch
Description: Binary data



Re: [HACKERS] check with serial

2017-05-02 Thread Vaishnavi Prabakaran
On Tue, May 2, 2017 at 11:30 PM, Andrew Dunstan <
andrew.duns...@2ndquadrant.com> wrote:

>
> Here's a simple patch that does what I had in mind. It allows providing
> for an arbitrary schedule file in both the check and installcheck
> recipes. The historic behaviour is preserved.
>
>
Hmm, the installcheck recipe with SCHEDULE set to "Parallel" does not honor
the "MAXCONNOPT" setting in the attached patch.

And now, after your patch, do we still need the "installcheck-parallel"
recipe? It seems redundant IMO; please give it a thought.

Documentation changes ("Running the Tests") are also required, as the
documented behavior is changed by this patch.

Best Regards,
Vaishnavi,
Fujitsu Australia.


Re: [HACKERS] vcregress support for single TAP tests

2017-05-01 Thread Vaishnavi Prabakaran
On Mon, May 1, 2017 at 11:01 PM, Andrew Dunstan <
andrew.duns...@2ndquadrant.com> wrote:

>
>
> In the absence of further comments I'm going to apply this and
> back-patch it so we can get a significant improvement in how the
> buildfarm reports results from TAP tests, as well as increased coverage,
> on Windows/MSVC machines.
>
>
I see that vcregress is not processing the second argument passed to it.
For example, "vcregress.bat check serial" runs the regression tests in
parallel mode. Though this issue was present even before this patch, I am
writing here because this patch touches the argument passing to the
subroutine logic. Attached is a fix that works for me.

Thanks & Regards,
Vaishnavi,
Fujitsu Australia.


0001-Argument-passing-to-sub-routine-corrected-in-vcregre.patch
Description: Binary data



Re: [HACKERS] [PostgreSQL 10] default of hot_standby should be "on"?

2017-04-26 Thread Vaishnavi Prabakaran
On Wed, Apr 26, 2017 at 9:52 PM, Magnus Hagander 
wrote:

>
> I wonder if we should also consider changing the standby error message to
> be a WARNING instead of an ERROR. So that if you try to start up a standby
> with hot_standby=on but master with wal_level=replica it would turn into a
> cold standby.
>

Perhaps you mean hot_standby=off here?

Regards,
Vaishnavi
Fujitsu Australia.


[HACKERS] Question about one of the old Autonomous Transaction approach

2017-04-06 Thread Vaishnavi Prabakaran
Hi All,

Regarding the discussion about autonomous transactions in the message ID
below: a long time ago it was stated that having a new structure "Struct
PGAutonomousXACT" was rejected at the PGCon hackers meeting. Does anyone
know why it was rejected? What disadvantage or problem was identified with
that approach?

ca+u5nmkeum4abrqbndlyt5ledektae8rbiyw3977yhmeowq...@mail.gmail.com



I tried to find answers by going through various mails related to
autonomous transactions, with no luck. Any answer, or a hint about where to
look, would be helpful.


Thanks & Regards,
Vaishnavi,
Fujitsu Australia.


Re: [HACKERS] PATCH: Batch/pipelining support for libpq

2017-04-03 Thread Vaishnavi Prabakaran
Hi,
On Tue, Apr 4, 2017 at 7:05 AM, Andres Freund <and...@anarazel.de> wrote:

> On 2017-04-03 14:10:47 +1000, Vaishnavi Prabakaran wrote:
> > > The CF has been extended until April 7 but time is still growing short.
> > > Please respond with a new patch by 2017-04-04 00:00 AoE (UTC-12) or
> this
> > > submission will be marked "Returned with Feedback".
> > >
> > >
> > Thanks for the information, attached the latest patch resolving one
> > compilation warning. And, please discard the test patch as it will be
> > re-implemented later separately.
>
> Hm.  If the tests aren't ready yet, it seems we'll have to move this to
> the next CF.
>
>
Thanks for your review; I will address your review comments and send a
newer version of the patch shortly.
Just quickly: is it not OK to consider only the code patch for this CF,
without the test patch?

Thanks & Regards,
Vaishnavi,
Fujitsu Australia.


Re: [HACKERS] PATCH: Batch/pipelining support for libpq

2017-04-02 Thread Vaishnavi Prabakaran
On Sat, Apr 1, 2017 at 2:03 AM, David Steele <da...@pgmasters.net> wrote:

> Hi,
>
> On 3/30/17 2:12 PM, Daniel Verite wrote:
>
>> Vaishnavi Prabakaran wrote:
>>
>> Hmm, With batch mode, after sending COPY command to server(and server
>>> started processing the query and goes into COPY state) , client does not
>>> immediately read the result , but it keeps sending other queries to the
>>> server. By this time, server already encountered the error
>>> scenario(Receiving different message during COPY state) and sent error
>>> messages
>>>
>>
>> IOW, the test intentionally violates the protocol and then all goes wonky
>> because of that.
>> That's why I was wondering upthread what's it's supposed to test.
>> I mean, regression tests are meant to warn against a desirable behavior
>> being unknowingly changed by new code into an undesirable behavior.
>> Here we have the undesirable behavior to start with.
>> What kind of regression could we fear from that?
>>
>
Yes, I completely agree; demonstrating the undesirable behavior is not
needed, as the documentation gives enough warning to the user.
It has been decided that the test patch will not go in for now, but it will
be re-implemented with psql commands later. During that re-implementation I
will remove this test. Thanks.


>
> The CF has been extended until April 7 but time is still growing short.
> Please respond with a new patch by 2017-04-04 00:00 AoE (UTC-12) or this
> submission will be marked "Returned with Feedback".
>
>
Thanks for the information; attached is the latest patch, which resolves
one compilation warning. Please discard the test patch, as it will be
re-implemented separately later.


Thanks & Regards,
Vaishnavi,
Fujitsu Australia.


0001-Pipelining-batch-support-for-libpq-code-v9.patch
Description: Binary data



Re: [HACKERS] PATCH: Batch/pipelining support for libpq

2017-03-29 Thread Vaishnavi Prabakaran
On Thu, Mar 30, 2017 at 12:08 PM, Michael Paquier <michael.paqu...@gmail.com
> wrote:

> On Wed, Mar 29, 2017 at 12:40 PM, Vaishnavi Prabakaran
> <vaishnaviprabaka...@gmail.com> wrote:
> > Michael Paquier wrote:
> >>Could you as well update src/tools/msvc/vcregress.pl, aka the routine
> >>modulescheck so as this new test is skipped. I am sure that nobody
> >>will scream if this test is not run on Windows, but the buildfarm will
> >>complain if the patch is let in its current state.
> >
> > Thanks for the finding. Hmm, modulescheck will not pick up test_libpq as
> > "subdircheck" will fail for this module(because it does not have
> > "sql"/"expected" folders in it) and hence it will be skipped.
> > But, Modified "Mkvcbuild.pm" to exclude "test_libpq" module, as this test
> > anyways will not be run in Windows and also because it uses linux
> specific
> > APIs(gettimeofday , timersub).
>
> src/port/gettimeofday.c extends things on Windows, and we could
> consider to just drop the timing portion for simplicity (except
> Windows I am not seeing any platform missing timersub). But that's
> just a point of detail. Or we could just drop those tests from the
> final patch, and re-implement them after having some psql-level
> subcommands. That would far better for portability.
>

Yes, agreed; tests using psql meta-commands will also be easier to
understand. That is one of the reasons I separated the patch into two parts,
code and test. So yes, we can drop the test patch for now.


>
> > There are 2 cases tested here:
> >
> > 1. Example for the case - Using COPY command in batch mode will trigger
> > failure. (Specified in documentation)
> > Failure can be seen only when processing the copy command's results. That
> > is, after dispatching COPY command, server goes into COPY state , but the
> > client dispatched another query in batch mode.
> >
> > Below error messages belongs to this case :
> > Error status and message got from server due to COPY command failure are
> :
> > PGRES_FATAL_ERROR ERROR:  unexpected message type 0x50 during COPY from
> > stdin
> > CONTEXT:  COPY batch_demo, line 1
> >
> > message type 0x5a arrived from server while idle
>
> Such messages are really confusing to the user as they refer to the
> protocol going out of sync. I would think that something like "cannot
> process COPY results during a batch processing" would be cleaner.
> Isn't some more error handling necessary in GetCopyStart()?


Hmm, in batch mode, after sending the COPY command to the server (the
server starts processing the query and goes into COPY state), the client
does not immediately read the result but keeps sending other queries to the
server. By this time the server has already encountered the error scenario
(receiving a different message type during COPY state) and has sent error
messages. So when the client starts reading the result, the error messages
sent by the server are already present in the socket. I think handling this
during result processing would amount to masking the server's error message
with a new one, which I don't think is appropriate.



>
>
> Attached the latest code and test patches.
>
> For me the test is still broken:
> ok 1 - testlibpqbatch disallowed_in_batch
> ok 2 - testlibpqbatch simple_batch
> ok 3 - testlibpqbatch multi_batch
> ok 4 - testlibpqbatch batch_abort
> ok 5 - testlibpqbatch timings
> not ok 6 - testlibpqbatch copyfailure
>
> Still broken here. I can see that this patch is having enough
> safeguards in PQbatchBegin() and PQsendQueryStart(), but this issue is
> really pointing out at something...
>
>
I will check this one again with a fresh setup, and I think it should not
block the code patch.

Thanks & Regards,
Vaishnavi
Fujitsu Australia.


Re: [HACKERS] PATCH: Batch/pipelining support for libpq

2017-03-27 Thread Vaishnavi Prabakaran
Thanks Craig and Michael for confirming that "PQsetSingleRowMode" should be
called right after "PQbatchQueueProcess".

Michael Paquier wrote:

> It seems to me that
>this should also be effective only during the fetching of one single
>result set. When the client moves on to the next item in the queue we
>should make necessary again a call to PQsetSingleRowMode().

Yes, the current behavior (with the v6 code patch) is exactly as you
described above: PQsetSingleRowMode() should be called after each
PQbatchQueueProcess() call to set result processing to single-row mode (see
the sketch below).
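For reference, the documented call order looks like this (a sketch using
the patch's API; error handling trimmed):

while (PQbatchQueueProcess(conn))
{
    PGresult   *res;

    if (PQsetSingleRowMode(conn) != 1)
        fprintf(stderr, "PQsetSingleRowMode() failed\n");

    /* PGRES_SINGLE_TUPLE for each row, then a final PGRES_TUPLES_OK */
    while ((res = PQgetResult(conn)) != NULL)
        PQclear(res);
}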

>src/test/modules/Makefile should include test_libpq.

Yes, added test_libpq to the list in Makefile.

Daniel Verite wrote:

>> Please let me know if you think this is not enough but wanted to update
>> section 33.6 also?

>Yes, if the right place to call PQsetSingleRowMode() is immediately
>after PQbatchQueueProcess(), I think we need to update
>"33.6. Retrieving Query Results Row-By-Row"
>with that information, otherwise what it says is just wrong
>when in batch mode.

Yes, I have updated Chapter 33.6 by adding note for batch mode.

>Also, in 33.5, I suggest to not use the "currently executing
>query" as a synonym for the "query currently being processed"
>to avoid any confusion between what the backend is executing
>and a prior query whose results are being collected by the client
>at the same moment.

Yes correct, I modified the words to "query currently being processed" as
suggested.



> One bit of functionality that does not work in batch mode and is left
> as is by the patch is the PQfn() fast path interface and the large object
> functions that use it.
>
> Currently, calling lo_* functions inside a batch will fail with a message
> that depends on whether the internal lo_initialize() has been successfully
> called before.
>
> If it hasn't, PQerrorMessage() will be:
>"Synchronous command execution functions are not allowed in batch mode"
> which is fine, but it comes by happenstance from lo_initialize()
> calling PQexec() to fetch the function OIDs from pg_catalog.pg_proc.
>
> OTOH, if lo_initialize() has succeeded before, a call to a large
> object function will fail with a different message:
>   "connection in wrong state"
> which is emitted by PQfn() based on conn->asyncStatus != PGASYNC_IDLE
>
> Having an unified error message would be more user friendly.
>
>
Thanks for finding this out. Yes, I have added a new check in PQfn() so
that it gives the same error message, "Synchronous command execution
functions are not allowed in batch mode", when called in batch mode.



> Concerning the doc, when saying in 33.5.2:
>  "In batch mode only asynchronous operations are permitted"
> the server-side functions are indeed ruled out, since PQfn() is
> synchronous, but maybe we should be a bit more explicit
> about that?


> Chapter 34.3 (Large Objects / Client Interfaces) could also
> mention the incompatibility with batch mode.
>
>
Updated Section 33.5.2 to be clearer about which functions are allowed and
which are not. Updated Section 34.3 (Large Objects / Client Interfaces) to
let the user know about the incompatibility with batch mode.


Attached is the latest patch, and here is the test run result:
ok 1 - testlibpqbatch disallowed_in_batch
ok 2 - testlibpqbatch simple_batch
ok 3 - testlibpqbatch multi_batch
ok 4 - testlibpqbatch batch_abort
ok 5 - testlibpqbatch timings
ok 6 - testlibpqbatch copyfailure
ok 7 - testlibpqbatch test_singlerowmode
ok
All tests successful.
Files=1, Tests=7,  5 wallclock secs ( 0.01 usr  0.00 sys +  1.79 cusr  0.35
csys =  2.15 CPU)
Result: PASS

Thanks & Regards,
Vaishnavi,
Fujitsu Australia.


0001-Pipelining-batch-support-for-libpq-code-v7.patch
Description: Binary data


0002-Pipelining-batch-support-for-libpq-test-v4.patch
Description: Binary data



Re: [HACKERS] PATCH: Batch/pipelining support for libpq

2017-03-19 Thread Vaishnavi Prabakaran
On Fri, Mar 17, 2017 at 12:37 AM, Daniel Verite <dan...@manitou-mail.org>
wrote:

> Vaishnavi Prabakaran wrote:
>
> > So, attached the alternative fix for this issue.
> > Please share me your thoughts.
>
> I assume you prefer the alternative fix because it's simpler.
>

I would like to add one more reason for this fix: I think
"PQsetSingleRowMode" should be called only when the result is ready to be
processed, and before starting to consume the result, as it is currently
documented:
"To enter single-row mode, call PQsetSingleRowMode immediately after a
successful call of PQsendQuery (or a sibling function). This mode selection
is effective only for the currently executing query. Then call PQgetResult
repeatedly..."


I agree that the first fix (the one you shared) will allow the user to set
single-row mode after PQsendQuery, but it also allows the user to set this
mode at any point of batch processing (not necessarily "immediately after
PQsendQuery"), and "this mode selection is effective only for the currently
executing query" will no longer hold. Please note that I don't see any
problem with this deviation; I just want to point out that the documentation
here needs an update or a note either way.


Before going further, I would like to mention that I have modified the
batch processing documentation (in the v6 code patch) as below:
"To enter single-row mode, call PQsetSingleRowMode immediately after a
successful call of PQbatchQueueProcess. This mode selection is effective
only for the currently executing query. For more information on the use of
PQsetSingleRowMode, refer to Section 33.6, “Retrieving Query Results
Row-By-Row”."

Please let me know if you think this is not enough and whether Section 33.6
should also be updated.



>
> > I would also like to hear Craig's opinion on it before applying this fix
> > to the original patch, just to make sure am not missing anything here.
>
> +1
>
> The main question is whether the predicates enforced
> by PQsetSingleRowMode() apply in batch mode in all cases
> when it's legit to call that function. Two predicates
> that may be problematic are:
> if (conn->asyncStatus != PGASYNC_BUSY)
> return 0;
> and
> if (conn->result)
> return 0;
>
> The general case with batch mode is that, from the doc:
> "The client interleaves result processing with sending batch queries"
>

When batch queries are sent in the middle of result processing, the query
is only appended to the list of queries maintained for batch processing; no
connection attribute affecting the current result processing is changed. So
calling PQsetSingleRowMode in the middle of result processing will fail, as
it tries to set single-row mode for the currently executing query, whose
result processing has already started.


> Note that I've not even tested that here, I've tested
> batching a bunch of queries in a first step and getting the results
> in a second step.
> I am not confident that the above predicates will be true
> in all cases.
>
> Also your alternative fix assumes that we add
> a user-visible exception to PQsetSingleRowMode in batch mode,
> whereby it must not be called as currently documented:
>   "call PQsetSingleRowMode immediately after a successful call of
> PQsendQuery (or a sibling function)"
>
> My gut feeling is that it's not the right direction, I prefer making
> the single-row a per-query attribute internally and keep
> PQsetSingleRowMode's contract unchanged from the
> user's perspective.
>
>
I am going to include the test you shared in the test patch. Please let me
know if you want to cover any more specific cases to gain confidence.

Thanks & Regards,
Vaishnavi,
Fujitsu Australia.


Re: [HACKERS] PATCH: Batch/pipelining support for libpq

2017-03-15 Thread Vaishnavi Prabakaran
On Tue, Mar 14, 2017 at 5:50 PM, Vaishnavi Prabakaran <
vaishnaviprabaka...@gmail.com> wrote:

>
>
> On Tue, Mar 14, 2017 at 4:19 AM, Daniel Verite <dan...@manitou-mail.org>
> wrote:
>
>>
>> I mean the next iteration of the above while statement. Referring
>> to the doc, that would be the "next batch entry":
>>
>>   " To get the result of the first batch entry the client must call
>>PQbatchQueueProcess. It must then call PQgetResult and handle the
>>results until PQgetResult returns null (or would return null if
>>called). The result from the next batch entry may then be retrieved
>>using PQbatchQueueProcess and the cycle repeated"
>>
>> Attached is a bare-bones testcase showing the problem.
>> As it is, it works, retrieving results for three "SELECT 1"
>> in the same batch.  Now in order to use the single-row
>> fetch mode, consider adding this:
>>
>> r = PQsetSingleRowMode(conn);
>> if (r!=1) {
>>   fprintf(stderr, "PQsetSingleRowMode() failed for i=%d\n", i);
>> }
>>
>> When inserted after the call to PQbatchQueueProcess,
>> which is what I understand you're saying works for you,
>> it fails for me when starting to get the results of the 2nd query
>> and after.
>>
>>
>
> Thanks for explaining the issue in detail and yes, I do see the issue
> using your attached test file.
>
> And I think the problem is with the "parseInput" call at the end of
> PQbatchQueueProcess().
> I don't see the need for "parseInput" to cover the scope of
> PQbatchQueueProcess(), also anyways we do it in PQgetResult().
> This function call changes the asyncstatus of connection to
> READY(following command complete message from backend), so eventually
> PQsetSingleRowMode() is failing. So, attached the alternative fix for this
> issue.
>
> Please share me your thoughts.
>
> I would also like to hear Craig's opinion on it before applying this fix
> to the original patch, just to make sure am not missing anything here.
>
>
Attached is the code patch with the fix explained above applied; I am
moving the CF status to "Needs review".

Thanks & Regards,
Vaishnavi,
Fujitsu Australia.


0001-Pipelining-batch-support-for-libpq-code-v6.patch
Description: Binary data


0002-Pipelining-batch-support-for-libpq-test-v3.patch
Description: Binary data



Re: [HACKERS] PATCH: Batch/pipelining support for libpq

2017-03-14 Thread Vaishnavi Prabakaran
On Tue, Mar 14, 2017 at 4:19 AM, Daniel Verite 
wrote:

>
> I mean the next iteration of the above while statement. Referring
> to the doc, that would be the "next batch entry":
>
>   " To get the result of the first batch entry the client must call
>PQbatchQueueProcess. It must then call PQgetResult and handle the
>results until PQgetResult returns null (or would return null if
>called). The result from the next batch entry may then be retrieved
>using PQbatchQueueProcess and the cycle repeated"
>
> Attached is a bare-bones testcase showing the problem.
> As it is, it works, retrieving results for three "SELECT 1"
> in the same batch.  Now in order to use the single-row
> fetch mode, consider adding this:
>
> r = PQsetSingleRowMode(conn);
> if (r!=1) {
>   fprintf(stderr, "PQsetSingleRowMode() failed for i=%d\n", i);
> }
>
> When inserted after the call to PQbatchQueueProcess,
> which is what I understand you're saying works for you,
> it fails for me when starting to get the results of the 2nd query
> and after.
>
>

Thanks for explaining the issue in detail and yes, I do see the issue using
your attached test file.

I think the problem is the "parseInput" call at the end of
PQbatchQueueProcess(). I don't see the need for "parseInput" within the
scope of PQbatchQueueProcess(); in any case we do it in PQgetResult(). This
call changes the connection's asyncStatus to READY (following the
command-complete message from the backend), so PQsetSingleRowMode()
eventually fails. So, attached is an alternative fix for this issue.
Please share your thoughts.

I would also like to hear Craig's opinion on it before applying this fix to
the original patch, just to make sure I am not missing anything here.

Thanks & Regards,
Vaishnavi,
Fujitsu Australia.


0002-single-row-mode-fix.patch
Description: Binary data



Re: [HACKERS] PATCH: Batch/pipelining support for libpq

2017-03-12 Thread Vaishnavi Prabakaran
On Sat, Mar 11, 2017 at 12:52 AM, Daniel Verite 
wrote:

>   Hi,
>
> I notice that PQsetSingleRowMode() doesn't work when getting batch results.
>
> The function is documented as:
> " int PQsetSingleRowMode(PGconn *conn);
>
>   This function can only be called immediately after PQsendQuery or one
>   of its sibling functions, before any other operation on the connection
>   such as PQconsumeInput or PQgetResult"
>
> But PQbatchQueueProcess() unconditionally clears conn->singleRowMode,
> so whatever it was when sending the query gets lost, and besides
> other queries might have been submitted in the meantime.
>
> Also if trying to set that mode when fetching like this
>
>  while (PQbatchQueueProcess(conn)) {
>r = PQsetSingleRowMode(conn);
>if (r!=1) {
>   fprintf(stderr, "PQsetSingleRowMode() failed");
>}
>..
>
> it might work the first time, but on the next iterations, conn->asyncStatus
> might be PGASYNC_READY, which is a failure condition for
> PQsetSingleRowMode(), so that won't do.
>


Thanks for investigating the problem. Could you kindly explain which "next
iteration" you mean here? I don't see any problem with the following
sequence of calls: PQbatchQueueProcess(), PQsetSingleRowMode(),
PQgetResult(). Am I missing anything?
Please note that it is a MUST to call PQgetResult immediately after
PQbatchQueueProcess(), as documented.



>
> ISTM that the simplest fix would be that when in batch mode,
> PQsetSingleRowMode() should register that the last submitted query
> does request that mode, and when later QbatchQueueProcess dequeues
> the batch entry for the corresponding query, this flag would be popped off
> and set as the current mode.
>
> Please find attached the suggested fix, against the v5 of the patch.
>

Before going with this fix, I would like you to consider the option of
asking batch processing users (via documentation) to set single-row mode
before calling PQgetResult().
Either way, we need to fix the documentation so that users know how they
can activate single-row mode while using batch processing.
Let me know your thoughts.

Best Regards,
Vaishnavi,
Fujitsu Australia.


Re: [HACKERS] PATCH: Batch/pipelining support for libpq

2017-03-07 Thread Vaishnavi Prabakaran
On Wed, Mar 8, 2017 at 3:52 AM, Daniel Verite <dan...@manitou-mail.org>
wrote:

> Vaishnavi Prabakaran wrote:
>
> > Yes, I have created a new patch entry into the commitfest 2017-03 and
> > attached the latest patch with this e-mail.
>
> Please find attached a companion patch implementing the batch API in
> pgbench, exposed as \beginbatch and \endbatch meta-commands
> (without documentation).
>
> The idea for now is to make it easier to exercise the API and test
> how batching performs. I guess I'll submit the patch separately in
> a future CF, depending on when/if the libpq patch goes in.
>
>

Thanks for the companion patch and here are some comments:

1. I see the check below is used to verify that the connection is not in
batch mode:

if (PQbatchStatus(st->con) != PQBATCH_MODE_ON)

It is better to use if (PQbatchStatus(st->con) == PQBATCH_MODE_OFF) for
this verification. The reason is that there is one more batch-mode state,
PQBATCH_MODE_ABORTED; if the batch is in aborted status, the current check
would still conclude that the connection is not in batch mode.

In the same way, the "if (PQbatchStatus(st->con) == PQBATCH_MODE_ON)" check
is better written as "PQbatchStatus(st->con) != PQBATCH_MODE_OFF".


2.  @@ -2207,7 +2227,47 @@ doCustom(TState *thread, CState *st, StatsData
*agg)
+ if (PQbatchStatus(st->con) != PQBATCH_MODE_OFF)
+ {
+ commandFailed(st, "already in batch mode");
+ break;
+ }

This is not required, as PQbatchBegin() below already performs this
verification internally.

+ PQbatchBegin(st->con);


3. It is better to check the return value of PQbatchBegin() rather than
assuming success; for example, PQbatchBegin() will return false if the
connection is in one of the COPY IN/OUT/BOTH states (see the sketch below).
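Something like this, as a sketch (st and commandFailed are the pgbench
internals already used in the patch):

if (!PQbatchBegin(st->con))
{
    commandFailed(st, "failed to enter batch mode");
    break;
}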



> While developing this, I noted a few things with 0001-v4:
>
> 1. lack of initialization for count in PQbatchQueueCount.
> Trivial fix:
>
> --- a/src/interfaces/libpq/fe-exec.c
> +++ b/src/interfaces/libpq/fe-exec.c
> @@ -1874,7 +1874,7 @@ PQisBusy(PGconn *conn)
>  int
>  PQbatchQueueCount(PGconn *conn)
>  {
> -   int count;
> +   int count = 0;
> PGcommandQueueEntry *entry;
>
>
Thanks for your review; yes, corrected.


> 2. misleading error message in PQexecStart. It gets called by a few other
> functions than PQexec, such as PQprepare. As I understand it, the intent
> here is to forbid the synchronous functions in batch mode, so this error
> message should not single out PQexec.
>
> @@ -1932,6 +2425,13 @@ PQexecStart(PGconn *conn)
> if (!conn)
> return false;
>
> +   if (conn->asyncStatus == PGASYNC_QUEUED || conn->batch_status != PQBATCH_MODE_OFF)
> +   {
> +       printfPQExpBuffer(&conn->errorMessage,
> +           libpq_gettext("cannot PQexec in batch mode\n"));
> +       return false;
> +   }
> +
>
>

Hmm, this error message follows the pattern of the other error messages in
the same function, e.g. "PQexec not allowed during COPY BOTH". But anyway,
I have modified the error message to be more generic.



> 3. In relation to #2, PQsendQuery() is not forbidden in batch mode
> although I don't think it can work with it, as it's based on the old
> protocol.
>
>
>
It is actually forbidden; you will see the error message "cannot
PQsendQuery in batch mode, use PQsendQueryParams" when you use
PQsendQuery(). A minimal example of the replacement call is sketched below.
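For reference, a minimal batch-mode replacement for
PQsendQuery(conn, "SELECT 1") looks like this:

PQsendQueryParams(conn, "SELECT 1",
                  0,        /* nParams */
                  NULL,     /* paramTypes */
                  NULL,     /* paramValues */
                  NULL,     /* paramLengths */
                  NULL,     /* paramFormats */
                  0);       /* resultFormat: text */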
Attached the updated patch.

Thanks & Regards,
Vaishnavi
Fujitsu Australia.


0001-Pipelining-batch-support-for-libpq-code-v5.patch
Description: Binary data


0002-Pipelining-batch-support-for-libpq-test-v3.patch
Description: Binary data
