On December 2014 17:31, Amit Kapila wrote:

>I suggest rather than removing, edit the comment to indicate
>the idea behind code at that place.

>Okay, I think this part of code is somewhat similar to what
>is done in pg_dump/parallel.c with some differences related
>to handling of inAbort.  One thing I have noticed here which
>could lead to a problem is that caller of select_loop() function
>assumes that return value is less than zero only if there is a
>cancel request which I think is wrong, because select system
>call could also return -1 in case of error.  I am referring to the
>below code in that context:
+ if (i < 0)
+ {
+ /*
+  * This can only happen if user has sent the cancel request using
+  * Ctrl+C, Cancel is handled by 0th slot, so fetch the error result.
+  */
+ GetQueryResult(pSlot[0].connection, dbname, progname,
+    completedb);

Now for the abort case I am using a special error code; in any other case we
will assert. This behavior is the same as in pg_dump.

>Hmm, theoretically I think the new behaviour could lead to more I/O in
>certain cases as compared to the existing behaviour.  The reason for more
>I/O is that in the new behaviour, while doing Analyze for a particular
>table at different targets, in between it has the Analyze of a different
>table as well, so the pages in shared buffers or OS cache for a particular
>table need to be reloaded again for a new target, whereas currently it
>will do all stages of Analyze for a particular table in one go, which
>means that each stage of Analyze could benefit from the pages of the
>table loaded by the previous stage.  If you agree, then we should try to
>avoid this change in new

I will work on this comment and post the updated patch.

I will move this patch to the latest commitfest.


Attachment: vacuumdb_parallel_v20.patch
Description: vacuumdb_parallel_v20.patch

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)