Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-06-07 Thread Kyotaro HORIGUCHI
At Tue, 07 Jun 2016 15:38:04 -0400, Tom Lane  wrote in 
<24181.1465328...@sss.pgh.pa.us>
> Kyotaro HORIGUCHI  writes:
> > At Mon, 06 Jun 2016 11:12:14 -0400, Tom Lane  wrote in 
> > <17504.1465225...@sss.pgh.pa.us>
> >> Uh, what?  PQcancel is very carefully coded so that it's safe to use
> >> in a signal handler.  If it's doing mallocs someplace, that would be
> >> surprising.
> 
> > PQcancel is designed to run in a signal handler on *Linux*, but
> > the discussion here is that the equivalents of send/recv and
> > similar calls on Windows can be blocked by the TerminateThread'ed
> > thread via the heap lock.
> 
> What's your evidence for that?  I'd expect those to be kernel calls,
> or whatever the equivalent concept is on Windows.  If they are not,
> and in particular if they do malloc's, I think we have got problems
> in many other contexts besides parallel dump.

Well, I found that I misunderstood your following sentence.

> Your original suggestion to use write_msg would end up going
> through fprintf, which might well use malloc internally.  (It's
> possible that Windows' version of write() could too, I suppose,
> but that's probably as low-level as we are going to get.)

I took the sentence enclosed in parentheses to mean that write() or
other syscalls may be blocked by the heap lock on Windows. But it
actually means "we have no choice but to regard Windows' version of
write() as a kernel call, that is, as heap-lock safe". That seems
quite natural and I fully agree with the judgment.

Consequently, I agree that we should just use write(), not
write_msg(), and consider the combination of PQcancel and
TerminateThread safe on Windows.
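
For illustration, a minimal sketch of what "just use write()" means in
a handler context (a hypothetical helper, not part of any posted patch):

#include <string.h>
#include <unistd.h>

/*
 * Hypothetical sketch: report from a handler context using only write().
 * fprintf (and therefore write_msg) may malloc internally, which can
 * deadlock on the heap lock if a thread was TerminateThread'ed while
 * allocating; a bare write() to stderr avoids that.  (On Windows the
 * equivalent would be _write() from <io.h>.)
 */
static void
handler_report(const char *msg)
{
	/* Best effort; there is no sane recovery from a failed write here. */
	(void) write(2, msg, strlen(msg));
}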

Sorry for my confusion.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center






Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-06-07 Thread Tom Lane
Kyotaro HORIGUCHI  writes:
> At Mon, 06 Jun 2016 11:12:14 -0400, Tom Lane  wrote in 
> <17504.1465225...@sss.pgh.pa.us>
>> Uh, what?  PQcancel is very carefully coded so that it's safe to use
>> in a signal handler.  If it's doing mallocs someplace, that would be
>> surprising.

> PQcancel is designed to run in a signal handler on *Linux*, but
> the discussion here is that the equivalents of send/recv and
> similar calls on Windows can be blocked by the TerminateThread'ed
> thread via the heap lock.

What's your evidence for that?  I'd expect those to be kernel calls,
or whatever the equivalent concept is on Windows.  If they are not,
and in particular if they do malloc's, I think we have got problems
in many other contexts besides parallel dump.

regards, tom lane




Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-06-06 Thread Kyotaro HORIGUCHI
At Mon, 06 Jun 2016 11:12:14 -0400, Tom Lane  wrote in 
<17504.1465225...@sss.pgh.pa.us>
> Kyotaro HORIGUCHI  writes:
> > At Fri, 03 Jun 2016 09:44:30 -0400, Tom Lane  wrote in 
> > <11515.1464961...@sss.pgh.pa.us>
> >> I think that this one:
> >>> If the target thread is allocating memory from the heap, the heap
> >>> lock will not be released.
> >> is potentially a hazard, which is why I made sure to use write_stderr
> >> later on in the console interrupt handler.  Your original suggestion
> >> to use write_msg would end up going through fprintf, which might well
> >> use malloc internally.  (It's possible that Windows' version of write()
> >> could too, I suppose, but that's probably as low-level as we are
> >> going to get.)
> 
> > I have to admit that I forgot about the possible malloc's, and
> > PQcancel() can be blocked for the same reason.
> 
> Uh, what?  PQcancel is very carefully coded so that it's safe to use
> in a signal handler.  If it's doing mallocs someplace, that would be
> surprising.

PQcancel is designed to run in a signal handler on *Linux*, but
the discussion here is that the equivalents of send/recv and
similar calls on Windows can be blocked by the TerminateThread'ed
thread via the heap lock.

> > If the issue to be settled here is the unwanted error messages,
> > we could set a flag that instructs write_msg to stay silent. Anyway,
> > the workers should already be dead by that time, so discarding any
> > error messages doesn't matter.
> > What do you think about this?
> 
> This is really ugly and I'm unconvinced that it fixes anything.
> write_msg is hardly the only place in a worker thread that might
> be doing malloc's; moreover, preventing workers from entering it
> after we start a shutdown does nothing for workers that might be
> in it already.

Yeah, it's ugly. But if we assume write() can be blocked, the
similar system calls used within PQcancel can be blocked too. If
we don't assume so, using write() instead of write_msg() would do.

The problem, I think, is that we don't have (or they don't offer?)
enough knowledge about the internals of the Windows APIs.


regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center






Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-06-06 Thread Tom Lane
Kyotaro HORIGUCHI  writes:
> At Fri, 03 Jun 2016 09:44:30 -0400, Tom Lane  wrote in 
> <11515.1464961...@sss.pgh.pa.us>
>> I think that this one:
>>> If the target thread is allocating memory from the heap, the heap
>>> lock will not be released.
>> is potentially a hazard, which is why I made sure to use write_stderr
>> later on in the console interrupt handler.  Your original suggestion
>> to use write_msg would end up going through fprintf, which might well
>> use malloc internally.  (It's possible that Windows' version of write()
>> could too, I suppose, but that's probably as low-level as we are
>> going to get.)

> I have to admit that I forgot about the possible malloc's, and
> PQcancel() can be blocked for the same reason.

Uh, what?  PQcancel is very carefully coded so that it's safe to use
in a signal handler.  If it's doing mallocs someplace, that would be
surprising.

> If the issue to be settled here is the unwanted error messages,
> we could set a flag that instructs write_msg to stay silent. Anyway,
> the workers should already be dead by that time, so discarding any
> error messages doesn't matter.
> What do you think about this?

This is really ugly and I'm unconvinced that it fixes anything.
write_msg is hardly the only place in a worker thread that might
be doing malloc's; moreover, preventing workers from entering it
after we start a shutdown does nothing for workers that might be
in it already.

regards, tom lane




Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-06-06 Thread Kyotaro HORIGUCHI
How about silencing the workers on termination?

# Building on Windows (with VC?) is very time-consuming...

At Fri, 03 Jun 2016 09:44:30 -0400, Tom Lane  wrote in 
<11515.1464961...@sss.pgh.pa.us>
> Kyotaro HORIGUCHI  writes:
> > For sure, any of the "dangers" of TerminateThread don't matter
> > for this case.
> 
> I think that this one:
> 
> >> If the target thread is allocating memory from the heap, the heap
> >> lock will not be released.
> 
> is potentially a hazard, which is why I made sure to use write_stderr
> later on in the console interrupt handler.  Your original suggestion
> to use write_msg would end up going through fprintf, which might well
> use malloc internally.  (It's possible that Windows' version of write()
> could too, I suppose, but that's probably as low-level as we are
> going to get.)

I have to admit that I forgot about the possible malloc's, and
PQcancel() can be blocked for the same reason. What is worse,
even if we managed to avoid this particular disaster, the MSDN
page presents those problems only as examples. So if we don't
allow any possible unknown blocking on Ctrl-C termination and
want to forcibly terminate workers, we might have to use processes
instead of threads for the workers. (This might reduce the
WIN32-specific differences..)

We might also be able to force immediate process termination on a
second Ctrl-C, but this seems to lead to other issues.

If the issue to be settled here is the unwanted error messages,
we could set a flag that instructs write_msg to stay silent. Anyway,
the workers should already be dead by that time, so discarding any
error messages doesn't matter.

What do you think about this?
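
For concreteness, a sketch of how write_msg could consult such a flag
(hypothetical: the patch below introduces is_terminating and
am_main_thread(), but this hunk is not part of it, and the output
format is simplified):

#include <stdarg.h>
#include <stdbool.h>
#include <stdio.h>

extern bool is_terminating;			/* from the patch below */
extern bool am_main_thread(void);	/* from the patch below */

/*
 * Sketch only: after Ctrl-C, messages from worker threads are simply
 * discarded; only the main (or console handler) thread may still report.
 */
void
write_msg(const char *modulename, const char *fmt, ...)
{
	va_list		ap;

	if (is_terminating && !am_main_thread())
		return;

	if (modulename)
		fprintf(stderr, "pg_dump: [%s] ", modulename);
	va_start(ap, fmt);
	vfprintf(stderr, fmt, ap);
	va_end(ap);
}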


The timeout for select() still seems to be needless.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center

diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index fb48a02..b74a396 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -147,7 +147,11 @@ static DWORD tls_index;
 
 /* globally visible variables (needed by exit_nicely) */
 bool		parallel_init_done = false;
-DWORD		mainThreadId;
+bool		is_terminating = false;
+static DWORD		mainThreadId;
+static bool			handler_active = false;
+static DWORD		handlerThreadId;
+
 #endif   /* WIN32 */
 
 static const char *modulename = gettext_noop("parallel archiver");
@@ -180,9 +184,26 @@ static char *readMessageFromPipe(int fd);
 
 
 /*
- * Shutdown callback to clean up socket access
+ * Return true if I am the main thread
+ *
+ * This function regards the console handler thread as the main thread. We
+ * need a separate boolean for validity of the handlerThreadId since an
+ * invalid value for thread id is not defined.
  */
 #ifdef WIN32
+bool
+am_main_thread(void)
+{
+  DWORD current_thread_id = GetCurrentThreadId();
+
+  return (mainThreadId == current_thread_id ||
+		  (handler_active &&
+		   handlerThreadId == current_thread_id));
+}
+
+/*
+ * Shutdown callback to clean up socket access
+ */
 static void
 shutdown_parallel_dump_utils(int code, void *unused)
 {
@@ -190,7 +211,7 @@ shutdown_parallel_dump_utils(int code, void *unused)
 	if (mainThreadId == GetCurrentThreadId())
 		WSACleanup();
 }
-#endif
+#endif /* ifdef WIN32 */
 
 /*
  * Initialize parallel dump support --- should be called early in process
@@ -390,6 +411,8 @@ ShutdownWorkersHard(ParallelState *pstate)
 	 * On Windows, send query cancels directly to the workers' backends.  Use
 	 * a critical section to ensure worker threads don't change state.
 	 */
+	is_terminating = true; /* Silence workers */
+
	EnterCriticalSection(&signal_info_lock);
 	for (i = 0; i < pstate->numWorkers; i++)
 	{
@@ -602,6 +625,15 @@ consoleHandler(DWORD dwCtrlType)
 	if (dwCtrlType == CTRL_C_EVENT ||
 		dwCtrlType == CTRL_BREAK_EVENT)
 	{
+		/*
+		 * We don't forcibly terminate workers. Instead, these
+		 * variables silence them from now on. This handler thread
+		 * is regarded as the main thread.
+		 */
+		is_terminating = true;
+		handlerThreadId = GetCurrentThreadId();
+		handler_active = true;
+
 		/* Critical section prevents changing data we look at here */
		EnterCriticalSection(&signal_info_lock);
 
@@ -642,6 +674,8 @@ consoleHandler(DWORD dwCtrlType)
 		write_msg(NULL, "terminated by user\n");
 	}
 
+	handler_active = false;
+
 	/* Always return FALSE to allow signal handling to continue */
 	return FALSE;
 }
diff --git a/src/bin/pg_dump/parallel.h b/src/bin/pg_dump/parallel.h
index 21739ca..59fba33 100644
--- a/src/bin/pg_dump/parallel.h
+++ b/src/bin/pg_dump/parallel.h
@@ -64,11 +64,11 @@ typedef struct ParallelState
 
 #ifdef WIN32
 extern bool parallel_init_done;
-extern DWORD mainThreadId;
+extern bool is_terminating;
 #endif
 
 extern void init_parallel_dump_utils(void);
-
+extern bool am_main_thread(void);
 extern int	GetIdleWorker(ParallelState *pstate);
 extern bool IsEveryWorkerIdle(ParallelState *pstate);
 extern void ListenToWorkers(ArchiveHandle *AH, ParallelState 

Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-06-03 Thread Tom Lane
Kyotaro HORIGUCHI  writes:
> For sure, any of the "dangers" of TerminateThread don't matter
> for this case.

I think that this one:

>> If the target thread is allocating memory from the heap, the heap
>> lock will not be released.

is potentially a hazard, which is why I made sure to use write_stderr
later on in the console interrupt handler.  Your original suggestion
to use write_msg would end up going through fprintf, which might well
use malloc internally.  (It's possible that Windows' version of write()
could too, I suppose, but that's probably as low-level as we are
going to get.)

regards, tom lane




Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-06-03 Thread Kyotaro HORIGUCHI
At Thu, 2 Jun 2016 12:17:11 -0400, Alvaro Herrera  
wrote in <20160602161711.GA239156@alvherre.pgsql>
> Tom Lane wrote:
> > Kyotaro HORIGUCHI  writes:
> > > Apart from the invalid snapshot problem, I looked the patch
> > > previously mentioned mainly for Windows.
> > 
> > Thanks for looking!
> > 
> > > Even though the threads started by beginthread cannot be
> > > terminated cleanly from outside, but the whole process will soon
> > > terminate anyway, so we could use TreminateThread. This seems
> > > working. (Attached patch)
> > 
> > Seems reasonable to me; I was unhappy about the lack of any direct
> > equivalent to the child SIGTERMs that the Unix code does.

For sure, any of the "dangers" of TerminateThread don't matter
for this case.

https://msdn.microsoft.com/en-us/library/windows/desktop/ms686717(v=vs.85).aspx

> If the target thread owns a critical section, the critical
> section will not be released.
> 
> If the target thread is allocating memory from the heap, the heap
> lock will not be released.
> 
> If the target thread is executing certain kernel32 calls when it
> is terminated, the kernel32 state for the thread's process could
> be inconsistent.
> 
> If the target thread is manipulating the global state of a shared
> DLL, the state of the DLL could be destroyed, affecting other
> users of the DLL.



> Given this testing, it's clear that the timeout on select() is useless;
> we could get rid of it in vacuumdb.c too.  I'll post a patch later.

Agreed. The command pipes may be left in blocking mode in that case.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center






Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-06-02 Thread Alvaro Herrera
Tom Lane wrote:
> Kyotaro HORIGUCHI  writes:
> > Apart from the invalid snapshot problem, I looked the patch
> > previously mentioned mainly for Windows.
> 
> Thanks for looking!
> 
> > The threads started by beginthread cannot be terminated cleanly
> > from outside, but the whole process will soon terminate anyway, so
> > we could use TerminateThread. This seems to work. (Attached patch)
> 
> Seems reasonable to me; I was unhappy about the lack of any direct
> equivalent to the child SIGTERMs that the Unix code does.

Given this testing, it's clear that the timeout on select() is useless;
we could get rid of it in vacuumdb.c too.  I'll post a patch later.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-06-02 Thread Tom Lane
Kyotaro HORIGUCHI  writes:
> Apart from the invalid snapshot problem, I looked the patch
> previously mentioned mainly for Windows.

Thanks for looking!

> The threads started by beginthread cannot be terminated cleanly
> from outside, but the whole process will soon terminate anyway, so
> we could use TerminateThread. This seems to work. (Attached patch)

Seems reasonable to me; I was unhappy about the lack of any direct
equivalent to the child SIGTERMs that the Unix code does.

> Is there any reason for the name "ourAH" not to be "myAH"?

Don't much care, I'll change it.

> setup_cancel_handler looks somewhat bizarre. It eventually works
> only for the main process/thread and does nothing for workers. It
> would be enough to run it once before forking in ParallelBackupStart,
> and that would make handler_set unnecessary.

No, because we also want it to work in non-parallel cases.  As coded,
we'll establish the handler whenever set_archive_cancel_info is first
called, which will be in the first ConnectDatabase() call.  It would
be possible to do that someplace else maybe, but it would require
more code changes than just attaching it to set_archive_cancel_info.

> In EndDBCopyMode, the result of PQgetResult is abandoned. This
> can leak memory, and such usage is not seen elsewhere in the
> source tree. Shouldn't we keep the result and PQclear it? (Mainly
> as a convention, not for any practical benefit.)

There should never be any non-null result, so that seems like a waste
of code to me.
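
(For reference, the drain-and-clear convention in question looks like
this minimal sketch, not a quote of EndDBCopyMode; per the above, the
loop body would simply never run there:)

#include <libpq-fe.h>

/*
 * Sketch of the conventional idiom: PQgetResult returns NULL once no
 * more results are pending, and each non-NULL result must be freed
 * with PQclear to avoid a leak.
 */
static void
drain_results(PGconn *conn)
{
	PGresult   *res;

	while ((res = PQgetResult(conn)) != NULL)
		PQclear(res);
}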

regards, tom lane




Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-06-02 Thread Kyotaro HORIGUCHI
Apart from the invalid snapshot problem, I looked the patch
previously mentioned mainly for Windows.

At Tue, 31 May 2016 12:29:50 -0400, Tom Lane  wrote in 
<7445.1464712...@sss.pgh.pa.us>
> In the patch I posted yesterday, I reversed the order of those two
> steps, which should fix this problem in most scenarios:
> https://www.postgresql.org/message-id/7005.1464657...@sss.pgh.pa.us

After entering Ctrl-C while parallel pg_dump was running, I saw on my
console a repetition of the following error message, which is not
seen on Linux thanks to the forcible termination of worker processes
in sigTermHandler().

> pg_dump: Dumping the contents of table "t" failed: PQgetResult() failed.
> pg_dump: Error message from server: ERROR:  canceling statement due to user 
> request
> pg_dump: The command was: COPY public.t (a) TO stdout;

We could provide a global flag (wantAbort?) that triggers the
canceling of queries, but that needs checks for it added
everywhere.

The threads started by beginthread cannot be terminated cleanly
from outside, but the whole process will soon terminate anyway, so
we could use TerminateThread. This seems to work. (Attached patch)
We might also be allowed to simply exit() in consoleHandler().


Other questions follow.

Is there any reason for the name "ourAH" not to be "myAH"?

setup_cancel_handler looks somewhat bizarre. It eventually works
only for the main process/thread and does nothing for workers. It
would be enough to run it once before forking in ParallelBackupStart,
and that would make handler_set unnecessary.

In EndDBCopyMode, the result of PQgetResult is abandoned. This
can leak memory, and such usage is not seen elsewhere in the
source tree. Shouldn't we keep the result and PQclear it? (Mainly
as a convention, not for any practical benefit.)

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index cf97d9c..fb48a02 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -618,6 +618,12 @@ consoleHandler(DWORD dwCtrlType)
 			{
 ArchiveHandle *AH = signal_info.pstate->parallelSlot[i].args->AH;
 
+/*
+ * Using TerminateThread here may leave some resources leaked,
+ * but this whole process will soon die.
+ */
+TerminateThread(signal_info.pstate->parallelSlot[i].hThread, 0);
+
 if (AH != NULL && AH->connCancel != NULL)
 	(void) PQcancel(AH->connCancel, errbuf, sizeof(errbuf));
 			}
@@ -632,6 +638,8 @@ consoleHandler(DWORD dwCtrlType)
 			errbuf, sizeof(errbuf));
 
		LeaveCriticalSection(&signal_info_lock);
+
+		write_msg(NULL, "terminated by user\n");
 	}
 
 	/* Always return FALSE to allow signal handling to continue */



Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-06-01 Thread Kyotaro HORIGUCHI
At Tue, 31 May 2016 12:29:50 -0400, Tom Lane  wrote in 
<7445.1464712...@sss.pgh.pa.us>
> Kyotaro HORIGUCHI  writes:
> > At Fri, 27 May 2016 13:20:20 -0400, Tom Lane  wrote in 
> > <14603.1464369...@sss.pgh.pa.us>
> >> Kyotaro HORIGUCHI  writes:
> >>> By the way, the reason for the "invalid snapshot identifier" is
> >>> that some worker threads try to use it after the first worker's
> >>> connection closed.
> 
> >> ... BTW, I don't quite see what the issue is there.
> 
> > The master session died from the lack of libz and the failure of
> > compressLevel propagation, which has already been fixed. Some of
> > the children that started transactions after the master's death
> > will get the error.
> 
> I don't think I believe that theory, because it would require the master
> to not notice the lack of libz before it launches worker processes, but
> instead while the workers are working.

Before cae2bb1, the master actually *didn't* notice the lack of
libz until it launched worker processes. So the current master
doesn't suffer the problem, but it is not desirable that the sudden
death of a child, for whatever reason, causes this kind of behavior.

>  But AFAICS, while there are worker
> processes open, the master does nothing except wait for workers and
> dispatch new jobs to them; it does no database work of its own.  So the
> libz-isn't-there error has to have occurred in one of the workers.

Yes, the first-commanded worker dies from that, then the master
disconnects its connection, which owns the snapshot, before
terminating any other workers. It occurs with the current master
(9ee56df) minus cae2bb1, configured with --without-zlib.

> > If we want to prevent it completely, one solution could be for
> > non-master children to explicitly wait for the master to arrive at
> > the "safe" state before starting their transactions. But I suppose
> > that is not needed here.
> 
> Actually, I believe the problem is in archive_close_connection, around
> line 295 in HEAD: once the master realizes that one child has failed,
> it first closes its own database connection and only second tries to kill
> the remaining children.  So there's a race condition wherein remaining
> children have time to see the missing-snapshot error.

Agreed.

> In the patch I posted yesterday, I reversed the order of those two
> steps, which should fix this problem in most scenarios:
> https://www.postgresql.org/message-id/7005.1464657...@sss.pgh.pa.us

Yeah, just transposing DisconnectDatabase and ShutdownWorkersHard
in archive_close_connection fixed the problem.
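
For clarity, the shape of the fix, as a sketch based on this discussion
(names as in parallel.c; the real archive_close_connection has more
detail):

/*
 * Sketch of the reordered shutdown callback.  The master must shut the
 * workers down *before* closing its own connection, because closing
 * the master connection invalidates the exported snapshot that the
 * workers' SET TRANSACTION SNAPSHOT commands rely on.
 */
static void
archive_close_connection(int code, void *arg)
{
	ShutdownInformation *si = (ShutdownInformation *) arg;

	if (si->pstate)
	{
		/* First stop the workers... */
		ShutdownWorkersHard(si->pstate);
		/* ...and only then disconnect the master. */
		if (si->AHX)
			DisconnectDatabase(si->AHX);
	}
	else if (si->AHX)
		DisconnectDatabase(si->AHX);
}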

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center






Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-05-31 Thread Tom Lane
Kyotaro HORIGUCHI  writes:
> At Fri, 27 May 2016 13:20:20 -0400, Tom Lane  wrote in 
> <14603.1464369...@sss.pgh.pa.us>
>> Kyotaro HORIGUCHI  writes:
>>> By the way, the reason for the "invalid snapshot identifier" is
>>> that some worker threads try to use it after the first worker's
>>> connection closed.

>> ... BTW, I don't quite see what the issue is there.

> The master session died from the lack of libz and the failure of
> compressLevel propagation, which has already been fixed. Some of
> the children that started transactions after the master's death
> will get the error.

I don't think I believe that theory, because it would require the master
to not notice the lack of libz before it launches worker processes, but
instead while the workers are working.  But AFAICS, while there are worker
processes open, the master does nothing except wait for workers and
dispatch new jobs to them; it does no database work of its own.  So the
libz-isn't-there error has to have occurred in one of the workers.

> If we want to prevent it completely, one solution could be for
> non-master children to explicitly wait for the master to arrive at
> the "safe" state before starting their transactions. But I suppose
> that is not needed here.

Actually, I believe the problem is in archive_close_connection, around
line 295 in HEAD: once the master realizes that one child has failed,
it first closes its own database connection and only second tries to kill
the remaining children.  So there's a race condition wherein remaining
children have time to see the missing-snapshot error.

In the patch I posted yesterday, I reversed the order of those two
steps, which should fix this problem in most scenarios:
https://www.postgresql.org/message-id/7005.1464657...@sss.pgh.pa.us

regards, tom lane




Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-05-31 Thread Kyotaro HORIGUCHI
Hello,

At Fri, 27 May 2016 13:20:20 -0400, Tom Lane  wrote in 
<14603.1464369...@sss.pgh.pa.us>
> Kyotaro HORIGUCHI  writes:
> By the way, the reason for the "invalid snapshot identifier" is
> that some worker threads try to use it after the first worker's
> connection closed.
> 
> ... BTW, I don't quite see what the issue is there.  The snapshot is
> exported from the master session, so errors in worker sessions should not
> cause such failures in other workers.  And I don't see any such failure
> when setting up a scenario that will cause a worker to fail on Linux.
> The "invalid snapshot identifier" bleats would make sense if you had
> gotten a server-side error (and transaction abort) in the master session,
> but I don't see any evidence that that happened in that example.  Might be
> worth seeing if that's reproducible.

The master session died from the lack of libz and the failure of
compressLevel propagation, which has already been fixed. Some of
the children that started transactions after the master's death
will get the error.

Similarly, a sudden close of the master child's session very early
in its transaction could cause the same symptom, but that seems
unlikely if the master reliably arrives at the command-waiting, or
"safe", state.

If we want to prevent it completely, one solution could be for
non-master children to explicitly wait for the master to arrive at
the "safe" state before starting their transactions. But I suppose
that is not needed here.

Does this make sense?

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center






Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-05-28 Thread Tom Lane
Amit Kapila  writes:
> On Sat, May 28, 2016 at 5:06 AM, Tom Lane  wrote:
>> It looks like the vacuumdb.c version of this code actually is tied
>> into an interrupt handler, but whoever copied it for parallel.c just
>> ripped out the CancelRequested checks, making the looping behavior
>> pretty useless.

> It seems to me that the CancelRequested checks were introduced in
> vacuumdb.c as part of commit a1792320, while parallel.c's version of
> select_loop exists from commit 9e257a18, which was committed earlier.

Huh, interesting.  I wonder how parallel.c's select_loop got to be like
that then?  The git history offers no clue: it is essentially the same as
HEAD as far back as the initial commit of parallel.c.  It certainly looks
like someone intended to introduce a cancel check and never did, or had
one and took it out without simplifying the rest of the logic.

Anyway, AFAICS the time limit on select() wait is completely useless in
the code as it stands; but we'll likely want to add a cancel check there,
so ripping it out wouldn't be a good plan.  I'll change the added comment.

regards, tom lane




Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-05-27 Thread Amit Kapila
On Sat, May 28, 2016 at 5:06 AM, Tom Lane  wrote:
>
> Alvaro Herrera  writes:
> > Regarding this:
> >> + * XXX this certainly looks useless, why not just wait indefinitely?
>
> > There's another select_loop() in vacuumdb.c suggesting that the timeout
> > is used to check for cancel requests; as I understood while working on
> > the vacuumdb changes, select() is not interrupted in that case.  I
> > suppose that means it's necessary here also.  But on the other hand it's
> > quite possible that the original developer just copied what was in
> > pg_dump and that it's not actually needed; if that's the case, perhaps
> > it's better to rip it out from both places.
>
> Ah, interesting.  That ties into something else I was wondering about,
> which is how we could get useful control-C cancellation on Windows.
> It looks like the vacuumdb.c version of this code actually is tied
> into an interrupt handler, but whoever copied it for parallel.c just
> ripped out the CancelRequested checks, making the looping behavior
> pretty useless.
>

It seems to me that the CancelRequested checks were introduced in
vacuumdb.c as part of commit a1792320, while parallel.c's version of
select_loop exists from commit 9e257a18, which was committed earlier.
I think control-C handling for Windows in parallel.c is missing, or
if there is some way to deal with it, it is clearly not the same as
what we do in vacuumdb.c.

> For pg_restore (parallel or not) it would be useful if the program
> didn't just fall over at control-C but actually sent cancel requests
> to the backend(s).  It's not such a problem if we're transferring
> data, but if we're waiting for some slow operation like CREATE INDEX,
> the current behavior isn't very good.  On the Unix side we have some
> SIGINT infrastructure there already, but I don't see any for Windows.
>
> So now I'm thinking we should leave that alone, with the expectation
> that we'll be putting CancelRequested checks back in at some point.
>
> > https://www.postgresql.org/message-id/20150122174601.gb1...@alvh.no-ip.org
>
> Hmm, did the patch you're discussing there get committed?
>

Yes, it was committed - a1792320



With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-05-27 Thread Tom Lane
Alvaro Herrera  writes:
> Regarding this:
>> + * XXX this certainly looks useless, why not just wait 
>> indefinitely?

> There's another select_loop() in vacuumdb.c suggesting that the timeout
> is used to check for cancel requests; as I understood while working on
> the vacuumdb changes, select() is not interrupted in that case.  I
> suppose that means it's necessary here also.  But on the other hand it's
> quite possible that the original developer just copied what was in
> pg_dump and that it's not actually needed; if that's the case, perhaps
> it's better to rip it out from both places.

Ah, interesting.  That ties into something else I was wondering about,
which is how we could get useful control-C cancellation on Windows.
It looks like the vacuumdb.c version of this code actually is tied
into an interrupt handler, but whoever copied it for parallel.c just
ripped out the CancelRequested checks, making the looping behavior
pretty useless.

For pg_restore (parallel or not) it would be useful if the program
didn't just fall over at control-C but actually sent cancel requests
to the backend(s).  It's not such a problem if we're transferring
data, but if we're waiting for some slow operation like CREATE INDEX,
the current behavior isn't very good.  On the Unix side we have some
SIGINT infrastructure there already, but I don't see any for Windows.

So now I'm thinking we should leave that alone, with the expectation
that we'll be putting CancelRequested checks back in at some point.
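
For reference, the vacuumdb.c-style loop being described looks roughly
like this (a sketch assuming a CancelRequested flag set by the
control-C handler; not the actual parallel.c code):

#include <errno.h>
#include <stdbool.h>
#include <sys/select.h>

extern volatile bool CancelRequested;	/* set by the cancel handler */

/*
 * Sketch: the quarter-second timeout exists only so that we can poll
 * CancelRequested; without such a check the timeout is useless and we
 * might as well block in select() indefinitely.
 */
static int
select_loop(int maxFd, fd_set *workerset)
{
	int			i;
	fd_set		saveSet = *workerset;

	for (;;)
	{
		struct timeval tv = {0, 250000};

		*workerset = saveSet;
		i = select(maxFd + 1, workerset, NULL, NULL, &tv);
		if (i < 0 && errno == EINTR)
			continue;			/* interrupted by a signal; retry */
		if (i == 0 && !CancelRequested)
			continue;			/* timeout expired; poll the flag again */
		break;					/* data ready, error, or cancel requested */
	}
	return i;
}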

> https://www.postgresql.org/message-id/20150122174601.gb1...@alvh.no-ip.org

Hmm, did the patch you're discussing there get committed?

regards, tom lane




Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-05-27 Thread Alvaro Herrera
Regarding this:

> *** select_loop(int maxFd, fd_set *workerset)
> *** 1092,1104 ****
> --- 1150,1160 ----
>   	fd_set		saveSet = *workerset;
>   
>   #ifdef WIN32
>   	for (;;)
>   	{
>   		/*
>   		 * sleep a quarter of a second before checking if we should
>   		 * terminate.
> +		 * XXX this certainly looks useless, why not just wait
> +		 * indefinitely?
>   		 */
>   		struct timeval tv = {0, 250000};
>   

There's another select_loop() in vacuumdb.c suggesting that the timeout
is used to check for cancel requests; as I understood while working on
the vacuumdb changes, select() is not interrupted in that case.  I
suppose that means it's necessary here also.  But on the other hand it's
quite possible that the original developer just copied what was in
pg_dump and that it's not actually needed; if that's the case, perhaps
it's better to rip it out from both places.

https://www.postgresql.org/message-id/20150122174601.gb1...@alvh.no-ip.org

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-05-27 Thread Tom Lane
I wrote:
>> BTW, after having spent the afternoon staring at it, I will assert with
>> confidence that pg_dump/parallel.c is an embarrassment to the project.

> I've done a bit of work on a cosmetic cleanup patch, but don't have
> anything to show yet.

Attached is the threatened cosmetic cleanup of parallel.c.  As I went
through it, I found quite a few things not to like, but in this patch
I've mostly resisted the temptation to fix them immediately, and have
just tried to document what's there more accurately.

Aside from a major amount of comment-rewriting and a very small amount of
cosmetic code adjustment (mostly moving code for more clarity), this
patch changes these things:

* Rename SetupWorker() to RunWorker() to reflect what it actually does,
and remove its unused "worker" argument.

* Rename lockTableNoWait() to lockTableForWorker() for clarity, and move
the test for BLOBS into it instead of having an Assert that the caller
checked that.

* Don't bother with getThreadLocalPQExpBuffer() at all in non-Windows
builds; it was identical to getLocalPQExpBuffer() anyway, except for
being misleadingly named.

* Remove some completely-redundant or otherwise ill-considered Asserts.

* Fix incorrect "Assert(msgsize <= bufsize)" --- should be < bufsize.

* Fix missing socket close in one error exit from pgpipe().  This isn't
too exciting at present since we'll just curl up and die if it fails,
but might as well get it right.

I have some other, less-cosmetic, things I want to do here, but first
does anyone care to review this?

regards, tom lane

diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index c656ba5..1a52fae 100644
*** a/src/bin/pg_dump/parallel.c
--- b/src/bin/pg_dump/parallel.c
***
*** 2,21 
   *
   * parallel.c
   *
!  *	Parallel support for the pg_dump archiver
   *
   * Portions Copyright (c) 1996-2016, PostgreSQL Global Development Group
   * Portions Copyright (c) 1994, Regents of the University of California
   *
-  *	The author is not responsible for loss or damages that may
-  *	result from its use.
-  *
   * IDENTIFICATION
   *		src/bin/pg_dump/parallel.c
   *
   *-
   */
  
  #include "postgres_fe.h"
  
  #include "parallel.h"
--- 2,62 
   *
   * parallel.c
   *
!  *	Parallel support for pg_dump and pg_restore
   *
   * Portions Copyright (c) 1996-2016, PostgreSQL Global Development Group
   * Portions Copyright (c) 1994, Regents of the University of California
   *
   * IDENTIFICATION
   *		src/bin/pg_dump/parallel.c
   *
   *-
   */
  
+ /*
+  * Parallel operation works like this:
+  *
+  * The original, master process calls ParallelBackupStart(), which forks off
+  * the desired number of worker processes, which each enter WaitForCommands().
+  *
+  * The master process dispatches an individual work item to one of the worker
+  * processes in DispatchJobForTocEntry().  That calls
+  * AH->MasterStartParallelItemPtr, a routine of the output format.  This
+  * function's arguments are the parents archive handle AH (containing the full
+  * catalog information), the TocEntry that the worker should work on and a
+  * T_Action value indicating whether this is a backup or a restore task.  The
+  * function simply converts the TocEntry assignment into a command string that
+  * is then sent over to the worker process. In the simplest case that would be
+  * something like "DUMP 1234", with 1234 being the TocEntry id.
+  *
+  * The worker process receives and decodes the command and passes it to the
+  * routine pointed to by AH->WorkerJobDumpPtr or AH->WorkerJobRestorePtr,
+  * which are routines of the current archive format.  That routine performs
+  * the required action (dump or restore) and returns a malloc'd status string.
+  * The status string is passed back to the master where it is interpreted by
+  * AH->MasterEndParallelItemPtr, another format-specific routine.  That
+  * function can update state or catalog information on the master's side,
+  * depending on the reply from the worker process.  In the end it returns a
+  * status code, which is 0 for successful execution.
+  *
+  * Remember that we have forked off the workers only after we have read in
+  * the catalog.  That's why our worker processes can also access the catalog
+  * information.  (In the Windows case, the workers are threads in the same
+  * process.  To avoid problems, they work with cloned copies of the Archive
+  * data structure; see init_spawned_worker_win32().)
+  *
+  * In the master process, the workerStatus field for each worker has one of
+  * the following values:
+  *		WRKR_IDLE: it's waiting for a command
+  *		WRKR_WORKING: it's been sent a command
+  *		WRKR_FINISHED: it's returned a result
+  *		WRKR_TERMINATED: process ended
+  * The FINISHED state indicates that the worker is idle, but 

Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-05-27 Thread Tom Lane
Kyotaro HORIGUCHI  writes:
> By the way, the reason for the "invalid snapshot identifier" is
> that some worker threads try to use it after the first worker's
> connection closed.

... BTW, I don't quite see what the issue is there.  The snapshot is
exported from the master session, so errors in worker sessions should not
cause such failures in other workers.  And I don't see any such failure
when setting up a scenario that will cause a worker to fail on Linux.
The "invalid snapshot identifier" bleats would make sense if you had
gotten a server-side error (and transaction abort) in the master session,
but I don't see any evidence that that happened in that example.  Might be
worth seeing if that's reproducible.

regards, tom lane




Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-05-27 Thread Kyotaro HORIGUCHI
At Thu, 26 May 2016 10:53:47 -0400, Tom Lane  wrote in 
<5237.1464274...@sss.pgh.pa.us>
> Kyotaro HORIGUCHI  writes:
> >> Next, I got the following behavior for the following command,
> >> then freeze. Maybe stopping at the same point with the next
> >> paragraph but I'm not sure. The same thing occurs this patch on
> >> top of the current master but doesn't on Linux.
> 
> > [ need to close command sockets in ShutdownWorkersHard ]
> 
> Hah.  I had made that same change in the cosmetic-cleanups patch I was
> working on ... but I assumed it wasn't a live bug or we'd have heard
> about it.
> 
> Pushed along with the comment improvements I'd made to that function.

Thank you!

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center






Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-05-27 Thread Kyotaro HORIGUCHI
At Thu, 26 May 2016 12:15:29 -0400, Tom Lane  wrote in 
<8273.1464279...@sss.pgh.pa.us>
> Kyotaro HORIGUCHI  writes:
> > At Wed, 25 May 2016 10:11:28 -0400, Tom Lane  wrote in 
> > <24577.1464185...@sss.pgh.pa.us>
> >> The only case
> >> that is certain to work is switches before non-switch arguments, and so
> >> we should not give any documentation examples in the other order.  On a
> >> platform using GNU getopt(), getopt() will rearrange the argv[] array to
> >> make such cases work, but non-GNU getopt() doesn't do that (and I don't
> >> really trust the GNU version not to get confused, either).
> 
> > Yeah, I knew it after reading port/getopt_long.c. But the error
> > message seems to say nothing about that (at least to me), and
> > those accustomed to GNU getopt's behavior will fail to get
> > the point of the message. The documentation also doesn't
> > explicitly say anything about the restriction; it is only implicit
> > in the synopsis. How about something like the following
> > message? (The patch attached looks too dependent on the specific
> > behavior of our getopt_long.c, but doing it in getopt_long.c also
> > seems wrong..)
> 
> It's not a bad idea to try to improve our response to this situation,
> but I think this patch needs more work.  I wonder in particular why
> you're not basing the variant error message on the first unprocessed
> argument starting with '-', rather than looking at the word before it.

Sorry, that was just carelessness on my part. It is correct to do
what you wrote above, and that carelessness was also the cause of
my obscure error message.

> Another thought is that the message is fairly unhelpful because it does
> not show where in the arguments things went wrong.  Maybe something
> more like
> 
> 	if (argv[optind][0] == '-')
> 		fprintf(stderr, _("%s: misplaced option \"%s\": options must appear before non-option arguments\n"),
> 				progname, argv[optind]);
> 	else
> 		// existing message
> 
> In any case, if we wanted to do something about this scenario, we should
> do it consistently across all our programs, not just pg_dump.  I count
> 25 appearances of that "too many command-line arguments" message...

Although I suppose no one has ever complained about that before,
the same thing will occur in other new programs even if the
existing programs are fixed now..

I'll think about it some more..

Thank you for the suggestion.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center






Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-05-26 Thread Tom Lane
Kyotaro HORIGUCHI  writes:
> At Wed, 25 May 2016 10:11:28 -0400, Tom Lane  wrote in 
> <24577.1464185...@sss.pgh.pa.us>
>> The only case
>> that is certain to work is switches before non-switch arguments, and so
>> we should not give any documentation examples in the other order.  On a
>> platform using GNU getopt(), getopt() will rearrange the argv[] array to
>> make such cases work, but non-GNU getopt() doesn't do that (and I don't
>> really trust the GNU version not to get confused, either).

> Yeah, I knew it after reading port/getopt_long.c. But the error
> message seems to say nothing about that (at least to me), and
> those accustomed to GNU getopt's behavior will fail to get
> the point of the message. The documentation also doesn't
> explicitly say anything about the restriction; it is only implicit
> in the synopsis. How about something like the following
> message? (The patch attached looks too dependent on the specific
> behavior of our getopt_long.c, but doing it in getopt_long.c also
> seems wrong..)

It's not a bad idea to try to improve our response to this situation,
but I think this patch needs more work.  I wonder in particular why
you're not basing the variant error message on the first unprocessed
argument starting with '-', rather than looking at the word before it.
Another thought is that the message is fairly unhelpful because it does
not show where in the arguments things went wrong.  Maybe something
more like

	if (argv[optind][0] == '-')
		fprintf(stderr, _("%s: misplaced option \"%s\": options must appear before non-option arguments\n"),
				progname, argv[optind]);
	else
		// existing message

In any case, if we wanted to do something about this scenario, we should
do it consistently across all our programs, not just pg_dump.  I count
25 appearances of that "too many command-line arguments" message...

regards, tom lane




Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-05-26 Thread Tom Lane
Kyotaro HORIGUCHI  writes:
>> Next, I got the following behavior for the following command, and
>> then a freeze. It may be stopping at the same point as in the next
>> paragraph, but I'm not sure. The same thing occurs with this patch
>> on top of the current master, but not on Linux.

> [ need to close command sockets in ShutdownWorkersHard ]

Hah.  I had made that same change in the cosmetic-cleanups patch I was
working on ... but I assumed it wasn't a live bug or we'd have heard
about it.

Pushed along with the comment improvements I'd made to that function.

regards, tom lane




Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-05-26 Thread Kyotaro HORIGUCHI
> Sounds reasonable. I'll look into this further.

I looked into that and found one problem in the patch.

> Next, I got the following behavior for the following command, and
> then a freeze. It may be stopping at the same point as in the next
> paragraph, but I'm not sure. The same thing occurs with this patch
> on top of the current master, but not on Linux.

This occurs in the following steps.

1. One of the workers dies for some reason.
   (InitCompressorZlib immediately goes into exit_horribly in this case)

2. The main thread detects in ListenToWorkers that the worker is dead.

3. ListenToWorkers calls exit_horribly, then exit_nicely.

4. exit_nicely calls archive_close_connection as a callback, which in
   turn calls ShutdownWorkersHard.

5. ShutdownWorkersHard should close the write side of the pipe,
   but the patch skips that for WIN32.

So the attached patch, on top of the previous one, fixes that; that is,
pg_dump returns to the command prompt even in this case.

By the way, the reason for the "invalid snapshot identifier" is
that some worker threads try to use it after the first worker's
connection closed. Some of the workers had succeeded before the
connection closed and remained listening to their master, which
inhibited termination of the entire process.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index f650d3f..6c08426 100644
--- a/src/bin/pg_dump/parallel.c
+++ b/src/bin/pg_dump/parallel.c
@@ -308,7 +308,6 @@ checkAborting(ArchiveHandle *AH)
 static void
 ShutdownWorkersHard(ParallelState *pstate)
 {
-#ifndef WIN32
 	int			i;
 
 	/*
@@ -318,6 +317,7 @@ ShutdownWorkersHard(ParallelState *pstate)
 	for (i = 0; i < pstate->numWorkers; i++)
 		closesocket(pstate->parallelSlot[i].pipeWrite);
 
+#ifndef WIN32
 	for (i = 0; i < pstate->numWorkers; i++)
 		kill(pstate->parallelSlot[i].pid, SIGTERM);
 #else



Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-05-25 Thread Kyotaro HORIGUCHI
At Wed, 25 May 2016 10:11:28 -0400, Tom Lane  wrote in 
<24577.1464185...@sss.pgh.pa.us>
> Kyotaro HORIGUCHI  writes:
> > I tried it on Windows 7/64 but first of all, I'm surprised to see
> > that the following command line gets an error but it would be
> > fine because it is consistent with what is written in the manual.
> 
> > | >pg_dump postgres://horiguti:hoge@localhost/postgres --jobs=9 -Fd -f 
> > testdump
> > | pg_dump: too many command-line arguments (first is "--jobs=9")
> > | Try "pg_dump --help" for more information.
> 
> Where do you see an example like that?  We should fix it.

Sorry for the confusing sentence. The command line came from your
first mail starting this thread :p And "consistent with the manual"
means that the command line results in an error (even if only on
Windows) since it differs from the documentation. No such example
appears in the documentation AFAICS.

https://www.postgresql.org/message-id/2458.1450894...@sss.pgh.pa.us
https://www.postgresql.org/docs/9.6/static/app-pgdump.html

>  The only case
> that is certain to work is switches before non-switch arguments, and so
> we should not give any documentation examples in the other order.  On a
> platform using GNU getopt(), getopt() will rearrange the argv[] array to
> make such cases work, but non-GNU getopt() doesn't do that (and I don't
> really trust the GNU version not to get confused, either).

Yeah, I knew it after reading port/getopt_long.c. But the error
message seems to say nothing about that (at least to me), and
those accustomed to GNU getopt's behavior will fail to get
the point of the message. The documentation also doesn't
explicitly say anything about the restriction; it is only implicit
in the synopsis. How about something like the following
message? (The patch attached looks too dependent on the specific
behavior of our getopt_long.c, but doing it in getopt_long.c also
seems wrong..)

| >pg_dump hoge -f
| pg_dump: non-option arguments should not precede options.


> > I might be wrong about something, but pg_dump seems to have some
> > issues in thread handling.
> 
> Wouldn't surprise me ... there's a lot of code in there depending on
> static variables, and we've only tried to thread-proof a few.  Still,
> I think that's a separate issue from what this patch is addressing.

Sounds reasonable. I'll look into this further.
I see no other functional problems in this patch.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 1267afb..52e9094 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -546,9 +546,13 @@ main(int argc, char **argv)
 	/* Complain if any arguments remain */
 	if (optind < argc)
 	{
-		fprintf(stderr, _("%s: too many command-line arguments (first is \"%s\")\n"),
-progname, argv[optind]);
-		fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
+		if (optind > 0 && argv[optind - 1][0] != '-')
+			fprintf(stderr, _("%s: non-option arguments should not precede options.\n"),
+	progname);
+		else
+			fprintf(stderr, _("%s: too many command-line arguments (first is \"%s\")\n"),
+	progname, argv[optind]);
+		fprintf(stderr, _("Try \"%s --help\" for more information.\n"),
 progname);
 		exit_nicely(1);
 	}



Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-05-25 Thread Tom Lane
Kyotaro HORIGUCHI  writes:
> I tried it on Windows 7/64 but first of all, I'm surprised to see
> that the following command line gets an error but it would be
> fine because it is consistent with what is written in the manual.

> | >pg_dump postgres://horiguti:hoge@localhost/postgres --jobs=9 -Fd -f 
> testdump
> | pg_dump: too many command-line arguments (first is "--jobs=9")
> | Try "pg_dump --help" for more information.

Where do you see an example like that?  We should fix it.  The only case
that is certain to work is switches before non-switch arguments, and so
we should not give any documentation examples in the other order.  On a
platform using GNU getopt(), getopt() will rearrange the argv[] array to
make such cases work, but non-GNU getopt() doesn't do that (and I don't
really trust the GNU version not to get confused, either).

> I might be wrong about something, but pg_dump seems to have some
> issues in thread handling.

Wouldn't surprise me ... there's a lot of code in there depending on
static variables, and we've only tried to thread-proof a few.  Still,
I think that's a separate issue from what this patch is addressing.

regards, tom lane




Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-05-25 Thread Kyotaro HORIGUCHI
At Wed, 25 May 2016 00:15:57 -0400, Tom Lane  wrote in 
<3149.1464149...@sss.pgh.pa.us>
> Kyotaro HORIGUCHI  writes:
> > Shouldn't archive_close_connection have an assertion (si->AHX !=
> > NULL) instead of checking "if (si->AHX)" in each path?
> 
> I thought about that but didn't see any really good argument for it.
> It'd be making that function dependent on the current behavior of
> unrelated code, when it doesn't need to be.

It's also fine with me.

I tried it on Windows 7/64 but first of all, I'm surprised to see
that the following command line gets an error but it would be
fine because it is consistent with what is written in the manual.

| >pg_dump postgres://horiguti:hoge@localhost/postgres --jobs=9 -Fd -f testdump
| pg_dump: too many command-line arguments (first is "--jobs=9")
| Try "pg_dump --help" for more information.


Next, I got the following behavior for the following command, and
then a freeze. It may be stopping at the same point as in the next
paragraph, but I'm not sure. The same thing occurs with this patch
on top of the current master, but not on Linux.

| >pg_dump -d postgres --jobs=9 -Fd -f testdump
| Password: 
| pg_dump: [archiver] WARNING: requested compression not available in this 
installation -- archive will be uncompressed
| pg_dump: [compress_io] not built with zlib support
| pg_dump: [parallel archiver] a worker process died unexpectedly
| pg_dump: [archiver (db)] query failed: ERROR:  invalid snapshot identifier: 
"2299-1"
| pg_dump: [archiver (db)] query was: SET TRANSACTION SNAPSHOT '2299-1'
| pg_dump: [archiver (db)] query failed: ERROR:  invalid snapshot identifier: 
"2299-1"
| pg_dump: [archiver (db)] query was: SET TRANSACTION SNAPSHOT '2299-1'
| pg_dump: [archiver (db)] query failed: ERROR:  invalid snapshot identifier: 
"2299-1"
| pg_dump: [archiver (db)] query was: SET TRANSACTION SNAPSHOT '2299-1'
| pg_dump: [archiver (db)] query failed: ERROR:  invalid snapshot identifier: 
"2299-1"
| pg_dump: [archiver (db)] query was: SET TRANSACTION SNAPSHOT '2299-1'
| pg_dump: [archiver (db)] query failed: ERROR:  invalid snapshot identifier: 
"2299-1"
| pg_dump: [archiver (db)] query was: SET TRANSACTION SNAPSHOT '2299-1'
| pg_dump: [archiver (db)] query failed: ERROR:  invalid snapshot identifier: 
"2299-1"
| pg_dump: [archiver (db)] query was: SET TRANSACTION SNAPSHOT '2299-1'


Third, I'm not sure about the details, but pg_dump shows the
following message and then freezes until Ctrl-C. I think I had
forgotten to set a password for the user at the time. It doesn't
seem to occur with this patch on top of the current master.

| >pg_dump --jobs=9 -Fd -f testdump 
"postgres://horiguti:hoge@localhost/postgres"
| pg_dump: [archiver] WARNING: requested compression not available in this 
installation -- archive will be uncompressed
| pg_dump: [compress_io] not built with zlib support
| pg_dump: [parallel archiver] a worker process died unexpectedly
| ^C
| >

The main thread was stopped at WaitForMultipleObjects() around
parallel.c:361 (@master + this patch), but I haven't investigated it
any further.


Finally, I got the following expected result, which seems sane.

| >pg_dump --jobs=9 -Fd -f testdump 
"postgres://horiguti:hoge@localhost/postgres"
| pg_dump: [archiver] WARNING: requested compression not available in this 
installation -- archive will be uncompressed
| pg_dump: [archiver (db)] connection to database "postgres" failed: 
fe_sendauth:no password supplied
| pg_dump: [parallel archiver] a worker process died unexpectedly
| pg_dump: [archiver (db)] connection to database "postgres" failed: 
fe_sendauth:no password supplied
| pg_dump: [archiver (db)] connection to database "postgres" failed: 
fe_sendauth:no password supplied
| pg_dump: [archiver (db)] connection to database "postgres" failed: 
fe_sendauth:no password supplied
| pg_dump: [archiver (db)] connection to database "postgres" failed: 
fe_sendauth:no password supplied
| pg_dump: [archiver (db)] connection to database "postgres" failed: 
fe_sendauth:no password supplied
| pg_dump: [archiver (db)] connection to database "postgres" failed: 
fe_sendauth:no password supplied
| pg_dump: [archiver (db)] connection to database "postgres" failed: 
fe_sendauth:no password supplied
| pg_dump: [archiver (db)] connection to database "postgres" failed: 
fe_sendauth:no password supplied


I might be wrong about something, but pg_dump seems to have some
issues in its thread handling.

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center






Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-05-24 Thread Tom Lane
Kyotaro HORIGUCHI  writes:
> Shouldn't archive_close_connection have an assertion (si->AHX !=
> NULL) instead of checking "if (si->AHX)" in each path?

I thought about that but didn't see any really good argument for it.
It'd be making that function dependent on the current behavior of
unrelated code, when it doesn't need to be.

regards, tom lane




Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-05-24 Thread Kyotaro HORIGUCHI
Hello,

# Note, for the convenience of others: the commit that fixed the
# original issue is 1aa41e3eae3746e05d0e23286ac740a9a6cee7df.

At Mon, 23 May 2016 13:40:30 -0400, Tom Lane  wrote in 
<1336.1464025...@sss.pgh.pa.us>
> I wrote:
> >> ... the pg_dump process has crashed with a SIGPIPE without printing
> >> any message whatsoever, and without coming anywhere near finishing the
> >> dump.
> 
> > Attached is a proposed patch for this.  It reverts exit_horribly() to
> > what it used to be pre-9.3, ie just print the message on stderr and die.
> > The master now notices child failure by observing EOF on the status pipe.
> > While that works automatically on Unix, we have to explicitly close the
> > child side of the pipe on Windows (could someone check that that works?).
> > I also fixed the parent side to ignore SIGPIPE earlier, so that it won't
> > just die if it was in the midst of sending to the child.
> 
> Now that we're all back from PGCon ... does anyone wish to review this?
> Or have an objection to treating it as a bug fix and patching all branches
> back to 9.3?

FWIW, it seems to me that this is a bug which should be fixed in the
back branches.

I tried this only on Linux (I'll try it on Windows afterward) and
got something like this.

pg_dump: [archiver (db)] pg_dump: [archiver (db)] connection to database 
"postgres" failed: fe_sendauth: no password supplied
connection to database "postgres" failed: fe_sendauth: no password supplied
pg_dump: [archiver (db)] connection to database "postgres" failed: fe_sendauth: 
no password supplied

pg_dump: [parallel archiver] a worker process died unexpectedly
pg_dump: [archiver (db)] connection to database "postgres" failed: fe_sendauth: 
no password supplied



Although the error messages from multiple children come out cluttered
together, that is probably inevitable, and it is better than having
them vanish.

It seems hard to distinguish the meanings of the module names enclosed
in '[]', but that is another issue.


In archive_close_connection, the following change means that si->AHX
could be NULL there, just as the existing non-parallel code already
assumes. But it seems to be set once, to a valid value, for the
lifetime of si (= shutdown_info), before execution ever reaches that
point.

-   DisconnectDatabase(si->AHX);
+   if (si->AHX)
+   DisconnectDatabase(si->AHX);

Shouldn't archive_close_connection have an assertion (si->AHX !=
NULL) instead of checking "if (si->AHX)" in each path?
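
Spelled out, the choice is between (just a sketch, not a proposed
patch):

	if (si->AHX)				/* committed form: tolerate NULL */
		DisconnectDatabase(si->AHX);

and

	Assert(si->AHX != NULL);	/* assert the invariant instead */
	DisconnectDatabase(si->AHX);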

> > BTW, after having spent the afternoon staring at it, I will assert with
> > confidence that pg_dump/parallel.c is an embarrassment to the project.
> 
> I've done a bit of work on a cosmetic cleanup patch, but don't have
> anything to show yet.
> 
>   regards, tom lane

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center






Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-05-23 Thread Tom Lane
I wrote:
>> ... the pg_dump process has crashed with a SIGPIPE without printing
>> any message whatsoever, and without coming anywhere near finishing the
>> dump.

> Attached is a proposed patch for this.  It reverts exit_horribly() to
> what it used to be pre-9.3, ie just print the message on stderr and die.
> The master now notices child failure by observing EOF on the status pipe.
> While that works automatically on Unix, we have to explicitly close the
> child side of the pipe on Windows (could someone check that that works?).
> I also fixed the parent side to ignore SIGPIPE earlier, so that it won't
> just die if it was in the midst of sending to the child.

Now that we're all back from PGCon ... does anyone wish to review this?
Or have an objection to treating it as a bug fix and patching all branches
back to 9.3?

> BTW, after having spent the afternoon staring at it, I will assert with
> confidence that pg_dump/parallel.c is an embarrassment to the project.

I've done a bit of work on a cosmetic cleanup patch, but don't have
anything to show yet.

regards, tom lane




Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2016-05-14 Thread Tom Lane
[ getting back to a complaint I made in December ]

I wrote:
> ... the pg_dump process has crashed with a SIGPIPE without printing
> any message whatsoever, and without coming anywhere near finishing the
> dump.

> A bit of investigation says that this is because somebody had the bright
> idea that worker processes could report fatal errors back to the master
> process instead of just printing them to stderr.  So when the workers
> fail to establish connections (because of the password problem cited in
> #13727), they don't tell me about that.  Oh no, they send those errors
> back up to the pipe to the parent, and then die silently.  Meanwhile,
> the parent is trying to send them commands, and since it doesn't protect
> itself against SIGPIPE on the command pipes, it crashes without ever
> reporting anything.  If you aren't paying close attention, you wouldn't
> even realize you didn't get a completed dump.

Attached is a proposed patch for this.  It reverts exit_horribly() to
what it used to be pre-9.3, ie just print the message on stderr and die.
The master now notices child failure by observing EOF on the status pipe.
While that works automatically on Unix, we have to explicitly close the
child side of the pipe on Windows (could someone check that that works?).
I also fixed the parent side to ignore SIGPIPE earlier, so that it won't
just die if it was in the midst of sending to the child.
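
To make the mechanism concrete, the two pieces look roughly like this
(a sketch with hypothetical names, not the patch's actual code):

	#include <signal.h>
	#include <stdio.h>
	#include <unistd.h>

	static void
	monitor_worker_status(int status_fd)
	{
		char		buf[256];
		ssize_t		n;

		/*
		 * Ignore SIGPIPE so that writing a command to a worker that
		 * has already exited fails with EPIPE instead of killing the
		 * master before it can report anything.
		 */
		signal(SIGPIPE, SIG_IGN);

		/*
		 * read() returning zero means the worker closed its end of
		 * the status pipe, i.e. it exited -- cleanly or otherwise.
		 */
		n = read(status_fd, buf, sizeof(buf));
		if (n == 0)
			fprintf(stderr, "a worker process died unexpectedly\n");
	}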

BTW, after having spent the afternoon staring at it, I will assert with
confidence that pg_dump/parallel.c is an embarrassment to the project.
It is chock-full of misleading, out-of-date, and/or outright wrong
comments, useless or even wrong Asserts, ill-designed APIs, code that
works quite differently in Unix and Windows cases without a shred of
commentary about the fact, etc etc.  I have mostly resisted the temptation
to do cosmetic cleanup in the attached, but this code really needs some
editorial attention.

regards, tom lane

diff --git a/src/bin/pg_dump/parallel.c b/src/bin/pg_dump/parallel.c
index 9167294..f650d3f 100644
*** a/src/bin/pg_dump/parallel.c
--- b/src/bin/pg_dump/parallel.c
*** static ShutdownInformation shutdown_info
*** 77,84 
  static const char *modulename = gettext_noop("parallel archiver");
  
  static ParallelSlot *GetMyPSlot(ParallelState *pstate);
- static void parallel_msg_master(ParallelSlot *slot, const char *modulename,
- 	const char *fmt, va_list ap) pg_attribute_printf(3, 0);
  static void archive_close_connection(int code, void *arg);
  static void ShutdownWorkersHard(ParallelState *pstate);
  static void WaitForTerminatingWorkers(ParallelState *pstate);
--- 77,82 
*** GetMyPSlot(ParallelState *pstate)
*** 163,227 
  }
  
  /*
-  * Fail and die, with a message to stderr.  Parameters as for write_msg.
-  *
-  * This is defined in parallel.c, because in parallel mode, things are more
-  * complicated. If the worker process does exit_horribly(), we forward its
-  * last words to the master process. The master process then does
-  * exit_horribly() with this error message itself and prints it normally.
-  * After printing the message, exit_horribly() on the master will shut down
-  * the remaining worker processes.
-  */
- void
- exit_horribly(const char *modulename, const char *fmt,...)
- {
- 	va_list		ap;
- 	ParallelState *pstate = shutdown_info.pstate;
- 	ParallelSlot *slot;
- 
- 	va_start(ap, fmt);
- 
- 	if (pstate == NULL)
- 	{
- 		/* Not in parallel mode, just write to stderr */
- 		vwrite_msg(modulename, fmt, ap);
- 	}
- 	else
- 	{
- 		slot = GetMyPSlot(pstate);
- 
- 		if (!slot)
- 			/* We're the parent, just write the message out */
- 			vwrite_msg(modulename, fmt, ap);
- 		else
- 			/* If we're a worker process, send the msg to the master process */
- 			parallel_msg_master(slot, modulename, fmt, ap);
- 	}
- 
- 	va_end(ap);
- 
- 	exit_nicely(1);
- }
- 
- /* Sends the error message from the worker to the master process */
- static void
- parallel_msg_master(ParallelSlot *slot, const char *modulename,
- 	const char *fmt, va_list ap)
- {
- 	char		buf[512];
- 	int			pipefd[2];
- 
- 	pipefd[PIPE_READ] = slot->pipeRevRead;
- 	pipefd[PIPE_WRITE] = slot->pipeRevWrite;
- 
- 	strcpy(buf, "ERROR ");
- 	vsnprintf(buf + strlen("ERROR "),
- 			  sizeof(buf) - strlen("ERROR "), fmt, ap);
- 
- 	sendMessageToMaster(pipefd, buf);
- }
- 
- /*
   * A thread-local version of getLocalPQExpBuffer().
   *
   * Non-reentrant but reduces memory leakage. (On Windows the memory leakage
--- 161,166 
*** getThreadLocalPQExpBuffer(void)
*** 271,277 
  
  /*
   * pg_dump and pg_restore register the Archive pointer for the exit handler
!  * (called from exit_horribly). This function mainly exists so that we can
   * keep shutdown_info in file scope only.
   */
  void
--- 210,216 
  
  /*
   * pg_dump and pg_restore register the Archive pointer for the exit handler
!  * (called from exit_nicely). This function mainly exists so that we can

Re: [HACKERS] Parallel pg_dump's error reporting doesn't work worth squat

2015-12-23 Thread Joshua D. Drake

On 12/23/2015 10:16 AM, Tom Lane wrote:

> Depending on timing, this scheme might accidentally fail to fail, but it
> seems fragile as can be.  I would bet that it's prone to deadlocks, quite
> aside from the SIGPIPE problem.  Considering how amazingly ugly the
> underlying code is (exit_horribly is in parallel.c now? Really?), I want
> to rip it out entirely, not try to band-aid it by suppressing SIGPIPE ---
> though likely we need to do that too.
>
> Thoughts?


This is something that should work, period. It should be a showcase for 
the code we ship, because it shows how seriously we take data integrity 
(backups/restore etc...).


+1 for turning it into something irrefutably as close to perfect as 
humans can produce.


Sincerely,

JD


--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing "I'm offended" is basically telling the world you can't
control your own emotions, so everyone else should do it for you.

