Re: [HACKERS] Deadlock with pg_dump?

2007-03-03 Thread Bruce Momjian

I have applied a modified version of this patch, attached.   I trimmed
down the description of log_lock_waits to be more concise, and moved the
idea of using this to tune deadlock_timeout to the deadlock_timeout
section of the manual.

---

Simon Riggs wrote:
 On Mon, 2007-02-19 at 19:38 +, Simon Riggs wrote:
  On Tue, 2007-02-13 at 22:19 -0500, Bruce Momjian wrote:
   Simon Riggs wrote:
On Thu, 2006-10-26 at 18:45 -0400, Tom Lane wrote:
 Chris Campbell [EMAIL PROTECTED] writes:
  Is there additional logging information I can turn on to get more  
  details? I guess I need to see exactly what locks both processes  
  hold, and what queries they were running when the deadlock 
  occurred?  
  Is that easily done, without turning on logging for *all* 
  statements?
 
 log_min_error_statement = error would at least get you the statements
 reporting the deadlocks, though not what they're conflicting against.

Yeh, we need a much better locking logger for performance analysis.

We really need to dump the whole wait-for graph for deadlocks, since
this might be more complex than just two statements involved. Deadlocks
ought to be so infrequent that we can afford the log space to do this -
plus if we did this it would likely lead to fewer deadlocks.

For 8.3 I'd like to have a log_min_duration_lockwait (secs) parameter
that would allow you to dump the wait-for graph for any data-level locks
that wait too long, rather than just those that deadlock. Many
applications experience heavy locking because of lack of holistic
design. That will also show up the need for other utilities to act
CONCURRENTLY, if possible.
   
   Old email, but I don't see how our current output is not good enough?
   
 test= lock a;
 ERROR:  deadlock detected
 DETAIL:  Process 6855 waits for AccessExclusiveLock on relation 16394 of
 database 16384; blocked by process 6795.
 Process 6795 waits for AccessExclusiveLock on relation 16396 of database
 16384; blocked by process 6855.
  
  This detects deadlocks, but it doesn't detect lock waits. 
  
  When I wrote that it was previous experience driving me. Recent client
  experience has highlighted the clear need for this. We had a lock wait
  of 50 hours because of an RI check; thats the kind of thing I'd like to
  show up in the logs somewhere.
  
  Lock wait detection can be used to show up synchronisation points that
  have been inadvertently designed into an application, so its a useful
  tool in investigating performance issues.
  
  I have a patch implementing the logging as agreed with Tom, will post to
  patches later tonight.
 
 Patch for discussion, includes doc entries at top of patch, so its
 fairly clear how it works.
 
 Output is an INFO message, to allow this to trigger
 log_min_error_statement when it generates a message, to allow us to see
 the SQL statement that is waiting. This allows it to generate a message
 prior to the statement completing, which is important because it may not
 ever complete, in some cases, so simply logging a list of pids won't
 always tell you what the SQL was that was waiting.
 
 Other approaches are possible...
 
 Comments?
 
 -- 
   Simon Riggs 
   EnterpriseDB   http://www.enterprisedb.com
 

[ Attachment, skipping... ]

-- 
  Bruce Momjian  [EMAIL PROTECTED]  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +
Index: doc/src/sgml/config.sgml
===
RCS file: /cvsroot/pgsql/doc/src/sgml/config.sgml,v
retrieving revision 1.113
diff -c -c -r1.113 config.sgml
*** doc/src/sgml/config.sgml	2 Mar 2007 23:37:22 -	1.113
--- doc/src/sgml/config.sgml	3 Mar 2007 18:41:13 -
***
*** 2946,2951 
--- 2946,2966 
    </listitem>
   </varlistentry>
 
+  <varlistentry id="guc-log-lock-waits" xreflabel="log_lock_waits">
+   <term><varname>log_lock_waits</varname> (<type>boolean</type>)</term>
+   <indexterm>
+    <primary><varname>log_lock_waits</varname> configuration parameter</primary>
+   </indexterm>
+   <listitem>
+    <para>
+     Controls whether a log message is produced when a statement waits
+     longer than <xref linkend="guc-deadlock-timeout"> to acquire a
+     lock.  This is useful in determining if lock waits are causing
+     poor performance.  The default is <literal>off</>.
+    </para>
+   </listitem>
+  </varlistentry>
+ 
   <varlistentry id="guc-log-temp-files" xreflabel="log_temp_files">
    <term><varname>log_temp_files</varname> (<type>integer</type>)</term>
    <indexterm>
***
*** 3980,3996 
  This is the amount of time, in milliseconds, to wait on a lock
  before checking to see if there is a deadlock 

Re: [HACKERS] Deadlock with pg_dump?

2007-03-02 Thread Bruce Momjian

I will rework this before application to use LOG level.

Your patch has been added to the PostgreSQL unapplied patches list at:

http://momjian.postgresql.org/cgi-bin/pgpatches

It will be applied as soon as one of the PostgreSQL committers reviews
and approves it.

---


Simon Riggs wrote:
 On Mon, 2007-02-19 at 19:38 +, Simon Riggs wrote:
  On Tue, 2007-02-13 at 22:19 -0500, Bruce Momjian wrote:
   Simon Riggs wrote:
On Thu, 2006-10-26 at 18:45 -0400, Tom Lane wrote:
 Chris Campbell [EMAIL PROTECTED] writes:
  Is there additional logging information I can turn on to get more  
  details? I guess I need to see exactly what locks both processes  
  hold, and what queries they were running when the deadlock 
  occurred?  
  Is that easily done, without turning on logging for *all* 
  statements?
 
 log_min_error_statement = error would at least get you the statements
 reporting the deadlocks, though not what they're conflicting against.

Yeh, we need a much better locking logger for performance analysis.

We really need to dump the whole wait-for graph for deadlocks, since
this might be more complex than just two statements involved. Deadlocks
ought to be so infrequent that we can afford the log space to do this -
plus if we did this it would likely lead to fewer deadlocks.

For 8.3 I'd like to have a log_min_duration_lockwait (secs) parameter
that would allow you to dump the wait-for graph for any data-level locks
that wait too long, rather than just those that deadlock. Many
applications experience heavy locking because of lack of holistic
design. That will also show up the need for other utilities to act
CONCURRENTLY, if possible.
   
   Old email, but I don't see how our current output is not good enough?
   
 test= lock a;
 ERROR:  deadlock detected
 DETAIL:  Process 6855 waits for AccessExclusiveLock on relation 16394 of
 database 16384; blocked by process 6795.
 Process 6795 waits for AccessExclusiveLock on relation 16396 of database
 16384; blocked by process 6855.
  
  This detects deadlocks, but it doesn't detect lock waits. 
  
  When I wrote that it was previous experience driving me. Recent client
  experience has highlighted the clear need for this. We had a lock wait
  of 50 hours because of an RI check; thats the kind of thing I'd like to
  show up in the logs somewhere.
  
  Lock wait detection can be used to show up synchronisation points that
  have been inadvertently designed into an application, so its a useful
  tool in investigating performance issues.
  
  I have a patch implementing the logging as agreed with Tom, will post to
  patches later tonight.
 
 Patch for discussion, includes doc entries at top of patch, so its
 fairly clear how it works.
 
 Output is an INFO message, to allow this to trigger
 log_min_error_statement when it generates a message, to allow us to see
 the SQL statement that is waiting. This allows it to generate a message
 prior to the statement completing, which is important because it may not
 ever complete, in some cases, so simply logging a list of pids won't
 always tell you what the SQL was that was waiting.
 
 Other approaches are possible...
 
 Comments?
 
 -- 
   Simon Riggs 
   EnterpriseDB   http://www.enterprisedb.com
 

[ Attachment, skipping... ]

-- 
  Bruce Momjian  [EMAIL PROTECTED]  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [HACKERS] Deadlock with pg_dump?

2007-02-26 Thread Bruce Momjian

I am a little concerned about a log_* setting that is INFO. I understand
why you used INFO (for log_min_error_messages), but INFO is inconsistent
with the log* prefix, and by default INFO doesn't appear in the log
file.

So, by default, the INFO is going to go to the user terminal, and not to
the logfile.

Ideas?

---

Simon Riggs wrote:
 On Mon, 2007-02-19 at 19:38 +, Simon Riggs wrote:
  On Tue, 2007-02-13 at 22:19 -0500, Bruce Momjian wrote:
   Simon Riggs wrote:
On Thu, 2006-10-26 at 18:45 -0400, Tom Lane wrote:
 Chris Campbell [EMAIL PROTECTED] writes:
  Is there additional logging information I can turn on to get more  
  details? I guess I need to see exactly what locks both processes  
  hold, and what queries they were running when the deadlock 
  occurred?  
  Is that easily done, without turning on logging for *all* 
  statements?
 
 log_min_error_statement = error would at least get you the statements
 reporting the deadlocks, though not what they're conflicting against.

Yeh, we need a much better locking logger for performance analysis.

We really need to dump the whole wait-for graph for deadlocks, since
this might be more complex than just two statements involved. Deadlocks
ought to be so infrequent that we can afford the log space to do this -
plus if we did this it would likely lead to fewer deadlocks.

For 8.3 I'd like to have a log_min_duration_lockwait (secs) parameter
that would allow you to dump the wait-for graph for any data-level locks
that wait too long, rather than just those that deadlock. Many
applications experience heavy locking because of lack of holistic
design. That will also show up the need for other utilities to act
CONCURRENTLY, if possible.
   
   Old email, but I don't see how our current output is not good enough?
   
 test= lock a;
 ERROR:  deadlock detected
 DETAIL:  Process 6855 waits for AccessExclusiveLock on relation 16394 of
 database 16384; blocked by process 6795.
 Process 6795 waits for AccessExclusiveLock on relation 16396 of database
 16384; blocked by process 6855.
  
  This detects deadlocks, but it doesn't detect lock waits. 
  
  When I wrote that it was previous experience driving me. Recent client
  experience has highlighted the clear need for this. We had a lock wait
  of 50 hours because of an RI check; thats the kind of thing I'd like to
  show up in the logs somewhere.
  
  Lock wait detection can be used to show up synchronisation points that
  have been inadvertently designed into an application, so its a useful
  tool in investigating performance issues.
  
  I have a patch implementing the logging as agreed with Tom, will post to
  patches later tonight.
 
 Patch for discussion, includes doc entries at top of patch, so its
 fairly clear how it works.
 
 Output is an INFO message, to allow this to trigger
 log_min_error_statement when it generates a message, to allow us to see
 the SQL statement that is waiting. This allows it to generate a message
 prior to the statement completing, which is important because it may not
 ever complete, in some cases, so simply logging a list of pids won't
 always tell you what the SQL was that was waiting.
 
 Other approaches are possible...
 
 Comments?
 
 -- 
   Simon Riggs 
   EnterpriseDB   http://www.enterprisedb.com
 

[ Attachment, skipping... ]

-- 
  Bruce Momjian  [EMAIL PROTECTED]  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Deadlock with pg_dump?

2007-02-26 Thread Simon Riggs
On Mon, 2007-02-26 at 13:34 -0500, Bruce Momjian wrote:

 I am a little concerned about a log_* setting that is INFO. I understand
 why you used INFO (for log_min_error_messages), but INFO is inconsistent
 with the log* prefix, and by default INFO doesn't appear in the log
 file.

Yeh, LOG would be most appropriate, but that's not possible.

log_min_messages allows only DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1,
INFO, NOTICE and WARNING for non-error states.

Possibly DEBUG1?

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com





Re: [HACKERS] Deadlock with pg_dump?

2007-02-26 Thread Tom Lane
Simon Riggs [EMAIL PROTECTED] writes:
 Yeh, LOG would be most appropriate, but that's not possible.

You have not given any good reason for that.

 log_min_messages allows only DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1,
 INFO, NOTICE and WARNING for non-error states.

I don't think you understand quite how the log message priority works...

regards, tom lane



Re: [HACKERS] Deadlock with pg_dump?

2007-02-26 Thread Simon Riggs
On Mon, 2007-02-26 at 14:11 -0500, Tom Lane wrote:
 Simon Riggs [EMAIL PROTECTED] writes:
  Yeh, LOG would be most appropriate, but that's not possible.
 
 You have not given any good reason for that.

The idea of the patch is that it generates a log message which then
invokes log_min_error_statement so that the SQL statement is displayed.
LOG is not on the list of options there, otherwise I would use it.

The reason for behaving like this is so that a message is generated
while the statement is still waiting, rather than at the end. As I
mentioned in the submission, you may not like that behaviour; I'm in two
minds myself, but I'm trying to get to the stage of having useful
information come out of the server when we have long lock waits.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com





Re: [HACKERS] Deadlock with pg_dump?

2007-02-26 Thread Tom Lane
Simon Riggs [EMAIL PROTECTED] writes:
 The idea of the patch is that it generates a log message which then
 invokes log_min_error_statement so that the SQL statement is displayed.
 LOG is not on the list of options there, otherwise I would use it.

As I said, you don't understand how the logging priority control works.
LOG *is* the appropriate level for stuff intended to go to the server log.

regards, tom lane



Re: [HACKERS] Deadlock with pg_dump?

2007-02-26 Thread Simon Riggs
On Mon, 2007-02-26 at 14:28 -0500, Tom Lane wrote:
 Simon Riggs [EMAIL PROTECTED] writes:
  The idea of the patch is that it generates a log message which then
  invokes log_min_error_statement so that the SQL statement is displayed.
  LOG is not on the list of options there, otherwise I would use it.
 
 As I said, you don't understand how the logging priority control works.
 LOG *is* the appropriate level for stuff intended to go to the server log.

Please look at the definition of log_min_error_statement, so you
understand where I'm coming from.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com





Re: [HACKERS] Deadlock with pg_dump?

2007-02-26 Thread Tom Lane
Simon Riggs [EMAIL PROTECTED] writes:
 On Mon, 2007-02-26 at 14:28 -0500, Tom Lane wrote:
 Simon Riggs [EMAIL PROTECTED] writes:
 The idea of the patch is that it generates a log message which then
 invokes log_min_error_statement so that the SQL statement is displayed.
 LOG is not on the list of options there, otherwise I would use it.
 
 As I said, you don't understand how the logging priority control works.
 LOG *is* the appropriate level for stuff intended to go to the server log.

 Please look at the definition of log_min_error_statement, so you
 understand where I'm coming from.

I *have* read the definition of log_min_error_statement.  (The SGML docs
are wrong btw, as a quick look at the code shows that LOG is an accepted
value.)

The real issue here is that send_message_to_server_log just does

    if (edata->elevel >= log_min_error_statement && debug_query_string != NULL)

to determine whether to log the statement, whereas arguably it should be
using a test like is_log_level_output --- that is, the priority ordering
for log_min_error_statement should be like log_min_messages not like
client_min_messages.  We've discussed that before in another thread, but
it looks like nothing's been done yet.  In any case, if you're unhappy
with the code's choice of whether to emit the STATEMENT part of a log
message, some changes here are what's indicated, not bizarre choices of
elevel for individual messages.
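The two orderings can be sketched as follows (an illustrative model, not the actual backend code; the level lists follow the documented severity orderings for client_min_messages and log_min_messages):

```python
# Sketch: why a LOG-level message should clear log_min_error_statement =
# ERROR if the comparison uses server-log ordering (is_log_level_output
# style) rather than client ordering.  INFO is omitted from the client
# list because it is always sent to the client regardless of setting.
CLIENT_ORDER = ["DEBUG5", "DEBUG4", "DEBUG3", "DEBUG2", "DEBUG1",
                "LOG", "NOTICE", "WARNING", "ERROR", "FATAL", "PANIC"]
SERVER_ORDER = ["DEBUG5", "DEBUG4", "DEBUG3", "DEBUG2", "DEBUG1",
                "INFO", "NOTICE", "WARNING", "ERROR", "LOG", "FATAL", "PANIC"]

def passes(order, elevel, minimum):
    """True if a message at elevel clears the configured minimum level."""
    return order.index(elevel) >= order.index(minimum)

# Client-style comparison: LOG ranks below ERROR, statement not logged.
assert not passes(CLIENT_ORDER, "LOG", "ERROR")
# Server-log comparison: LOG ranks above ERROR, statement is logged.
assert passes(SERVER_ORDER, "LOG", "ERROR")
```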

regards, tom lane



Re: [HACKERS] Deadlock with pg_dump?

2007-02-26 Thread Simon Riggs
On Mon, 2007-02-26 at 14:52 -0500, Tom Lane wrote:
 Simon Riggs [EMAIL PROTECTED] writes:
  On Mon, 2007-02-26 at 14:28 -0500, Tom Lane wrote:
  Simon Riggs [EMAIL PROTECTED] writes:
  The idea of the patch is that it generates a log message which then
  invokes log_min_error_statement so that the SQL statement is displayed.
  LOG is not on the list of options there, otherwise I would use it.
  
  As I said, you don't understand how the logging priority control works.
  LOG *is* the appropriate level for stuff intended to go to the server log.
 
  Please look at the definition of log_min_error_statement, so you
  understand where I'm coming from.
 
 I *have* read the definition of log_min_error_statement.  (The SGML docs
 are wrong btw, as a quick look at the code shows that LOG is an accepted
 value.)

OK, I should have looked past the manual.

 The real issue here is that send_message_to_server_log just does
 
    if (edata->elevel >= log_min_error_statement && debug_query_string != NULL)
 
 to determine whether to log the statement, whereas arguably it should be
 using a test like is_log_level_output --- that is, the priority ordering
 for log_min_error_statement should be like log_min_messages not like
 client_min_messages.  We've discussed that before in another thread, but
 it looks like nothing's been done yet.  

Hopefully not with me? Don't remember that.

 In any case, if you're unhappy
 with the code's choice of whether to emit the STATEMENT part of a log
 message, some changes here are what's indicated, not bizarre choices of
 elevel for individual messages.

Well, I would have chosen LOG if I thought it was available.

I'll do some more to the patch.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com





Re: [HACKERS] Deadlock with pg_dump?

2007-02-19 Thread Simon Riggs
On Tue, 2007-02-13 at 22:19 -0500, Bruce Momjian wrote:
 Simon Riggs wrote:
  On Thu, 2006-10-26 at 18:45 -0400, Tom Lane wrote:
   Chris Campbell [EMAIL PROTECTED] writes:
Is there additional logging information I can turn on to get more  
details? I guess I need to see exactly what locks both processes  
hold, and what queries they were running when the deadlock occurred?  
Is that easily done, without turning on logging for *all* statements?
   
   log_min_error_statement = error would at least get you the statements
   reporting the deadlocks, though not what they're conflicting against.
  
  Yeh, we need a much better locking logger for performance analysis.
  
  We really need to dump the whole wait-for graph for deadlocks, since
  this might be more complex than just two statements involved. Deadlocks
  ought to be so infrequent that we can afford the log space to do this -
  plus if we did this it would likely lead to fewer deadlocks.
  
  For 8.3 I'd like to have a log_min_duration_lockwait (secs) parameter
  that would allow you to dump the wait-for graph for any data-level locks
  that wait too long, rather than just those that deadlock. Many
  applications experience heavy locking because of lack of holistic
  design. That will also show up the need for other utilities to act
  CONCURRENTLY, if possible.
 
 Old email, but I don't see how our current output is not good enough?
 
   test= lock a;
   ERROR:  deadlock detected
   DETAIL:  Process 6855 waits for AccessExclusiveLock on relation 16394 of
   database 16384; blocked by process 6795.
   Process 6795 waits for AccessExclusiveLock on relation 16396 of database
   16384; blocked by process 6855.

This detects deadlocks, but it doesn't detect lock waits. 

When I wrote that it was previous experience driving me. Recent client
experience has highlighted the clear need for this. We had a lock wait
of 50 hours because of an RI check; that's the kind of thing I'd like to
show up in the logs somewhere.

Lock wait detection can be used to show up synchronisation points that
have been inadvertently designed into an application, so it's a useful
tool in investigating performance issues.

I have a patch implementing the logging as agreed with Tom, will post to
patches later tonight.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com





Re: [HACKERS] Deadlock with pg_dump?

2007-02-19 Thread Simon Riggs
On Mon, 2007-02-19 at 19:38 +, Simon Riggs wrote:
 On Tue, 2007-02-13 at 22:19 -0500, Bruce Momjian wrote:
  Simon Riggs wrote:
   On Thu, 2006-10-26 at 18:45 -0400, Tom Lane wrote:
Chris Campbell [EMAIL PROTECTED] writes:
 Is there additional logging information I can turn on to get more  
 details? I guess I need to see exactly what locks both processes  
 hold, and what queries they were running when the deadlock occurred?  
 Is that easily done, without turning on logging for *all* statements?

log_min_error_statement = error would at least get you the statements
reporting the deadlocks, though not what they're conflicting against.
   
   Yeh, we need a much better locking logger for performance analysis.
   
   We really need to dump the whole wait-for graph for deadlocks, since
   this might be more complex than just two statements involved. Deadlocks
   ought to be so infrequent that we can afford the log space to do this -
   plus if we did this it would likely lead to fewer deadlocks.
   
   For 8.3 I'd like to have a log_min_duration_lockwait (secs) parameter
   that would allow you to dump the wait-for graph for any data-level locks
   that wait too long, rather than just those that deadlock. Many
   applications experience heavy locking because of lack of holistic
   design. That will also show up the need for other utilities to act
   CONCURRENTLY, if possible.
  
  Old email, but I don't see how our current output is not good enough?
  
  test= lock a;
  ERROR:  deadlock detected
  DETAIL:  Process 6855 waits for AccessExclusiveLock on relation 16394 of
  database 16384; blocked by process 6795.
  Process 6795 waits for AccessExclusiveLock on relation 16396 of database
  16384; blocked by process 6855.
 
 This detects deadlocks, but it doesn't detect lock waits. 
 
 When I wrote that it was previous experience driving me. Recent client
 experience has highlighted the clear need for this. We had a lock wait
 of 50 hours because of an RI check; thats the kind of thing I'd like to
 show up in the logs somewhere.
 
 Lock wait detection can be used to show up synchronisation points that
 have been inadvertently designed into an application, so its a useful
 tool in investigating performance issues.
 
 I have a patch implementing the logging as agreed with Tom, will post to
 patches later tonight.

Patch for discussion, includes doc entries at top of patch, so it's
fairly clear how it works.

Output is an INFO message, to allow this to trigger
log_min_error_statement when it generates a message, to allow us to see
the SQL statement that is waiting. This allows it to generate a message
prior to the statement completing, which is important because it may not
ever complete, in some cases, so simply logging a list of pids won't
always tell you what the SQL was that was waiting.

Other approaches are possible...

Comments?

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com

Index: doc/src/sgml/config.sgml
===
RCS file: /projects/cvsroot/pgsql/doc/src/sgml/config.sgml,v
retrieving revision 1.108
diff -c -r1.108 config.sgml
*** doc/src/sgml/config.sgml	1 Feb 2007 00:28:16 -	1.108
--- doc/src/sgml/config.sgml	6 Feb 2007 12:31:49 -
***
*** 2936,2941 
--- 2936,2965 
    </listitem>
   </varlistentry>
 
+  <varlistentry id="guc-log-lock-waits" xreflabel="log_lock_waits">
+   <term><varname>log_lock_waits</varname> (<type>boolean</type>)</term>
+   <indexterm>
+    <primary><varname>log_lock_waits</varname> configuration parameter</primary>
+   </indexterm>
+   <listitem>
+    <para>
+     Controls whether log messages are produced when a statement is forced
+     to wait when trying to acquire locks on database objects.  The threshold
+     time is the value of the <xref linkend="guc-deadlock-timeout"> parameter.
+     The log messages generated are intended for use during specific
+     investigations into application performance issues and subsequent tuning.
+     It is designed for use in conjunction with <varname>log_min_error_statement</>.
+     Log messages indicating long lock waits might indicate problems with
+     applications accessing the database or possibly disconnection issues.
+     If no such problems exist, they might indicate that <varname>deadlock_timeout</>
+     could be set higher.  Log messages might also indicate that certain
+     deadlocks have been avoided.  In those cases, decreasing the value of
+     <varname>deadlock_timeout</> might resolve lock wait situations faster,
+     thereby reducing contention.  By default, this form of logging is <literal>off</>.
+    </para>
+   </listitem>
+  </varlistentry>
+ 
  </variablelist>
 </sect2>
</sect1>
Index: src/backend/storage/lmgr/deadlock.c
===
RCS file: /projects/cvsroot/pgsql/src/backend/storage/lmgr/deadlock.c,v
retrieving revision 

Re: [HACKERS] Deadlock with pg_dump?

2007-02-13 Thread Bruce Momjian
Simon Riggs wrote:
 On Thu, 2006-10-26 at 18:45 -0400, Tom Lane wrote:
  Chris Campbell [EMAIL PROTECTED] writes:
   Is there additional logging information I can turn on to get more  
   details? I guess I need to see exactly what locks both processes  
   hold, and what queries they were running when the deadlock occurred?  
   Is that easily done, without turning on logging for *all* statements?
  
  log_min_error_statement = error would at least get you the statements
  reporting the deadlocks, though not what they're conflicting against.
 
 Yeh, we need a much better locking logger for performance analysis.
 
 We really need to dump the whole wait-for graph for deadlocks, since
 this might be more complex than just two statements involved. Deadlocks
 ought to be so infrequent that we can afford the log space to do this -
 plus if we did this it would likely lead to fewer deadlocks.
 
 For 8.3 I'd like to have a log_min_duration_lockwait (secs) parameter
 that would allow you to dump the wait-for graph for any data-level locks
 that wait too long, rather than just those that deadlock. Many
 applications experience heavy locking because of lack of holistic
 design. That will also show up the need for other utilities to act
 CONCURRENTLY, if possible.

Old email, but I don't see how our current output is not good enough?

test= lock a;
ERROR:  deadlock detected
DETAIL:  Process 6855 waits for AccessExclusiveLock on relation 16394 of
database 16384; blocked by process 6795.
Process 6795 waits for AccessExclusiveLock on relation 16396 of database
16384; blocked by process 6855.
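This report only appears once a deadlock has actually been detected; the log_lock_waits behaviour discussed upthread targets the other case, a wait that has not (yet) errored out. A rough sketch of that shape in plain Python (names, message text, and the timeout value are stand-ins, not PostgreSQL internals):

```python
# Sketch: if a lock is not acquired within deadlock_timeout, emit a log
# line while the statement is still waiting, then go back to waiting.
import threading
import time

DEADLOCK_TIMEOUT = 0.1          # plays the role of deadlock_timeout (s)
log_lines = []

lock = threading.Lock()
lock.acquire()                  # another "backend" already holds the lock

def waiter():
    # First attempt times out: report the wait before the statement
    # completes, since in the worst case it may never complete at all.
    if not lock.acquire(timeout=DEADLOCK_TIMEOUT):
        log_lines.append("LOG: process still waiting for lock after 100 ms")
        lock.acquire()          # go back to waiting indefinitely
    lock.release()

t = threading.Thread(target=waiter)
t.start()
time.sleep(0.3)                 # let the timeout fire first
lock.release()                  # the holder finally releases the lock
t.join()
assert log_lines == ["LOG: process still waiting for lock after 100 ms"]
```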

-- 
  Bruce Momjian  [EMAIL PROTECTED]  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Deadlock with pg_dump?

2007-02-13 Thread Bruce Momjian
Tom Lane wrote:
 Albe Laurenz [EMAIL PROTECTED] writes:
  [ Memo to hackers: why is it that log_min_error_statement = error
  isn't the default? ]
 
  To avoid spamming the logs with every failed SQL statement?
 
 Certainly there are people who will turn it off, but that's why it's
 configurable.  I've had to answer how do I find out what's causing
 error message FOO often enough that I'm starting to think logging error
 statements is a more useful default than not logging 'em ...

Are we ready to set 'log_min_error_statement = error' by default for
8.3?

-- 
  Bruce Momjian  [EMAIL PROTECTED]  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Deadlock with pg_dump?

2007-02-13 Thread Tom Lane
Bruce Momjian [EMAIL PROTECTED] writes:
 Are we ready to set 'log_min_error_statement = error' by default for
 8.3?

We already did that in 8.2.

regards, tom lane



Re: [HACKERS] Deadlock with pg_dump?

2007-02-13 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  Are we ready to set 'log_min_error_statement = error' by default for
  8.3?
 
 We already did that in 8.2.

Oh, interesting.  Oops again.

-- 
  Bruce Momjian  [EMAIL PROTECTED]  http://momjian.us
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] Deadlock with pg_dump?

2006-10-30 Thread Simon Riggs
On Thu, 2006-10-26 at 18:45 -0400, Tom Lane wrote:
 Chris Campbell [EMAIL PROTECTED] writes:
  Is there additional logging information I can turn on to get more  
  details? I guess I need to see exactly what locks both processes  
  hold, and what queries they were running when the deadlock occurred?  
  Is that easily done, without turning on logging for *all* statements?
 
 log_min_error_statement = error would at least get you the statements
 reporting the deadlocks, though not what they're conflicting against.

Yeh, we need a much better locking logger for performance analysis.

We really need to dump the whole wait-for graph for deadlocks, since
this might be more complex than just two statements involved. Deadlocks
ought to be so infrequent that we can afford the log space to do this -
plus if we did this it would likely lead to fewer deadlocks.

For 8.3 I'd like to have a log_min_duration_lockwait (secs) parameter
that would allow you to dump the wait-for graph for any data-level locks
that wait too long, rather than just those that deadlock. Many
applications experience heavy locking because of lack of holistic
design. That will also show up the need for other utilities to act
CONCURRENTLY, if possible.
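The case for dumping the whole wait-for graph is that a deadlock can involve more than two processes, so reporting a single edge hides the rest of the cycle. A sketch of the cycle search (illustrative Python; the PIDs and graph shape are hypothetical):

```python
# Sketch of deadlock detection as cycle search in a wait-for graph,
# where waits_for maps each waiting pid to the pid blocking it.
def find_cycle(waits_for):
    """Return one cycle of pids from the wait-for graph, or None."""
    for start in waits_for:
        seen, pid = [], start
        while pid in waits_for:
            if pid in seen:
                return seen[seen.index(pid):]   # the full cycle, not one edge
            seen.append(pid)
            pid = waits_for[pid]
    return None

# Three-way deadlock: 6855 -> 6795 -> 7001 -> 6855.
graph = {6855: 6795, 6795: 7001, 7001: 6855}
cycle = find_cycle(graph)
assert cycle is not None and set(cycle) == {6855, 6795, 7001}
```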

 [ Memo to hackers: why is it that log_min_error_statement = error
 isn't the default? ]

For production systems where we expect fewer ERRORs, each one is
important, so this would be a good default.

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com





Re: [HACKERS] Deadlock with pg_dump?

2006-10-27 Thread Albe Laurenz
 [ Memo to hackers: why is it that log_min_error_statement = error
 isn't the default? ]

To avoid spamming the logs with every failed SQL statement?

Yours,
Laurenz Albe



Re: [HACKERS] Deadlock with pg_dump?

2006-10-27 Thread Csaba Nagy
On Fri, 2006-10-27 at 09:23, Albe Laurenz wrote:
  [ Memo to hackers: why is it that log_min_error_statement = error
  isn't the default? ]
 
 To avoid spamming the logs with every failed SQL statement?

And it would hurt applications where query failure is treated as a
valid code path (such as inserting first and falling back to an update
on failure)...

Cheers,
Csaba.




Re: [HACKERS] Deadlock with pg_dump?

2006-10-27 Thread Andrew Dunstan
Csaba Nagy wrote:
 On Fri, 2006-10-27 at 09:23, Albe Laurenz wrote:
  [ Memo to hackers: why is it that log_min_error_statement = error
  isn't the default? ]

 To avoid spamming the logs with every failed SQL statement?

 And it would be hurting applications where query failure is taken as a
 valid path (as inserting first and update if failing)...


Both of these are arguments in favor of being able to alter the level,
which would still exist. They are not good arguments for not having error
as the default level.

cheers

andrew




Re: [HACKERS] Deadlock with pg_dump?

2006-10-27 Thread Tom Lane
Albe Laurenz [EMAIL PROTECTED] writes:
 [ Memo to hackers: why is it that log_min_error_statement = error
 isn't the default? ]

 To avoid spamming the logs with every failed SQL statement?

Certainly there are people who will turn it off, but that's why it's
configurable.  I've had to answer "how do I find out what's causing
error message FOO" often enough that I'm starting to think logging error
statements is a more useful default than not logging 'em ...

regards, tom lane



Re: [HACKERS] Deadlock with pg_dump?

2006-10-26 Thread Tom Lane
Chris Campbell [EMAIL PROTECTED] writes:
 ERROR: deadlock detected
 DETAIL: Process 1120 waits for ShareLock on transaction 5847116;  
 blocked by process 1171.
 Process 1171 waits for ExclusiveLock on tuple (6549,28) of relation  
 37637 of database 37574; blocked by process 1120.

 Relation 37636 is the users table (schema attached).

 Process 1120 was running an UPDATE query and changing a single row in  
 the users table.

And what was 1171 doing?  I really doubt that either of these could have
been pg_dump.

Given that you appear to be running 8.1 (tut-tut for not saying), it
really shouldn't be a foreign key problem either.  I'm betting these
are just flat out conflicting updates of the same row(s).
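
For illustration only (the table and rows here are hypothetical, not taken
from Chris's schema), the classic shape of such a deadlock is two sessions
updating the same two rows in opposite order:

```sql
-- Session 1                            -- Session 2
BEGIN;                                  BEGIN;
UPDATE users SET name = 'a'             UPDATE users SET name = 'b'
    WHERE id = 1;                           WHERE id = 2;
-- now each one grabs the row the other already locked:
UPDATE users SET name = 'a'             UPDATE users SET name = 'b'
    WHERE id = 2;  -- blocks                WHERE id = 1;  -- deadlock detected
```

One of the two transactions is aborted with a "deadlock detected" ERROR of
the kind quoted above; the other is then free to proceed.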

regards, tom lane



Re: [HACKERS] Deadlock with pg_dump?

2006-10-26 Thread Chris Campbell

On Oct 26, 2006, at 17:21, Tom Lane wrote:

And what was 1171 doing?  I really doubt that either of these could
have been pg_dump.


I know that process 1120 is a Java client (Hibernate) running an  
UPDATE query, but I have no idea what 1171 is. I doubt that 1171 was  
pg_dump, but when we turn off the pg_dump cron jobs (for 12-ish  
hours), the deadlocks go away. We usually see 5 or 6 deadlocks spread  
throughout the day. That's not definitive evidence, of course, but  
it's certainly curious.



Given that you appear to be running 8.1 (tut-tut for not saying), it
really shouldn't be a foreign key problem either.  I'm betting these
are just flat out conflicting updates of the same row(s).


Yeah, 8.1.3. Sorry about the omission.

Is there additional logging information I can turn on to get more  
details? I guess I need to see exactly what locks both processes  
hold, and what queries they were running when the deadlock occurred?  
Is that easily done, without turning on logging for *all* statements?
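
While one backend is blocked on another (including during the
deadlock_timeout window before a deadlock is reported), the lock state can
be inspected from a third session. A rough sketch against the 8.1-era
catalogs (column names such as procpid and current_query were renamed in
later releases):

```sql
-- Which backends hold or await which locks, and what is each one running?
SELECT l.pid, l.locktype, l.relation::regclass AS relation,
       l.mode, l.granted, a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
ORDER BY l.granted, l.pid;
```

Rows with granted = false are the waiters; the granted rows on the same
relation or transaction identify who is blocking them.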


Thanks!

- Chris




Re: [HACKERS] Deadlock with pg_dump?

2006-10-26 Thread Jim C. Nasby
On Thu, Oct 26, 2006 at 06:11:59PM -0400, Chris Campbell wrote:
 On Oct 26, 2006, at 17:21, Tom Lane wrote:
 
 And what was 1171 doing?  I really doubt that either of these could
 have been pg_dump.
 
 I know that process 1120 is a Java client (Hibernate) running an  
 UPDATE query, but I have no idea what 1171 is. I doubt that 1171 was  
 pg_dump, but when we turn off the pg_dump cron jobs (for 12-ish  
 hours), the deadlocks go away. We usually see 5 or 6 deadlocks spread  
 throughout the day. That's not definitive evidence, of course, but  
 it's certainly curious.

I seem to remember something funny about hibernate and locking, though I
can't recall any details right now... but searching the archives might
provide insight.
-- 
Jim Nasby   [EMAIL PROTECTED]
EnterpriseDB   http://enterprisedb.com   512.569.9461 (cell)



Re: [HACKERS] Deadlock with pg_dump?

2006-10-26 Thread Tom Lane
Chris Campbell [EMAIL PROTECTED] writes:
 Is there additional logging information I can turn on to get more  
 details? I guess I need to see exactly what locks both processes  
 hold, and what queries they were running when the deadlock occurred?  
 Is that easily done, without turning on logging for *all* statements?

log_min_error_statement = error would at least get you the statements
reporting the deadlocks, though not what they're conflicting against.
Still, if you're seeing multiple occurrences per day, that would
probably let you build up a good picture of all the involved operations
over a couple of days.
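
In postgresql.conf terms the suggestion is simply the following (the panic
default discussed downthread means the feature was effectively off out of
the box):

```
# Log the statement text of any query that raises an ERROR or worse.
log_min_error_statement = error
```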

[ Memo to hackers: why is it that log_min_error_statement = error
isn't the default? ]

Unless your applications are issuing actual LOCK TABLE commands, it's
really hard to see how pg_dump could be involved.  It doesn't take
anything stronger than AccessShareLock, and shouldn't be holding any
tuple-level locks at all.

regards, tom lane



Re: [HACKERS] Deadlock with pg_dump?

2006-10-26 Thread Alvaro Herrera
Tom Lane wrote:

 [ Memo to hackers: why is it that log_min_error_statement = error
 isn't the default? ]

I think it defaults to panic because that's the way to disable the
feature, which was the easiest sell when the feature was introduced.
I'm all for lowering it to error.

-- 
Alvaro Herrera   http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [HACKERS] Deadlock with pg_dump?

2006-10-26 Thread Chris Campbell

On Oct 26, 2006, at 18:45, Tom Lane wrote:


log_min_error_statement = error would at least get you the statements
reporting the deadlocks, though not what they're conflicting against.


Would it be possible (in 8.3, say) to log the conflicting backend's  
current statement (from pg_stat_activity, perhaps)? I guess the  
conflicting backend would currently be waiting for a lock, so its  
current query (before releasing the lock) is the one we want.


Thanks!

- Chris

