Re: [HACKERS] We need to log aborted autovacuums

2011-02-07 Thread Robert Haas
On Sat, Feb 5, 2011 at 3:34 PM, Cédric Villemain
cedric.villemain.deb...@gmail.com wrote:
 Anyway, without a GUC is fine too, as it won't fill /var/log by itself!
 I am just not opposed to having new GUCs in those areas (log & debug).

OK.  Committed without a new GUC, at least for now.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] We need to log aborted autovacuums

2011-02-05 Thread Cédric Villemain
2011/2/4 Robert Haas robertmh...@gmail.com:
 On Sun, Jan 30, 2011 at 10:26 PM, Robert Haas robertmh...@gmail.com wrote:
 On Sun, Jan 30, 2011 at 10:03 PM, Alvaro Herrera
 alvhe...@commandprompt.com wrote:
 Excerpts from Robert Haas's message of Sun Jan 30 23:37:51 -0300 2011:

 Unless I'm missing something, making autovacuum.c call
 ConditionalLockRelationOid() is not going to work, because the vacuum
 transaction isn't started until we get all the way down to
 vacuum_rel().

 Maybe we need ConditionalLockRelationOidForSession or something like
 that?

 That'd be another way to go, if there are objections to what I've
 implemented here.

 Seeing as how there seem to be neither objections nor endorsements,
 I'm inclined to commit what I proposed


What do you implement exactly?
* The original request from Josh to get LOG when autovac cannot run
because of locks
* VACOPT_NOWAIT, what is it?


 more or less as-is.  There
 remains the issue of what to do about the log spam.  Josh Berkus
 suggested logging it when log_autovacuum_min_duration != -1, which
 seems like a bit of an abuse of that setting, but it's certainly not
 worth adding another setting for, and the alternative of logging it
 at, say, DEBUG2 seems unappealing because you'll then have to turn on
 logging for a lot of unrelated crap to get this information.  So on
 balance I think that proposal is perhaps the least of evils.

 --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company





-- 
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] We need to log aborted autovacuums

2011-02-05 Thread Robert Haas
On Sat, Feb 5, 2011 at 12:54 PM, Cédric Villemain
cedric.villemain.deb...@gmail.com wrote:
 what do you implement exactly ?
 * The original request from Josh to get LOG when autovac can not run
 because of locks
 * VACOPT_NOWAIT, what is it ?

What the patch implements is:

If autovacuum can't get the table lock immediately, it skips the
table.  This leaves the table still appearing to need autovacuuming,
so autovacuum will keep retrying (every 1 minute, or whatever you have
autovacuum_naptime set to) until it gets the lock.  This seems clearly
better than the historical behavior of blocking on the lock, which can
potentially keep other tables in the database from getting vacuumed.

In the case where a table is skipped for this reason, we log a message
at log level LOG.  The version of the patch I posted does that
unconditionally, but my intention was to change it before commit so
that it only logs the message if log_autovacuum_min_duration is
something other than -1.

Regular vacuum behaves as before - it waits for each table lock
individually.  We could expose a NOWAIT option at the SQL level as
well, so that someone could do VACUUM (NOWAIT), if that's something
people want.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] We need to log aborted autovacuums

2011-02-05 Thread Cédric Villemain
2011/2/5 Robert Haas robertmh...@gmail.com:
 On Sat, Feb 5, 2011 at 12:54 PM, Cédric Villemain
 cedric.villemain.deb...@gmail.com wrote:
 what do you implement exactly ?
 * The original request from Josh to get LOG when autovac can not run
 because of locks
 * VACOPT_NOWAIT, what is it ?

 What the patch implements is:

 If autovacuum can't get the table lock immediately, it skips the
 table.  This leaves the table still appearing to need autovacuuming,
 so autovacuum will keep retrying (every 1 minute, or whatever you have
 autovacuum_naptime set to) until it gets the lock.  This seems clearly
 better than the historical behavior of blocking on the lock, which can
 potentially keep other tables in the database from getting vacuumed.

great :)


 In the case where a table is skipped for this reason, we log a message
 at log level LOG.  The version of the patch I posted does that
 unconditionally, but my intention was to change it before commit so
 that it only logs the message if log_autovacuum_min_duration is
 something other than -1.

yes, looks better.  I also note that someone suggested not adding a GUC for that.
I am unsure about that, and adding a parameter for this does not hurt me at all:
reducing the number of parameters for memory/performance/... is fine
(well, it is very good when we can), but to improve the log output I
think it is OK to add more without much trouble.


 Regular vacuum behaves as before - it waits for each table lock
 individually.  We could expose a NOWAIT option at the SQL level as
 well, so that someone could do VACUUM (NOWAIT), if that's something
 people want.

Might be useful, yes.

-- 
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] We need to log aborted autovacuums

2011-02-05 Thread Robert Haas
On Sat, Feb 5, 2011 at 3:20 PM, Cédric Villemain
cedric.villemain.deb...@gmail.com wrote:
 In the case where a table is skipped for this reason, we log a message
 at log level LOG.  The version of the patch I posted does that
 unconditionally, but my intention was to change it before commit so
 that it only logs the message if log_autovacuum_min_duration is
 something other than -1.

 yes, looks better.  I also note that someone suggested not adding a GUC for that.

I think a GUC just to control this one message is overkill.  Chances
are that if you're logging some or all of your autovacuum runs you'll
want to have this too, or at least it won't be a major problem to get
it anyway.  If for some reason you want ONLY this message, you can
effectively get that behavior too, by setting
log_autovacuum_min_duration to an enormous value.  So I can't really
imagine why you'd need a separate knob just for this one thing.
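
As a concrete illustration of the trade-off Robert describes (the values are examples, and tying the skipped-lock message to this GUC is the proposal under discussion, not released behavior):

```ini
# postgresql.conf -- log_autovacuum_min_duration is in milliseconds

# Log every completed autovacuum action (and, under the proposal,
# every lock-skip message):
log_autovacuum_min_duration = 0

# Disable autovacuum logging entirely, including the new message:
log_autovacuum_min_duration = -1

# Effectively get ONLY the new message: keep the setting enabled,
# but with a duration threshold no run will ever reach:
log_autovacuum_min_duration = 2147483647
```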

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] We need to log aborted autovacuums

2011-02-05 Thread Cédric Villemain
2011/2/5 Robert Haas robertmh...@gmail.com:
 On Sat, Feb 5, 2011 at 3:20 PM, Cédric Villemain
 cedric.villemain.deb...@gmail.com wrote:
 In the case where a table is skipped for this reason, we log a message
 at log level LOG.  The version of the patch I posted does that
 unconditionally, but my intention was to change it before commit so
 that it only logs the message if log_autovacuum_min_duration is
 something other than -1.

 yes, looks better.  I also note that someone suggested not adding a GUC for that.

 I think a GUC just to control this one message is overkill.  Chances
 are that if you're logging some or all of your autovacuum runs you'll
 want to have this too, or at least it won't be a major problem to get
 it anyway.  If for some reason you want ONLY this message, you can
 effectively get that behavior too, by setting
 log_autovacuum_min_duration to an enormous value.  So I can't really
 imagine why you'd need a separate knob just for this one thing.

To not output it when I have log_autovacuum_min_duration = 0 but I know
that some tables have lock contention and may lose maintenance
for an hour or so.

Anyway, without a GUC is fine too, as it won't fill /var/log by itself!
I am just not opposed to having new GUCs in those areas (log & debug).

-- 
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] We need to log aborted autovacuums

2011-02-04 Thread Robert Haas
On Fri, Feb 4, 2011 at 12:59 AM, Josh Berkus j...@agliodbs.com wrote:
 Robert,

 Seeing as how there seem to be neither objections nor endorsements,
 I'm inclined to commit what I proposed more or less as-is.  There
 remains the issue of what to do about the log spam.  Josh Berkus
 suggested logging it when log_autovacuum_min_duration != -1, which
 seems like a bit of an abuse of that setting, but it's certainly not
 worth adding another setting for, and the alternative of logging it
 at, say, DEBUG2 seems unappealing because you'll then have to turn on
 logging for a lot of unrelated crap to get this information.  So on
 balance I think that proposal is perhaps the least of evils.

 Wait, which proposal is?

I meant to say that I think the least evil proposal is to log at LOG
when log_autovacuum_min_duration != -1.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] We need to log aborted autovacuums

2011-02-03 Thread Robert Haas
On Sun, Jan 30, 2011 at 10:26 PM, Robert Haas robertmh...@gmail.com wrote:
 On Sun, Jan 30, 2011 at 10:03 PM, Alvaro Herrera
 alvhe...@commandprompt.com wrote:
 Excerpts from Robert Haas's message of Sun Jan 30 23:37:51 -0300 2011:

 Unless I'm missing something, making autovacuum.c call
 ConditionalLockRelationOid() is not going to work, because the vacuum
 transaction isn't started until we get all the way down to
 vacuum_rel().

 Maybe we need ConditionalLockRelationOidForSession or something like
 that?

 That'd be another way to go, if there are objections to what I've
 implemented here.

Seeing as how there seem to be neither objections nor endorsements,
I'm inclined to commit what I proposed more or less as-is.  There
remains the issue of what to do about the log spam.  Josh Berkus
suggested logging it when log_autovacuum_min_duration != -1, which
seems like a bit of an abuse of that setting, but it's certainly not
worth adding another setting for, and the alternative of logging it
at, say, DEBUG2 seems unappealing because you'll then have to turn on
logging for a lot of unrelated crap to get this information.  So on
balance I think that proposal is perhaps the least of evils.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] We need to log aborted autovacuums

2011-01-30 Thread Robert Haas
On Mon, Jan 17, 2011 at 4:08 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Simon Riggs si...@2ndquadrant.com writes:
 On Mon, 2011-01-17 at 14:46 -0500, Tom Lane wrote:
 Do we actually need a lock timeout either?  The patch that was being
 discussed just involved failing if you couldn't get it immediately.
 I suspect that's sufficient for AV.  At least, nobody's made a
 compelling argument why we need to expend a very substantially larger
 amount of work to do something different.

 I have a patch to do this BUT the log message is harder. Until we grab
 the lock we can't confirm the table still exists, so we can't get the
 name for it. The whole point of this is logging the names of tables for
 which we have failed to AV. Suggestions?

 As I said before, the correct place to fix this is in autovacuum.c,
 which has the table name near at hand.  If you insist on fixing it
 somewhere else it's going to be very invasive.

Unless I'm missing something, making autovacuum.c call
ConditionalLockRelationOid() is not going to work, because the vacuum
transaction isn't started until we get all the way down to
vacuum_rel().  But it seems not too hard to make vacuum_rel() and
analyze_rel() have the proper relation name - we just need to
manufacture a suitable RangeVar to pass via the VacuumStmt's
relation element.

See what you think of the attached...  I think logging this message at
LOG unconditionally is probably log spam of the first order, but if
the basic concept is sound we can argue over what to do about that.
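
The RangeVar manufacture mentioned above might look roughly like this inside the autovacuum worker. This is a paraphrase sketched from the surrounding discussion, not the committed patch: makeRangeVar and get_namespace_name are real backend helpers, but the call site and the variable names (namespace_oid, relation_name) are abbreviated stand-ins here.

```c
/* Sketch: while autovacuum still knows the table's name, attach a
 * RangeVar to the VacuumStmt so that vacuum_rel()/analyze_rel(),
 * which otherwise see only the OID, can name the table in log
 * messages even if the relation vanishes before they lock it. */
vacstmt.relation = makeRangeVar(get_namespace_name(namespace_oid),
                                pstrdup(relation_name),
                                -1);    /* -1 = unknown parse location */
```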

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index 7bc5f11..618051a 100644
--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -148,7 +148,18 @@ analyze_rel(Oid relid, VacuumStmt *vacstmt,
 	 * matter if we ever try to accumulate stats on dead tuples.) If the rel
 	 * has been dropped since we last saw it, we don't need to process it.
 	 */
-	onerel = try_relation_open(relid, ShareUpdateExclusiveLock);
+	if (!(vacstmt->options & VACOPT_NOWAIT))
+		onerel = try_relation_open(relid, ShareUpdateExclusiveLock);
+	else if (ConditionalLockRelationOid(relid, ShareUpdateExclusiveLock))
+		onerel = try_relation_open(relid, NoLock);
+	else
+	{
+		onerel = NULL;
+		ereport(LOG,
+				(errcode(ERRCODE_LOCK_NOT_AVAILABLE),
+				 errmsg("skipping analyze of \"%s\" --- lock not available",
+						vacstmt->relation->relname)));
+	}
 	if (!onerel)
 		return;
 
diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index 5663711..7ac073b 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -61,7 +61,7 @@ static BufferAccessStrategy vac_strategy;
 /* non-export function prototypes */
 static List *get_rel_oids(Oid relid, const RangeVar *vacrel);
 static void vac_truncate_clog(TransactionId frozenXID);
-static void vacuum_rel(Oid relid, VacuumStmt *vacstmt, bool do_toast,
+static bool vacuum_rel(Oid relid, VacuumStmt *vacstmt, bool do_toast,
 		   bool for_wraparound, bool *scanned_all);
 
 
@@ -226,8 +226,11 @@ vacuum(VacuumStmt *vacstmt, Oid relid, bool do_toast,
 			bool		scanned_all = false;
 
 			if (vacstmt->options & VACOPT_VACUUM)
-				vacuum_rel(relid, vacstmt, do_toast, for_wraparound,
-						   &scanned_all);
+			{
+				if (!vacuum_rel(relid, vacstmt, do_toast, for_wraparound,
+								&scanned_all))
+					continue;
+			}
 
 			if (vacstmt->options & VACOPT_ANALYZE)
 			{
@@ -764,7 +767,7 @@ vac_truncate_clog(TransactionId frozenXID)
  *
  *		At entry and exit, we are not inside a transaction.
  */
-static void
+static bool
 vacuum_rel(Oid relid, VacuumStmt *vacstmt, bool do_toast, bool for_wraparound,
 		   bool *scanned_all)
 {
@@ -835,14 +838,28 @@ vacuum_rel(Oid relid, VacuumStmt *vacstmt, bool do_toast, bool for_wraparound,
 	 *
 	 * There's a race condition here: the rel may have gone away since the
 	 * last time we saw it.  If so, we don't need to vacuum it.
+	 *
+	 * If we've been asked not to wait for the relation lock, acquire it
+	 * first in non-blocking mode, before calling try_relation_open().
 	 */
-	onerel = try_relation_open(relid, lmode);
+	if (!(vacstmt->options & VACOPT_NOWAIT))
+		onerel = try_relation_open(relid, lmode);
+	else if (ConditionalLockRelationOid(relid, lmode))
+		onerel = try_relation_open(relid, NoLock);
+	else
+	{
+		onerel = NULL;
+		ereport(LOG,
+				(errcode(ERRCODE_LOCK_NOT_AVAILABLE),
+				 errmsg("skipping vacuum of \"%s\" --- lock not available",
+						vacstmt->relation->relname)));
+	}
 
 	if (!onerel)
 	{
 		PopActiveSnapshot();
 		CommitTransactionCommand();
-		return;
+		return false;
 	}
 
 	/*
@@ -873,7 +890,7 @@ vacuum_rel(Oid relid, VacuumStmt *vacstmt, bool do_toast, bool for_wraparound,
 		relation_close(onerel, lmode);
 		PopActiveSnapshot();
 		CommitTransactionCommand();
-		return;
+		return false;
 	}
 
 	/*
@@ -890,7 +907,7 @@ vacuum_rel(Oid relid, VacuumStmt *vacstmt, bool do_toast, bool 

Re: [HACKERS] We need to log aborted autovacuums

2011-01-30 Thread Alvaro Herrera
Excerpts from Robert Haas's message of Sun Jan 30 23:37:51 -0300 2011:

 Unless I'm missing something, making autovacuum.c call
 ConditionalLockRelationOid() is not going to work, because the vacuum
 transaction isn't started until we get all the way down to
 vacuum_rel().

Maybe we need ConditionalLockRelationOidForSession or something like
that?

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] We need to log aborted autovacuums

2011-01-30 Thread Robert Haas
On Sun, Jan 30, 2011 at 10:03 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
 Excerpts from Robert Haas's message of Sun Jan 30 23:37:51 -0300 2011:

 Unless I'm missing something, making autovacuum.c call
 ConditionalLockRelationOid() is not going to work, because the vacuum
 transaction isn't started until we get all the way down to
 vacuum_rel().

 Maybe we need ConditionalLockRelationOidForSession or something like
 that?

That'd be another way to go, if there are objections to what I've
implemented here.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] We need to log aborted autovacuums

2011-01-17 Thread Tom Lane
Josh Berkus j...@agliodbs.com writes:
 However, we'd want a separate lock timeout for autovac, of course.  I'm
 not at all keen on a *statement* timeout on autovacuum; as long as
 autovacuum is doing work, I don't want to cancel it.  Also, WTF would we
 set it to?

Yeah --- in the presence of vacuum cost delay, in particular, a
statement timeout seems about useless for AV.

 Going the statement timeout route seems like a way to create a LOT of
 extra work, troubleshooting, getting it wrong, and releasing patch
 updates.  Please let's just create a lock timeout.

Do we actually need a lock timeout either?  The patch that was being
discussed just involved failing if you couldn't get it immediately.
I suspect that's sufficient for AV.  At least, nobody's made a
compelling argument why we need to expend a very substantially larger
amount of work to do something different.

regards, tom lane



Re: [HACKERS] We need to log aborted autovacuums

2011-01-17 Thread Simon Riggs
On Mon, 2011-01-17 at 14:46 -0500, Tom Lane wrote:
 Josh Berkus j...@agliodbs.com writes:
  However, we'd want a separate lock timeout for autovac, of course.  I'm
  not at all keen on a *statement* timeout on autovacuum; as long as
  autovacuum is doing work, I don't want to cancel it.  Also, WTF would we
  set it to?
 
 Yeah --- in the presence of vacuum cost delay, in particular, a
 statement timeout seems about useless for AV.
 
  Going the statement timeout route seems like a way to create a LOT of
  extra work, troubleshooting, getting it wrong, and releasing patch
  updates.  Please let's just create a lock timeout.
 
 Do we actually need a lock timeout either?  The patch that was being
 discussed just involved failing if you couldn't get it immediately.
 I suspect that's sufficient for AV.  At least, nobody's made a
 compelling argument why we need to expend a very substantially larger
 amount of work to do something different.

I have a patch to do this BUT the log message is harder. Until we grab
the lock we can't confirm the table still exists, so we can't get the
name for it. The whole point of this is logging the names of tables for
which we have failed to AV. Suggestions?

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 




Re: [HACKERS] We need to log aborted autovacuums

2011-01-17 Thread Tom Lane
Simon Riggs si...@2ndquadrant.com writes:
 On Mon, 2011-01-17 at 14:46 -0500, Tom Lane wrote:
 Do we actually need a lock timeout either?  The patch that was being
 discussed just involved failing if you couldn't get it immediately.
 I suspect that's sufficient for AV.  At least, nobody's made a
 compelling argument why we need to expend a very substantially larger
 amount of work to do something different.

 I have a patch to do this BUT the log message is harder. Until we grab
 the lock we can't confirm the table still exists, so we can't get the
 name for it. The whole point of this is logging the names of tables for
 which we have failed to AV. Suggestions?

As I said before, the correct place to fix this is in autovacuum.c,
which has the table name near at hand.  If you insist on fixing it
somewhere else it's going to be very invasive.

regards, tom lane



Re: [HACKERS] We need to log aborted autovacuums

2011-01-17 Thread Josh Berkus
On 1/17/11 11:46 AM, Tom Lane wrote:
 Do we actually need a lock timeout either?  The patch that was being
 discussed just involved failing if you couldn't get it immediately.
 I suspect that's sufficient for AV.  At least, nobody's made a
 compelling argument why we need to expend a very substantially larger
 amount of work to do something different.

The argument is that a sufficiently busy table might never get
autovacuumed *at all*, whereas a small lock wait would allow autovacuum
to block incoming transactions and start work.

However, it's hard for me to imagine a real-world situation where a
table would be under repeated full-table-locks from multiple
connections.  Can anyone else?

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



Re: [HACKERS] We need to log aborted autovacuums

2011-01-17 Thread Robert Haas
On Mon, Jan 17, 2011 at 8:26 PM, Josh Berkus j...@agliodbs.com wrote:
 On 1/17/11 11:46 AM, Tom Lane wrote:
 Do we actually need a lock timeout either?  The patch that was being
 discussed just involved failing if you couldn't get it immediately.
 I suspect that's sufficient for AV.  At least, nobody's made a
 compelling argument why we need to expend a very substantially larger
 amount of work to do something different.

 The argument is that a sufficiently busy table might never get
 autovacuumed *at all*, whereas a small lock wait would allow autovacuum
 to block incoming transactions and start work.

 However, it's hard for me to imagine a real-world situation where a
 table would be under repeated full-table-locks from multiple
 connections.  Can anyone else?

I'm not convinced we need a lock timeout for autovacuum.  I think it'd
be useful to expose on a user-level, but that's a different can of
worms.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] We need to log aborted autovacuums

2011-01-17 Thread Tom Lane
Josh Berkus j...@agliodbs.com writes:
 On 1/17/11 11:46 AM, Tom Lane wrote:
 Do we actually need a lock timeout either?  The patch that was being
 discussed just involved failing if you couldn't get it immediately.
 I suspect that's sufficient for AV.  At least, nobody's made a
 compelling argument why we need to expend a very substantially larger
 amount of work to do something different.

 The argument is that a sufficiently busy table might never get
 autovacuumed *at all*, whereas a small lock wait would allow autovacuum
 to block incoming transactions and start work.

 However, it's hard for me to imagine a real-world situation where a
 table would be under repeated full-table-locks from multiple
 connections.  Can anyone else?

If that is happening, autovacuum is screwed anyway.  Even if it manages
to acquire the lock, it will get kicked off by the next lock request
before it can finish vacuuming the table.  So I don't see an argument
here for adding all the mechanism required to support that.

regards, tom lane



Re: [HACKERS] We need to log aborted autovacuums

2011-01-17 Thread Peter Eisentraut
On Mon, 2011-01-17 at 17:26 -0800, Josh Berkus wrote:
 However, it's hard for me to imagine a real-world situation where a
 table would be under repeated full-table-locks from multiple
 connections.  Can anyone else?

If you want to do assertion-type checks at the end of transactions in
the current fake-serializable mode, you need to lock the table.




Re: [HACKERS] We need to log aborted autovacuums

2011-01-16 Thread Greg Smith

Robert Haas wrote:

 On Sat, Jan 15, 2011 at 11:14 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Greg Smith g...@2ndquadrant.com writes:
 Does try_relation_open need to have a lock acquisition timeout when AV
 is calling it?

 Hmm.  I think when looking at the AV code, I've always subconsciously
 assumed that try_relation_open would fail immediately if it couldn't get
 the lock.  That certainly seems like it would be a more appropriate way
 to behave than delaying indefinitely.

 I'm confused how that's not happening already.  What does "try" mean, otherwise?


Apparently "try" means: acquire the requested lock on the OID of the
relation, waiting for any amount of time for that part, and then check
whether the relation really exists once the lock is held.  In this
context, it means it will try to open the relation, but might fail if
the relation doesn't actually exist anymore.  The relation not existing
by the time that post-lock check runs is the only way it will return a
failure.


try_relation_open calls LockRelationOid, which blocks.  There is also a 
ConditionalLockRelationOid, which does the same basic thing except it 
exits immediately, with a false return code, if it can't acquire the 
lock.  I think we just need to nail down in what existing cases it's 
acceptable to have try_relation_open use ConditionalLockRelationOid 
instead, which would make it do what all of us reading the code thought 
it did all along.


There are four callers of try_relation_open to be concerned about here:

src/backend/commands/vacuum.c: vacuum_rel
    onerel = try_relation_open(relid, lmode);

src/backend/commands/analyze.c: analyze_rel
    onerel = try_relation_open(relid, ShareUpdateExclusiveLock);

src/backend/commands/cluster.c: cluster_rel
    OldHeap = try_relation_open(tableOid, AccessExclusiveLock);

src/backend/commands/lockcmds.c: LockTableRecurse
    /* Apply LOCK TABLE recursively over an inheritance tree */
    rel = try_relation_open(reloid, NoLock);

I think that both the vacuum_rel and analyze_rel cases are capable of 
figuring out if they are the autovacuum process, and if so calling the 
fast non-blocking version of this.  I wouldn't want to mess with the 
other two, which rely upon the current behavior as far as I can see.


Probably took me longer to write this e-mail than the patch will take.  
Since I've got trivial patch fever this weekend and already have the 
test case, I'll do this one next.


--
Greg Smith    2ndQuadrant US    g...@2ndquadrant.com    Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
PostgreSQL 9.0 High Performance: http://www.2ndQuadrant.com/books



Re: [HACKERS] We need to log aborted autovacuums

2011-01-16 Thread Tom Lane
Greg Smith g...@2ndquadrant.com writes:
 try_relation_open calls LockRelationOid, which blocks.  There is also a 
 ConditionalLockRelationOid, which does the same basic thing except it 
 exits immediately, with a false return code, if it can't acquire the 
 lock.  I think we just need to nail down in what existing cases it's 
 acceptable to have try_relation_open use ConditionalLockRelationOid 
 instead, which would make it do what all us reading the code thought it 
 did all along.

No, I don't believe we should be messing with the semantics of
try_relation_open.  It is what it is.

The right way to fix this is similar to what LockTableRecurse does,
ie call ConditionalLockRelationOid itself.  I tried changing vacuum_rel
that way yesterday, but the idea crashed when I realized that vacuum_rel
doesn't have the name of the target relation, only its OID, so it can't
log any very useful message ... and according to the original point of
this thread, we're surely going to want an elog(LOG) when we can't get
the lock.

I think the best thing to do is probably to have autovacuum.c do the
ConditionalLockRelationOid call before entering vacuum.c (making the
later acquisition of the lock by try_relation_open redundant, but it
will be cheap enough to not matter).  But I haven't chased the details.

regards, tom lane



Re: [HACKERS] We need to log aborted autovacuums

2011-01-16 Thread Simon Riggs
On Sun, 2011-01-16 at 11:47 -0500, Tom Lane wrote:
 Greg Smith g...@2ndquadrant.com writes:
  try_relation_open calls LockRelationOid, which blocks.  There is also a 
  ConditionalLockRelationOid, which does the same basic thing except it 
  exits immediately, with a false return code, if it can't acquire the 
  lock.  I think we just need to nail down in what existing cases it's 
  acceptable to have try_relation_open use ConditionalLockRelationOid 
  instead, which would make it do what all us reading the code thought it 
  did all along.
 
 No, I don't believe we should be messing with the semantics of
 try_relation_open.  It is what it is.
 
 The right way to fix this is similar to what LockTableRecurse does,
 ie call ConditionalLockRelationOid itself.  I tried changing vacuum_rel
 that way yesterday, but the idea crashed when I realized that vacuum_rel
 doesn't have the name of the target relation, only its OID, so it can't
 log any very useful message ... and according to the original point of
 this thread, we're surely going to want an elog(LOG) when we can't get
 the lock.
 
 I think the best thing to do is probably to have autovacuum.c do the
 ConditionalLockRelationOid call before entering vacuum.c (making the
 later acquisition of the lock by try_relation_open redundant, but it
 will be cheap enough to not matter).  But I haven't chased the details.

I'm fairly confused by this thread.

We *do* emit a message when we cancel an autovacuum task. We went to a
lot of trouble to do that. The message is DEBUG2, and says
"sending cancel to blocking autovacuum PID %d".

We just need to make that LOG, not DEBUG2.

The autovacuum itself then says "canceling autovacuum task" when
canceled. It doesn't say what table the autovacuum was running on when
cancelled, but that seems like an easy thing to add since the AV does
know that.

I can't see any reason to differentiate between manually canceled AVs
and automatically canceled AVs. It's all the same thing.

Not really sure what it is you're talking about above or how that
relates to log messages for AV.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 




Re: [HACKERS] We need to log aborted autovacuums

2011-01-16 Thread Tom Lane
Simon Riggs si...@2ndquadrant.com writes:
 I'm fairly confused by this thread.

 We *do* emit a message when we cancel an autovacuum task. We went to a
 lot of trouble to do that. The message is DEBUG2, and says
 sending cancel to blocking autovacuum pid =.

That doesn't necessarily match one-to-one with actual cancellations,
nor does it cover the case Greg is on about at the moment of an AV
worker being blocked indefinitely because it can't get the table
lock in the first place.

It might be an adequate substitute, but on the whole I agree with
the idea that it'd be better to have autovacuum log when it actually
cancels an operation, not when someone tries to cancel one.

regards, tom lane



Re: [HACKERS] We need to log aborted autovacuums

2011-01-16 Thread Greg Smith

Simon Riggs wrote:

I'm fairly confused by this thread.
  


That's because you think it has something to do with cancellation, which 
it doesn't.  The original report here noted a real problem but got the 
theorized cause wrong.  It turns out the code that acquires a lock when 
autovacuum decides it is going to process something will wait forever 
for that lock to be obtained.  It cannot be the case that other locks on 
the table are causing it to cancel, or as you say it would be visible in 
the logs.  Instead the AV worker will just queue up and wait for its 
turn as long as it takes.


That does mean there's all sorts of ways that your AV workers can all 
get stuck where they are waiting for a table, and there's no way to know 
when that's happening either from the logs; you'll only see it in ps or 
pg_stat_activity.  Given that I think it's actually a mild denial of 
service attack vector, this really needs an improvement.
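
An untested monitoring sketch along these lines will surface a worker stuck 
in the lock queue from SQL (assuming 8.4/9.0-era column names procpid and 
current_query, which were later renamed pid and query):

```sql
-- Sketch: list autovacuum workers that are waiting on an ungranted lock.
SELECT a.procpid,
       now() - a.query_start AS waiting_for,
       l.relation::regclass AS blocked_on,
       l.mode
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE NOT l.granted
  AND a.current_query LIKE 'autovacuum:%';
```

That at least makes the stuck-worker state pollable by a Nagios-style check 
rather than visible only in ps output.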


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
PostgreSQL 9.0 High Performance: http://www.2ndQuadrant.com/books




Re: [HACKERS] We need to log aborted autovacuums

2011-01-16 Thread Tom Lane
Greg Smith g...@2ndquadrant.com writes:
 Simon Riggs wrote:
 I'm fairly confused by this thread.

 That's because you think it has something to do with cancellation, which 
 it doesn't.  The original report here noted a real problem but got the 
 theorized cause wrong.

I think that cancellations are also a potentially important issue that
could do with being logged.  The issue that I see is that if an
application has a use-pattern that involves obtaining exclusive lock on
a large table fairly frequently, then AV will never be able to complete
on that table, leading to bloat and eventual XID wraparound issues.
Once we get scared about XID wraparound, AV will refuse to let itself
be canceled, so it will prevent the wraparound ... at the cost of
denying service to the application code path that wants the lock.
So this is the sort of thing that it'd be good to have some bleating
about in the log, giving the DBA a chance to rethink the app's locking
behavior before the consequences get nasty.

But, having said that, false alarms in the log are not nice.  So I'd
rather have the LOG messages coming out from actual transaction aborts
in AV, not from the remote end of the signal.

regards, tom lane



Re: [HACKERS] We need to log aborted autovacuums

2011-01-16 Thread Greg Smith

Tom Lane wrote:

No, I don't believe we should be messing with the semantics of
try_relation_open.  It is what it is.
  


With only four pretty simple callers to the thing, and two of them 
needing the alternate behavior, it seemed a reasonable place to modify 
to me.  I thought the nowait boolean idea was in enough places that it 
was reasonable to attach to try_relation_open.


Attached patch solves the wait for lock forever problem, and 
introduces a new log message when AV or auto-analyze fail to obtain a 
lock on something that needs to be cleaned up:


DEBUG:  autovacuum: processing database gsmith
INFO:  autovacuum skipping relation 65563 --- cannot open or obtain lock
INFO:  autoanalyze skipping relation 65563 --- cannot open or obtain lock

My main concern is that this may cause AV to constantly fail to get 
access to a busy table, where in the current code it would queue up and 
eventually get the lock needed.  A secondary issue is that while the 
autovacuum messages only show up if you have log_autovacuum_min_duration 
set to something other than -1, the autoanalyze ones can't be stopped.


If you don't like the way I structured the code, you can certainly do it 
some other way instead.  I thought this approach was really simple and 
not unlike similar code elsewhere.


Here's the test case that worked for me here again:

psql
SHOW log_autovacuum_min_duration;
DROP TABLE t;
CREATE TABLE t(s serial,i integer);
INSERT INTO t(i) SELECT generate_series(1,10);
SELECT relname,last_autovacuum,autovacuum_count FROM pg_stat_user_tables 
WHERE relname='t';

DELETE FROM t WHERE s<5;
\q
psql
BEGIN;
LOCK t;

Leave that open, then go to another session with tail -f running on the 
logs to wait for the errors to show up.


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
PostgreSQL 9.0 High Performance: http://www.2ndQuadrant.com/books

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 25d9fde..4193fff 100644
*** a/src/backend/access/heap/heapam.c
--- b/src/backend/access/heap/heapam.c
*** relation_open(Oid relationId, LOCKMODE l
*** 918,936 
   *
   *		Same as relation_open, except return NULL instead of failing
   *		if the relation does not exist.
   * 
   */
  Relation
! try_relation_open(Oid relationId, LOCKMODE lockmode)
  {
  	Relation	r;
! 
  	Assert(lockmode >= NoLock && lockmode < MAX_LOCKMODES);
  
  	/* Get the lock first */
  	if (lockmode != NoLock)
! 		LockRelationOid(relationId, lockmode);
! 
  	/*
  	 * Now that we have the lock, probe to see if the relation really exists
  	 * or not.
--- 918,951 
   *
   *		Same as relation_open, except return NULL instead of failing
   *		if the relation does not exist.
+  *
+  *		If called with nowait enabled, this will immediately exit
+  *		if a lock is requested and it can't be acquired.  The
+  *		return code in this case doesn't distinguish between this
+  *		situation and the one where the relation was locked, but
+  *		doesn't exist.  Callers using nowait must not care that
+  *		they won't be able to tell the difference.
   * 
   */
  Relation
! try_relation_open(Oid relationId, LOCKMODE lockmode, bool nowait)
  {
  	Relation	r;
! 	bool		locked;
  	Assert(lockmode >= NoLock && lockmode < MAX_LOCKMODES);
  
  	/* Get the lock first */
  	if (lockmode != NoLock)
! 		{
! 		if (nowait)
! 			{
! 			locked=ConditionalLockRelationOid(relationId, lockmode);
! 			if (!locked)
! return NULL;	
! 			}
! 		else
! 			LockRelationOid(relationId, lockmode);
! 		}
  	/*
  	 * Now that we have the lock, probe to see if the relation really exists
  	 * or not.
diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index 7bc5f11..24bfb16 100644
*** a/src/backend/commands/analyze.c
--- b/src/backend/commands/analyze.c
*** analyze_rel(Oid relid, VacuumStmt *vacst
*** 147,156 
  	 * concurrent VACUUM, which doesn't matter much at the moment but might
  	 * matter if we ever try to accumulate stats on dead tuples.) If the rel
  	 * has been dropped since we last saw it, we don't need to process it.
  	 */
! 	onerel = try_relation_open(relid, ShareUpdateExclusiveLock);
  	if (!onerel)
! 		return;
  
  	/*
  	 * Check permissions --- this should match vacuum's check!
--- 147,168 
  	 * concurrent VACUUM, which doesn't matter much at the moment but might
  	 * matter if we ever try to accumulate stats on dead tuples.) If the rel
  	 * has been dropped since we last saw it, we don't need to process it.
+ 	 *
+ 	 * If this is a manual analyze, opening will wait forever to acquire
+ 	 * the requested lock on the relation.  Autovacuum will just give up
+ 	 * immediately if it can't get the lock.  This prevents a series of locked
+ 	 * relations from potentially hanging all of the AV workers waiting
+ 	 * for locks.
  	 */
! 	onerel = try_relation_open(relid, 

Re: [HACKERS] We need to log aborted autovacuums

2011-01-16 Thread Tom Lane
Greg Smith g...@2ndquadrant.com writes:
 Tom Lane wrote:
 No, I don't believe we should be messing with the semantics of
 try_relation_open.  It is what it is.

 With only four pretty simple callers to the thing, and two of them 
 needing the alternate behavior, it seemed a reasonable place to modify 
 to me.  I thought the nowait boolean idea was in enough places that it 
 was reasonable to attach to try_relation_open.

I would be willing to do that if it actually fixed the problem, but it
doesn't.  Logging only the table OID and not the table name is entirely
inadequate IMO, so the fix has to be at a different place anyway.

regards, tom lane



Re: [HACKERS] We need to log aborted autovacuums

2011-01-16 Thread Simon Riggs
On Sun, 2011-01-16 at 13:08 -0500, Greg Smith wrote:
 Simon Riggs wrote:
  I'm fairly confused by this thread.

 
  That's because you think it has something to do with cancellation, which 
 it doesn't.  The original report here noted a real problem but got the 
 theorized cause wrong.  It turns out the code that acquires a lock when 
 autovacuum decides it is going to process something will wait forever 
 for that lock to be obtained.  It cannot be the case that other locks on 
 the table are causing it to cancel, or as you say it would be visible in 
 the logs.  Instead the AV worker will just queue up and wait for its 
 turn as long as it takes.

OK, thanks for explaining.

So my proposed solution is to set a statement_timeout on autovacuum
tasks, so that the AV does eventually get canceled, if the above
mentioned case occurs. We can scale that timeout to the size of the
table.

Now every issue is a cancelation and we can handle it the way I
suggested in my last post.

I would prefer it if we had a settable lock timeout, as suggested many
moons ago. When that was discussed before it was said there was no
difference between a statement timeout and a lock timeout, but I think
there clearly is, this case being just one example.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 




Re: [HACKERS] We need to log aborted autovacuums

2011-01-16 Thread Josh Berkus
On 1/16/11 11:19 AM, Simon Riggs wrote:
 I would prefer it if we had a settable lock timeout, as suggested many
 moons ago. When that was discussed before it was said there was no
 difference between a statement timeout and a lock timeout, but I think
 there clearly is, this case being just one example.

Whatever happened to lock timeouts, anyway?  We even had some patches
floating around for 9.0 and they disappeared.

However, we'd want a separate lock timeout for autovac, of course.  I'm
not at all keen on a *statement* timeout on autovacuum; as long as
autovacuum is doing work, I don't want to cancel it.  Also, WTF would we
set it to?

Going the statement timeout route seems like a way to create a LOT of
extra work, troubleshooting, getting it wrong, and releasing patch
updates.  Please let's just create a lock timeout.

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



Re: [HACKERS] We need to log aborted autovacuums

2011-01-16 Thread Simon Riggs
On Sun, 2011-01-16 at 12:50 -0800, Josh Berkus wrote:
 On 1/16/11 11:19 AM, Simon Riggs wrote:
  I would prefer it if we had a settable lock timeout, as suggested many
  moons ago. When that was discussed before it was said there was no
  difference between a statement timeout and a lock timeout, but I think
  there clearly is, this case being just one example.
 
 Whatever happened to lock timeouts, anyway?  We even had some patches
 floating around for 9.0 and they disappeared.
 
 However, we'd want a separate lock timeout for autovac, of course.  I'm
 not at all keen on a *statement* timeout on autovacuum; as long as
 autovacuum is doing work, I don't want to cancel it.  

 Also, WTF would we
 set it to?

 Going the statement timeout route seems like a way to create a LOT of
 extra work, troubleshooting, getting it wrong, and releasing patch
 updates.  Please let's just create a lock timeout.

I agree with you, but if we want it *this* release, on top of all the
other features we have queued, then I suggest we compromise. If you hold
out for more feature, you may get less.

Statement timeout = 2 * (100ms + autovacuum_vacuum_cost_delay) *
tablesize/(autovacuum_vacuum_cost_limit / vacuum_cost_page_dirty)
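
For a rough sense of scale, here is that formula worked through in SQL (a 
sketch: it assumes the default settings autovacuum_vacuum_cost_delay = 20ms, 
vacuum_cost_limit = 200, and vacuum_cost_page_dirty = 20, and reads 
"tablesize" as a page count):

```sql
-- Worked example of the proposed timeout for a 1-million-page (~8 GB) table.
SELECT 2 * (0.100 + 0.020)   -- 2 * (100ms + autovacuum_vacuum_cost_delay)
         * 1000000           -- table size in pages
         / (200 / 20)        -- cost_limit / vacuum_cost_page_dirty
       AS timeout_seconds;   -- 24000 seconds, roughly 6.7 hours
```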

-- 
 Simon Riggs   http://www.2ndQuadrant.com/books/
 PostgreSQL Development, 24x7 Support, Training and Services
 




Re: [HACKERS] We need to log aborted autovacuums

2011-01-16 Thread Robert Haas
On Sun, Jan 16, 2011 at 8:36 PM, Simon Riggs si...@2ndquadrant.com wrote:
 I agree with you, but if we want it *this* release, on top of all the
 other features we have queued, then I suggest we compromise. If you hold
 out for more feature, you may get less.

 Statement timeout = 2 * (100ms + autovacuum_vacuum_cost_delay) *
 tablesize/(autovacuum_vacuum_cost_limit / vacuum_cost_page_dirty)

I'm inclined to think that would be a very dangerous compromise.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] We need to log aborted autovacuums

2011-01-15 Thread Greg Smith

Josh Berkus wrote:

The lack of vacuum could be occurring for any of 4 reasons:

1) Locking
2) You have a lot of tables and not enough autovac_workers / too much
sleep time
3) You need to autovac this particular table more frequently, since it
gets dirtied really fast
4) The table has been set with special autovac settings which keep it
from being autovac'd

We can currently distinguish between cased 2, 3, 4 based on existing
available facts.  However, distinguishing case 1 from 2 or 3, in
particular, isn't currently possible except by methods which require
collecting a lot of ad-hoc monitoring data over a period of time.  This
makes the effort required for the diagnosis completely out of proportion
with the magnitude of the problem.
  


Since no one coded up something more exciting, I just did a round of 
self-review of the little logging patch I sent upthread to see if it was 
a worthy CF candidate.  I agree that *something* should be done, and I'd 
rather have a logging-based solution than nothing at all here.


So that patch is, to use a polite UK term I've adopted recently, 
rubbish.  But unless I'm confused (it is late), I think there's a 
problem here that's much worse than what you described--in the case 
where someone has grabbed a really large lock on a table that needs 
cleanup at least.  I started with the following test case:


SHOW log_autovacuum_min_duration;
DROP TABLE t;
CREATE TABLE t(s serial,i integer);
INSERT INTO t(i) SELECT generate_series(1,10);
SELECT relname,last_autovacuum,autovacuum_count FROM pg_stat_user_tables 
WHERE relname='t';

DELETE FROM t WHERE s<5;
\q

Which, so long as the min duration is 0, does mostly what I expected.  I 
followed immediately by starting a new session, doing:


BEGIN;
LOCK t;

Before autovacuum got to the table; I left this session open, stuck 
there idle in a transaction holding a lock.  I threw some extra logging 
in the code path leading up to where we'd both like the lack of lock 
acquisition to be logged, and it shows the following (at DEBUG3):


DEBUG:  t: vac: 4 (threshold 50), anl: 14 (threshold 50)
DEBUG:  autovacuum: t: vac needed
INFO:  vacuum_rel:  processing 16528
INFO:  vacuum_rel:  trying to open relation 16528

All but the first one of those lines are not in the current code, and as 
noted already that one existing line is at DEBUG3.  This was trying to 
simulate the case you were complaining about:  autovacuum has been 
triggered, it knows there is work to be done, and it can't do said work.


It hasn't triggered the error message I tried to add yet through, 
because it's not hitting that spot.  Look where it's actually blocked at:


$ ps -eaf | grep postgres
gsmith5490  4880  0 04:09 ?00:00:00 postgres: autovacuum 
worker process   gsmith waiting  

$ psql -c "select procpid,now() - query_start as runtime,current_query 
from pg_stat_activity where current_query like 'autovacuum%'"
procpid | runtime |current_query   
-+-+-

   5490 | 00:11:35.813727 | autovacuum: VACUUM ANALYZE public.t

I haven't just failed to vacuum the table that needs it without any 
entry in the logs saying as much.  I've actually tied down an autovacuum 
worker and kept it from doing anything else until that lock is 
released!  When vacuum_rel inside of vacuum.c does this on a relation it 
wants to acquire a lock on:


   onerel = try_relation_open(relid, lmode);

It just hangs out there forever, if the only issue is not being able to 
get the lock it wants.  The abort code path I tried to insert logging 
into only occurs if the relation was deleted.  Watch what's happened 
while I've been typing:


procpid | runtime |current_query   
-+-+-

   5490 | 00:36:24.426528 | autovacuum: VACUUM ANALYZE public.t

I've now hit the AV worker with a denial of service attack, essentially, 
for over 35 minutes.  I'd bet that if I was a malicious user, I could 
make AV come to a halt altogether for the whole server with this 
approach, even with relatively limited rights on the database.


Does try_relation_open need to have a lock acquisition timeout when AV 
is calling it?  I don't see any other way to get a log mesage noting 
when this problem passed by in there, you can only see it live in these 
process view.  And the current behavior doesn't feel right to me.


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
PostgreSQL 9.0 High Performance: http://www.2ndQuadrant.com/books




Re: [HACKERS] We need to log aborted autovacuums

2011-01-15 Thread Tom Lane
Greg Smith g...@2ndquadrant.com writes:
 Does try_relation_open need to have a lock acquisition timeout when AV 
 is calling it?

Hmm.  I think when looking at the AV code, I've always subconsciously
assumed that try_relation_open would fail immediately if it couldn't get
the lock.  That certainly seems like it would be a more appropriate way
to behave than delaying indefinitely.

In practice, the scenario you're worried about seems unlikely to persist
indefinitely: as soon as someone else comes along and blocks behind
autovacuum's request, the deadlock checker will kick AV off the lock
queue.  But it would probably be better not to depend on that.
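
The scenario can be reproduced by hand as a sketch (using table t from 
Greg's earlier test case, with deadlock_timeout at its 1s default):

```sql
-- Session 1: take and hold the lock autovacuum wants.
BEGIN;
LOCK TABLE t;
-- The autovacuum worker now queues behind session 1.

-- Session 2: queue behind the autovacuum worker's pending request.
LOCK TABLE t;
-- After deadlock_timeout elapses, the deadlock checker notices the
-- blocked autovacuum ahead of session 2 and kicks it off the lock
-- queue, so session 2 is granted the lock once session 1 commits.
```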

regards, tom lane



Re: [HACKERS] We need to log aborted autovacuums

2011-01-15 Thread Robert Haas
On Sat, Jan 15, 2011 at 11:14 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Greg Smith g...@2ndquadrant.com writes:
 Does try_relation_open need to have a lock acquisition timeout when AV
 is calling it?

 Hmm.  I think when looking at the AV code, I've always subconsciously
 assumed that try_relation_open would fail immediately if it couldn't get
 the lock.  That certainly seems like it would be a more appropriate way
 to behave than delaying indefinitely.

I'm confused how that's not happening already. What does "try" mean, otherwise?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] We need to log aborted autovacuums

2011-01-08 Thread Dimitri Fontaine
David Fetter da...@fetter.org writes:
 On Fri, Jan 07, 2011 at 08:15:12PM -0500, Greg Smith wrote:
 [1] Silly aside:  I was thinking today that I should draw a chart of
 all the common objections to code that show up here, looking like
 those maps you see when walking into a mall.  Then we can give a
 copy to new submitters with a big "you are here" X marking where
 they have inadvertently wandered onto.

 Actually, that'd make a great flow chart on a T-shirt :)

Yeah, here's some more visual inspiration to get that:

  http://xkcd.com/844/

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] We need to log aborted autovacuums

2011-01-07 Thread Josh Berkus
Greg,

 It's already possible to detect the main symptom--dead row percentage is
 much higher than the autovacuum threshold, but there's been no recent
 autovacuum.  That makes me less enthusiastic that there's such a genuine
 need to justify the overhead of storing more table stats just to detect
 the same thing a little more easily.

The problem is that while this gives you the symptoms, it doesn't give
you the cause.  The lack of vacuum could be occurring for any of 4 reasons:

1) Locking
2) You have a lot of tables and not enough autovac_workers / too much
sleep time
3) You need to autovac this particular table more frequently, since it
gets dirtied really fast
4) The table has been set with special autovac settings which keep it
from being autovac'd

We can currently distinguish between cased 2, 3, 4 based on existing
available facts.  However, distinguishing case 1 from 2 or 3, in
particular, isn't currently possible except by methods which require
collecting a lot of ad-hoc monitoring data over a period of time.  This
makes the effort required for the diagnosis completely out of proportion
with the magnitude of the problem.
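
At least case 4 can be ruled out directly from SQL; a sketch (reloptions 
is a text array on pg_class):

```sql
-- Find tables carrying per-table autovacuum storage parameters (case 4).
SELECT relname, reloptions
FROM pg_class
WHERE reloptions IS NOT NULL
  AND array_to_string(reloptions, ',') LIKE '%autovacuum%';
```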

It occurs to me that another way of diagnosis would simply be a way to
cause the autovac daemon to spit out output we could camp on, *without*
requiring the huge volumes of output also required for DEBUG3.  This
brings us back to the logging idea again.

 We could argue both sides of the trade-off of tracking this directly in
 stats for some time, and I'd never expect there to be a clear victory
 for either perspective.  I've run into this vacuum problem a few times,
but certainly less than I've run into "why is the stats table so huge?"

I really don't think that argument applies to either patch;
last_autovac_attempt *or* the last_stats_reset time, since neither event
is expected to occur frequently.  If you have last_autovac_attempt (for
example) being updated frequently, you clearly had a db problem bigger
than the size of the stats table.

Given that our road ahead necessarily includes adding more and more
monitoring and admin data to PostgreSQL (because our DBA users demand
it), I think that we should give some thought to the issue of storing
DBA stats in general.  By DBA stats I mean statistics which aren't used
in query planning, but are used in monitoring, trending, and
troubleshooting.  I'm thinking these ought to have their own relation or
relations.


-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



Re: [HACKERS] We need to log aborted autovacuums

2011-01-07 Thread Greg Smith

Josh Berkus wrote:

It occurs to me that another way of diagnosis would simply be a way to
cause the autovac daemon to spit out output we could camp on, *without*
requiring the huge volumes of output also required for DEBUG3.  This
brings us back to the logging idea again.
  


Right.  I don't know how much you looked at the little patch I threw out 
there yet, but I think it's a reasonable 90% solution to the problem of 
how do I distinguish between a locking block and other reasons for AV 
not running? without generating a huge amount of log data.  Since 
there's no real overhead or code complexity added by it, it's a 
relatively easy thing to throw in the codebase without annoying anyone.



I really don't think that argument applies to either patch;
last_autovac_attempt *or* the last_stats_reset time, since neither event
is expected to occur frequently.  If you have last_autovac_attempt (for
example) being updated frequently, you clearly had a db problem bigger
than the size of the stats table.
  


It just returns to the usual "why make everyone pay overhead for 
something that only happens to a small percentage of users?" 
argument.[1]  I personally have all sorts of additional data I'd like to 
instrument in myriad obtrusive spots of code.  All I'm saying is that 
this one in particular I don't quite feel strongly enough about to make 
it the spot I try to establish that foothold at.  If a stats tracking 
patch for this appeared that seemed reasonably well written (i.e. it 
tries to keep the added value NULL whenever it's not relevant), I'd be 
in favor of it going in.  I'm just not so excited about the idea that 
I'm going to write it.  I'm lukewarm to per-table last_stats_reset time 
just because there's so few places I'd actually use it in practice, 
relative to the guaranteed overhead it adds.


[1] Silly aside:  I was thinking today that I should draw a chart of all 
the common objections to code that show up here, looking like those maps 
you see when walking into a mall.  Then we can give a copy to new 
submitters with a big "you are here" X marking where they have 
inadvertently wandered onto.


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
PostgreSQL 9.0 High Performance: http://www.2ndQuadrant.com/books




Re: [HACKERS] We need to log aborted autovacuums

2011-01-07 Thread David Fetter
On Fri, Jan 07, 2011 at 08:15:12PM -0500, Greg Smith wrote:
 [1] Silly aside:  I was thinking today that I should draw a chart of
 all the common objections to code that show up here, looking like
 those maps you see when walking into a mall.  Then we can give a
 copy to new submitters with a big "you are here" X marking where
 they have inadvertently wandered onto.

Actually, that'd make a great flow chart on a T-shirt :)

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



Re: [HACKERS] We need to log aborted autovacuums

2011-01-06 Thread Greg Smith

Robert Treat wrote:

This is a great use case for user level tracing support. Add a probe
around these bits, and you can capture the information when you need
it.
  


Sure.  I would also like a pony.

--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services and Supportwww.2ndQuadrant.us
PostgreSQL 9.0 High Performance: http://www.2ndQuadrant.com/books




Re: [HACKERS] We need to log aborted autovacuums

2011-01-06 Thread Greg Smith

Josh Berkus wrote:

Or should it perhaps be a per-table counter in pg_stat_user_tables,
given your statement above?



Or even a timestamp: last_autovacuum_attempt, which would record the
last time autovacuum was tried.  If that's fairly recent and you have a
large number of dead rows, you know what kind of problem you have and
can turn on debug.
  


These are both reasonable ideas.  But there was just some kickback on 
Tomas's "keeping timestamp of the last stats reset" patch recently, 
from the perspective of trying to limit per-table stats bloat.  I think 
it's relatively easy to make a case that this situation is difficult 
enough to diagnose that a little bit of extra low-level logging is 
worthwhile.  That Josh and I have both been bit by it enough to be 
thinking about patches to make it easier to diagnose suggests it's 
obviously too hard to nail down.  But is this so common and difficult to 
recognize that it's worth making all the table stats bigger?  That's a 
harder call. 

It's already possible to detect the main symptom--dead row percentage is 
much higher than the autovacuum threshold, but there's been no recent 
autovacuum.  That makes me less enthusiastic that there's such a genuine 
need to justify the overhead of storing more table stats just to detect 
the same thing a little more easily.  I've been playing with the Munin 
PG plug-in more recently, and I was just thinking of adding a dead row 
trend graph/threshold to it to address this general area instead.
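
That symptom check might look like the following sketch (it assumes the 
default autovacuum_vacuum_threshold of 50 and autovacuum_vacuum_scale_factor 
of 0.2, and ignores per-table overrides):

```sql
-- Tables over the default autovacuum trigger that have not been
-- autovacuumed in the last day.
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
WHERE n_dead_tup > 50 + 0.2 * n_live_tup
  AND (last_autovacuum IS NULL
       OR last_autovacuum < now() - interval '1 day');
```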


We could argue both sides of the trade-off of tracking this directly in 
stats for some time, and I'd never expect there to be a clear victory 
for either perspective.  I've run into this vacuum problem a few times, 
but certainly less than I've run into "why is the stats table so huge?"


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services and Supportwww.2ndQuadrant.us
PostgreSQL 9.0 High Performance: http://www.2ndQuadrant.com/books



Re: [HACKERS] We need to log aborted autovacuums

2011-01-05 Thread Magnus Hagander
On Wed, Jan 5, 2011 at 07:55, Greg Smith g...@2ndquadrant.com wrote:

snip

 a bit of work in userland, I don't see this even being justified as an INFO
 or LOG level message.  Anytime I can script a SQL-level monitor for
 something that's easy to tie into Nagios or something, I greatly prefer that
 to log file scraping for it anyway.

+INF-1

Log files can be very useful for details, but they suck for noticing
the pattern in the first place :-)

snip

 verbosity of similar failure mode tests that follow it.  Right now failure
 to acquire a lock is just not considered a log-worthy issue, and I agree
 that it's worth considering whether it should be.

Or should it perhaps be a per-table counter in pg_stat_user_tables,
given your statement above?

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] We need to log aborted autovacuums

2011-01-05 Thread Josh Berkus

 If you could gather more info on whether this logging catches the
 problem cases you're seeing, that would really be the right test for the
 patch's usefulness.  I'd give you solid 50/50 odds that you've correctly
 diagnosed the issue, and knowing for sure would make advocating for this
 logging a pretty easy sell to me at least.

Well, I already resolved the issue through polling pg_locks.  The issue
was IIT sessions which lasted up to an hour, which we fixed in the
application, so I don't have the test case available anymore.  I'd have
to generate a synthetic test case, and since I agree that a SQL-callable
diagnostic is superior to logging, I don't think we should pursue the
log levels further.

 Or should it perhaps be a per-table counter in pg_stat_user_tables,
 given your statement above?

Or even a timestamp: last_autovacuum_attempt, which would record the
last time autovacuum was tried.  If that's fairly recent and you have a
large number of dead rows, you know what kind of problem you have and
can turn on debug.

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



Re: [HACKERS] We need to log aborted autovacuums

2011-01-05 Thread Robert Treat
On Wed, Jan 5, 2011 at 2:27 PM, Josh Berkus j...@agliodbs.com wrote:

 If you could gather more info on whether this logging catches the
 problem cases you're seeing, that would really be the right test for the
 patch's usefulness.  I'd give you solid 50/50 odds that you've correctly
 diagnosed the issue, and knowing for sure would make advocating for this
 logging a pretty easy sell to me at least.

 Well, I already resolved the issue through polling pg_locks.  The issue
 was IIT sessions which lasted up to an hour, which we fixed in the
 application, so I don't have the test case available anymore.  I'd have
 to generate a synthetic test case, and since I agree that a SQL-callable
 diagnostic is superior to logging, I don't think we should pursue the
 log levels further.


This is a great use case for user level tracing support. Add a probe
around these bits, and you can capture the information when you need
it.

Robert Treat
http://www.xzilla.net

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] We need to log aborted autovacuums

2011-01-05 Thread Josh Berkus

 This is a great use case for user level tracing support. Add a probe
 around these bits, and you can capture the information when you need
 it.

Yeah, would be lovely if user-level tracing existed on all platforms.

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



Re: [HACKERS] We need to log aborted autovacuums

2011-01-04 Thread Greg Smith

Josh Berkus wrote:

I've been trying to diagnose in a production database why certain tables
never get autovacuumed despite having a substantial % of updates.  The
obvious reason is locks blocking autovacuum from vacuuming the table ...
  


Missed this discussion when it popped up but have plenty to say about it 
now.  What I do here is look for such anomalies using 
pg_stat_user_tables, checking whether the dead rows number has exceeded 
the vacuum threshold.  That comparison is logged in the code at DEBUG3:


    elog(DEBUG3, "%s: vac: %.0f (threshold %.0f), anl: %.0f (threshold %.0f)",
         NameStr(classForm->relname),
         vactuples, vacthresh, anltuples, anlthresh);

But a rough computation isn't too hard to derive in a report, so long as 
you haven't customized per-table parameters.  As you suggested in your 
message, the really bad cases here stick out a whole lot.  If you pay 
the slightest amount of attention to the dead row percentages they jump 
right out at you.  This all works easily on any version back to 8.3.  
Not having as much relevant data stored in pg_stat_user_tables makes the 
problem cases less obvious to spot in older versions.
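For reference, the rough computation amounts to the following, sketched 
here under the default settings (autovacuum_vacuum_threshold = 50, 
autovacuum_vacuum_scale_factor = 0.2); per-table storage parameters would 
change the two constants:

```python
def needs_autovacuum(dead_tuples, reltuples,
                     vac_base_thresh=50, vac_scale_factor=0.2):
    """Mirror autovacuum's vacuum-trigger test under default settings:
    vacthresh = autovacuum_vacuum_threshold
                + autovacuum_vacuum_scale_factor * reltuples."""
    vacthresh = vac_base_thresh + vac_scale_factor * reltuples
    return dead_tuples > vacthresh, vacthresh

# A 100,000-row table needs more than 20,050 dead tuples before
# autovacuum will pick it up under the defaults.
print(needs_autovacuum(25000, 100000))   # (True, 20050.0)
```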


If I start seeing these badly maintained tables and suspect locking is 
getting in the way, I then dump traces from pg_locks+pg_stat_activity 
often enough that I can estimate how often someone has an interfering 
lock and what they're doing.
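A snapshot query along these lines does the job; "some_table" is a 
placeholder, and the pg_stat_activity column names vary by release (older 
versions spell them procpid and current_query):

```sql
-- Who holds or is waiting on locks against a given table right now.
SELECT l.pid, l.mode, l.granted, a.query
FROM   pg_locks l
JOIN   pg_stat_activity a ON a.pid = l.pid
WHERE  l.relation = 'some_table'::regclass;
```

Dumping this on a cron schedule gives a crude estimate of how often an 
interfering lock is present.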


Should the log level on this message go up from DEBUG3?  I could see 
rewriting it so that it logs at DEBUG1 instead when 
Log_autovacuum_min_duration is set *and* when the trigger threshold is 
crossed, and at DEBUG3 the rest of the time.  Given you can derive this 
with a bit of work in userland, I don't see this even being justified as 
an INFO or LOG level message.  Anytime I can script a SQL-level monitor 
for something that's easy to tie into Nagios or something, I greatly 
prefer that to log file scraping for it anyway.



What I'd like to do is add some logging code to autovacuum.c so that if
log_autovacuum is any value other than -1, failure to vacuum due to
locks gets logged.   Does this make sense?
  


The general idea is interesting and probably more productive for the 
situation you theorize is happening than messing with the logging 
discussed above.  But that's not where the code needs to go--the lock 
isn't opened until much further down the function call stack.  Attached 
quickie and only tested for compilation patch probably does what you 
want here.  Since this would eliminate the messy follow-up step I 
sometimes have gone through, dumping pg_locks data to confirm or rule 
out locking issues messing with AV processing, I can see some potential 
that it may have simplified situations I've run into in the past.  And 
it's not out of line with the logging verbosity of similar failure mode 
tests that follow it.  Right now failure to acquire a lock is just not 
considered a log-worthy issue, and I agree that it's worth considering 
whether it should be. 

If you could gather more info on whether this logging catches the 
problem cases you're seeing, that would really be the right test for the 
patch's usefulness.  I'd give you solid 50/50 odds that you've correctly 
diagnosed the issue, and knowing for sure would make advocating for this 
logging a pretty easy sell to me at least.


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services and Supportwww.2ndQuadrant.us
PostgreSQL 9.0 High Performance: http://www.2ndQuadrant.com/books

diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index 2f68df4..2124e25 100644
*** a/src/backend/commands/vacuum.c
--- b/src/backend/commands/vacuum.c
*** vacuum_rel(Oid relid, VacuumStmt *vacstm
*** 851,856 
--- 851,864 
  	{
  		PopActiveSnapshot();
  		CommitTransactionCommand();
+ 
+ 		if (IsAutoVacuumWorkerProcess() && Log_autovacuum_min_duration >= 0)
+ 		{
+ 			ereport(INFO,
+ 					(errmsg("skipping \"%s\" --- cannot open or obtain lock",
+ 							RelationGetRelationName(onerel))));
+ 		}
+ 
  		return;
  	}
  



Re: [HACKERS] We need to log aborted autovacuums

2010-11-17 Thread Tom Lane
Josh Berkus j...@agliodbs.com writes:
 I've been trying to diagnose in a production database why certain tables
 never get autovacuumed despite having a substantial % of updates.  The
 obvious reason is locks blocking autovacuum from vacuuming the table ...
 but the trick is we don't log such blocking behavior, at all.  This
 means that there is no possible way for a user to figure out *why* his
 tables aren't getting autovacuumed.

 And yes, this is a common problem.  A quick survey on IRC found 3 active
 users on channel (out of 20 or so) who'd encountered it.  The current
 case I'm looking at is a table with over 70% bloat which hasn't been
 autovacuumed since the database was upgraded a month ago.

 What I'd like to do is add some logging code to autovacuum.c so that if
 log_autovacuum is any value other than -1, failure to vacuum due to
 locks gets logged.   Does this make sense?

It's hard to tell, because you're just handwaving about what it is you
think isn't being logged; nor is it clear whether you have any evidence
that locks are the problem.  Offhand I'd think it at least as likely
that autovacuum thinks it doesn't need to do anything, perhaps because
of a statistics issue.  There *is* an elog(DEBUG3) in autovacuum.c
that reports whether autovac thinks a table needs vacuumed/analyzed ...
maybe that needs to be a tad more user-accessible.

regards, tom lane



Re: [HACKERS] We need to log aborted autovacuums

2010-11-17 Thread Josh Berkus

 It's hard to tell, because you're just handwaving about what it is you
 think isn't being logged; nor is it clear whether you have any evidence
 that locks are the problem.  Offhand I'd think it at least as likely
 that autovacuum thinks it doesn't need to do anything, perhaps because
 of a statistics issue.  There *is* an elog(DEBUG3) in autovacuum.c
 that reports whether autovac thinks a table needs vacuumed/analyzed ...
 maybe that needs to be a tad more user-accessible.

Yeah, it would be really good to be able to log that without bumping the
log levels of the server in general to DEBUG3.  On a busy production
server, using any of the DEBUG levels is pretty much out of the question
... they can produce up to 1GB/minute in output.

-- 
  -- Josh Berkus
 PostgreSQL Experts Inc.
 http://www.pgexperts.com



Re: [HACKERS] We need to log aborted autovacuums

2010-11-17 Thread Tom Lane
Josh Berkus j...@agliodbs.com writes:
 There *is* an elog(DEBUG3) in autovacuum.c
 that reports whether autovac thinks a table needs vacuumed/analyzed ...
 maybe that needs to be a tad more user-accessible.

 Yeah, it would be really good to be able to log that without bumping the
 log levels of the server in general to DEBUG3.  On a busy production
 server, using any of the DEBUG levels is pretty much out of the question
 ... they can produce up to 1GB/minute in output.

Well, the way to deal with that would be to add a GUC that enables
reporting of those messages at LOG level.  But it's a bit hard to argue
that we need such a thing without more evidence.  Maybe you could just
locally modify the DEBUG3 to LOG and see whether it teaches you
anything?

regards, tom lane



Re: [HACKERS] We need to log aborted autovacuums

2010-11-17 Thread Itagaki Takahiro
On Thu, Nov 18, 2010 at 08:35, Tom Lane t...@sss.pgh.pa.us wrote:
 Yeah, it would be really good to be able to log that without bumping the
 log levels of the server in general to DEBUG3.

 Well, the way to deal with that would be to add a GUC that enables
 reporting of those messages at LOG level.  But it's a bit hard to argue
 that we need such a thing without more evidence.  Maybe you could just
 locally modify the DEBUG3 to LOG and see whether it teaches you
 anything?

How about adding a special role for autovacuum, and running autovacuum
by the role instead of the database owner? If we have autovacuum role
for autovacuum processes, we could set log_min_messages to DEBUG3 for
only the role with per-user configuration.
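For illustration, the per-role settings machinery would already support 
that kind of targeting if such a role existed; the "autovacuum" role name 
here is hypothetical:

```sql
-- Sketch of the proposal: raise log verbosity for only the role
-- that autovacuum workers would run as.
ALTER ROLE autovacuum SET log_min_messages = 'debug3';
```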

-- 
Itagaki Takahiro



Re: [HACKERS] We need to log aborted autovacuums

2010-11-17 Thread Tom Lane
Itagaki Takahiro itagaki.takah...@gmail.com writes:
 On Thu, Nov 18, 2010 at 08:35, Tom Lane t...@sss.pgh.pa.us wrote:
 Well, the way to deal with that would be to add a GUC that enables
 reporting of those messages at LOG level.  But it's a bit hard to argue
 that we need such a thing without more evidence.  Maybe you could just
 locally modify the DEBUG3 to LOG and see whether it teaches you
 anything?

 How about adding a special role for autovacuum, and running autovacuum
 by the role instead of the database owner? If we have autovacuum role
 for autovacuum processes, we could set log_min_messages to DEBUG3 for
 only the role with per-user configuration.

That seems like a major kluge ... and anyway it doesn't fix the problem,
because DEBUG3 is still going to result in a lot of unwanted log output,
even if it's confined to autovacuum.

regards, tom lane
