Re: [HACKERS] Preventing duplicate vacuums?

2004-02-09 Thread Robert Treat
On Sat, 2004-02-07 at 02:07, Tom Lane wrote:
 Robert Treat [EMAIL PROTECTED] writes:
  Don't know if I would agree for sure, but if the second vacuum could see
  that it is being blocked by the current vacuum, exiting out would be a
  bonus, since in most scenarios you don't need to run that second vacuum,
  so it just ends up wasting resources (or clogging other things up with
  its lock)
 
 This would be reasonable if we could do it, but the present lock manager
 doesn't provide any way to tell what sort of lock is blocking you.
 There are some cases in which it isn't obvious anyway.  For instance,
 suppose an ALTER TABLE (which wants an exclusive lock) is queued up
 waiting for the currently-running VACUUM.  An incoming new VACUUM
 request will queue behind the ALTER.  Which lock would you say is
 blocking it ... and does an honest answer to that question jibe with
 your preference about whether the second VACUUM should give up?
 

ISTM that both sides have trouble, since you could just as easily have a
vacuum queued up behind an ALTER when your second vacuum comes in...

 A chintzy way out would be for VACUUM to just exit if it can't
 immediately acquire lock, regardless of the cause.  This wouldn't be
 too useful for VACUUM FULL, but plain VACUUM is not blocked by very many
 common operations other than another VACUUM, so most of the time it
 would do what you want.  I could possibly be talked into supporting an
 option to do that.
 

This seems pretty useful to me. I thought about doing things like
setting statement_timeout to some low number, but that would generally
cause the vacuum itself to time out as well.  Looking through
postgresql.conf, nothing else seems to apply... ISTR people asking for
a general lock_timeout param that would cancel queries if they wait on
a lock longer than x milliseconds... this seems like very similar
functionality...
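[For illustration, the difference in a psql session: statement_timeout is a real parameter, while the lock_timeout setting shown commented out is the hypothetical one being asked for here; the table name is made up.]

```sql
-- statement_timeout cancels the whole statement after N ms, so a
-- long-running but unblocked VACUUM gets cancelled too:
SET statement_timeout = '1000';
VACUUM mytable;   -- cancelled after 1s even if it never waited on a lock

-- the lock_timeout param people have asked for would fire only while
-- *waiting* for a lock, which is the behavior wanted here:
-- SET lock_timeout = '1000';   -- hypothetical at the time of this thread
```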

Robert Treat
-- 
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [HACKERS] Preventing duplicate vacuums?

2004-02-07 Thread Tom Lane
Robert Treat [EMAIL PROTECTED] writes:
 Don't know if I would agree for sure, but if the second vacuum could see
 that it is being blocked by the current vacuum, exiting out would be a
 bonus, since in most scenarios you don't need to run that second vacuum,
 so it just ends up wasting resources (or clogging other things up with
 its lock)

This would be reasonable if we could do it, but the present lock manager
doesn't provide any way to tell what sort of lock is blocking you.
There are some cases in which it isn't obvious anyway.  For instance,
suppose an ALTER TABLE (which wants an exclusive lock) is queued up
waiting for the currently-running VACUUM.  An incoming new VACUUM
request will queue behind the ALTER.  Which lock would you say is
blocking it ... and does an honest answer to that question jibe with
your preference about whether the second VACUUM should give up?

A chintzy way out would be for VACUUM to just exit if it can't
immediately acquire lock, regardless of the cause.  This wouldn't be
too useful for VACUUM FULL, but plain VACUUM is not blocked by very many
common operations other than another VACUUM, so most of the time it
would do what you want.  I could possibly be talked into supporting an
option to do that.
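[A wrapper script can approximate this behavior from user space with the advisory-lock functions that appeared in later PostgreSQL releases; a sketch, with a made-up table name.]

```sql
-- take a non-blocking advisory lock keyed on the table's oid before
-- vacuuming; if another session holds it, skip instead of queueing
SELECT pg_try_advisory_lock('mytable'::regclass::oid::bigint);
-- true  → proceed: VACUUM mytable; then pg_advisory_unlock() the same key
-- false → another wrapper is already vacuuming; exit quietly
```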

regards, tom lane



Re: [HACKERS] Preventing duplicate vacuums?

2004-02-06 Thread Robert Treat
On Thu, 2004-02-05 at 16:51, Josh Berkus wrote:
 Tom,
 
  Yes we do: there's a lock.
 
 Sorry, bad test.  Forget I said anything.
 
 Personally, I would like to have the 2nd vacuum error out instead of blocking.  
 However, I'll bet that a lot of people won't agree with me.
 

Don't know if I would agree for sure, but if the second vacuum could see
that it is being blocked by the current vacuum, exiting out would be a
bonus, since in most scenarios you don't need to run that second vacuum,
so it just ends up wasting resources (or clogging other things up with
its lock)

Robert Treat
-- 
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL





Re: [HACKERS] Preventing duplicate vacuums?

2004-02-06 Thread Thomas Swan
Robert Treat wrote:

 On Thu, 2004-02-05 at 16:51, Josh Berkus wrote:

  Tom,

   Yes we do: there's a lock.

  Sorry, bad test.  Forget I said anything.

  Personally, I would like to have the 2nd vacuum error out instead of
  blocking.  However, I'll bet that a lot of people won't agree with me.

 Don't know if I would agree for sure, but if the second vacuum could see
 that it is being blocked by the current vacuum, exiting out would be a
 bonus, since in most scenarios you don't need to run that second vacuum,
 so it just ends up wasting resources (or clogging other things up with
 its lock)

What about a situation where someone has lazy vacuums cron'd and it
takes longer to complete the vacuum than the interval between vacuums?
You could wind up with an ever-increasing queue of vacuums.

Erroring out with a vacuum already in progress might be useful.
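[The queue-of-vacuums problem can also be guarded against from the cron side. A minimal sketch using a Unix flock(2)-style lock file; the lock-file path and database name are made up.]

```python
import fcntl
import os

def try_lock(path):
    """Try to take an exclusive, non-blocking flock on path.

    Returns the fd holding the lock on success, or None if another
    holder (e.g. a still-running vacuum wrapper) already has it.
    """
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except OSError:
        # lock already held elsewhere
        os.close(fd)
        return None
    return fd

LOCKFILE = "/tmp/vacuum_mydb.lock"  # hypothetical per-database lock file

holder = try_lock(LOCKFILE)
if holder is None:
    print("vacuum already in progress; skipping this run")
else:
    print("no vacuum running; proceeding")
    # ... invoke vacuumdb here, then release:
    os.close(holder)
```

Wrapping the `vacuumdb` invocation this way makes an overlapping cron run exit immediately instead of piling up behind the table lock.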



Re: [HACKERS] Preventing duplicate vacuums?

2004-02-06 Thread Joshua D. Drake
 What about a situation where someone would have lazy vacuums cron'd and 
 it takes longer to complete the vacuum than the interval between 
 vacuums.  You could wind up with an ever increasing queue of vacuums.
 
 Erroring out with a vacuum already in progress might be useful.

I have seen this many times with customers as their traffic on the 
database grows. A simple check would be of great, great use.

Sincerely,

Joshua D. Drake



 
 
 

-- 
Co-Founder
Command Prompt, Inc.
The wheel's spinning but the hamster's dead




Re: [HACKERS] Preventing duplicate vacuums?

2004-02-05 Thread Rod Taylor
On Thu, 2004-02-05 at 15:37, Josh Berkus wrote:
 Folks,
 
 Just occurred to me that we have no code to prevent a user from running
 two simultaneous lazy vacuums on the same table.  I can't think of any
 circumstance in which running two vacuums would be desirable behavior;
 how difficult would it be to make this an exception?

You have an 8 billion row table with some very high-turnover tuples
(lots of updates to a few thousand rows). A partial or targeted vacuum
would be best; failing that, you kick them off fairly frequently,
especially if IO isn't really an issue.





Re: [HACKERS] Preventing duplicate vacuums?

2004-02-05 Thread Josh Berkus
Rod,

 You have an 8 billion row table with some very high-turnover tuples
 (lots of updates to a few thousand rows). A partial or targeted vacuum
 would be best; failing that, you kick them off fairly frequently,
 especially if IO isn't really an issue.

Yes, but we don't have partial or targeted vacuums yet.  When tablespaces is 
finished, presumably the lock would be per tablespace.

-- 
-Josh Berkus
 Aglio Database Solutions
 San Francisco




Re: [HACKERS] Preventing duplicate vacuums?

2004-02-05 Thread Josh Berkus
Tom,

 Yes we do: there's a lock.

Sorry, bad test.  Forget I said anything.

Personally, I would like to have the 2nd vacuum error out instead of blocking.  
However, I'll bet that a lot of people won't agree with me.

-- 
-Josh Berkus
 Aglio Database Solutions
 San Francisco

