Re: [PERFORM] Cost-Based Vacuum Delay tuning

2007-12-08 Thread Alvaro Herrera
Guillaume Cottenceau wrote:

 I have noticed that others (Alvaro, Joshua) suggest to set
 vacuum_cost_delay as low as 10 or 20 ms,

My suggestion is to set it as *high* as 10 or 20 ms, compared to the
original default of 0 ms.  This is just because I'm lazy enough not to
have done any measuring of the exact consequences of such a setting, and
out of fear that a very high value could provoke some sort of disaster.

I must admit that changing vacuum_cost_limit isn't something that
I'm used to recommending.  Maybe it does make sense considering
readahead effects and the new ring buffer stuff.


-- 
Alvaro Herrera http://www.amazon.com/gp/registry/CTMLCN8V17R4
Experience tells us that man has peeled potatoes millions of times,
but it was necessary to admit the possibility that, in one case among millions,
the potatoes would peel the man (Ijon Tichy)



[PERFORM] Cost-Based Vacuum Delay tuning

2007-12-07 Thread Guillaume Cottenceau
Hi,

I'm currently trying to tune the Cost-Based Vacuum Delay on an
8.2.5 server. The aim is to reduce as much as possible the
performance impact of vacuums on application queries, with the
background idea of running autovacuum as much as possible[1].

My test involves vacuuming a large table and measuring, as the
vacuuming proceeds, the completion time of a rather long-running
application query (involving a table different from the one being
vacuumed) which cannot fit entirely in buffers, as well as the
completion time of the vacuum itself, since it must not become too
slow either.

I ran my tests with a few combinations of
vacuum_cost_delay/vacuum_cost_limit, while keeping the other
parameters set to the default from the 8.2.5 tarball:

vacuum_cost_page_hit = 1
vacuum_cost_page_miss = 10
vacuum_cost_page_dirty = 20

The completion time of the query is about 16 seconds in
isolation. With a vacuum in progress, here are the results:

  vacuum_cost_delay/vacuum_cost_limit  (deactivated)  20/200  40/200  100/1000  150/1000  200/1000  300/1000

  VACUUM ANALYZE time                       54 s       112 s   188 s     109 s     152 s     190 s     274 s
  SELECT time                               50 s        28 s    26 s      24 s      22 s      20 s      19 s
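
For illustration, a minimal sketch of how one of these combinations can
be applied per session for a manual run (the table name is hypothetical;
with vacuum_cost_page_miss = 10 and vacuum_cost_limit = 1000, roughly
every 100 page misses accumulate enough cost to trigger one 150 ms sleep):

  -- hypothetical session-level test of the 150/1000 combination
  SET vacuum_cost_delay = 150;      -- sleep, in ms, each time the cost limit is reached
  SET vacuum_cost_limit = 1000;     -- accumulated cost that triggers a sleep
  VACUUM ANALYZE some_large_table;  -- hypothetical table name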

I have noticed that others (Alvaro, Joshua) suggest setting
vacuum_cost_delay as low as 10 or 20 ms; however, in my situation
I'd like to lower the performance impact on application queries,
and will probably choose 150/1000, where only a +40% slowdown of
the query is seen. I'm curious whether anyone else has followed
the same path, or whether there is any outstanding flaw I've
missed here. I say outstanding because, of course, any local
decision may differ in the hope of favouring a different
database/application behaviour.


Other than that, it's the results relative to the design principle
of Cost-Based Vacuum Delay itself that I find a little surprising.
Of course, I'm sure it has been thought through a lot, and my
observations are probably naive, but I'm going to throw out my
ideas anyway; who knows.

I'd think it would be possible to lower the impact of vacuuming on
other queries even further, while keeping the vacuuming completion
time overhead small, by dynamically adapting the delays to database
activity rather than using fixed costs and delays. For example,
before and after each vacuum sleep delay, pg could:

- check the number of currently running queries
  (pg_stat_activity), and continue sleeping if it is above a
  configured threshold; by following this path, databases with
  peak activity could use a threshold of 1 and have zero
  resource consumption for vacuuming during peaks, while still
  having nearly no completion time overhead for vacuuming outside
  of peaks (since the check is also performed before the sleep
  delay, which would be skipped if no queries are running); if we
  can afford a luxury implementation, we could always have a
  maximum sleep time configuration, which would allow vacuuming
  to proceed a little bit even when there's no timeframe with low
  enough database activity (see the sketch after this list)

- alternatively, pg could make use of some longer-term statistics
  (load average, IO statistics) to dynamically pause the
  vacuuming - this, I guess, depends on the host OS and is probably
  more difficult to get working correctly with multiple disks
  and/or processes running - however, if you want high
  performance from PostgreSQL, you probably won't host other
  IO-intensive applications on the same disk(s)
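
For the first idea, here is a minimal sketch of the kind of check an
external wrapper script could perform before launching (or resuming) a
vacuum, assuming 8.2's pg_stat_activity layout; the threshold itself
would be configured in the script:

  -- count backends currently running a query (idle backends show '<IDLE>' in 8.2)
  SELECT count(*) AS active_queries
  FROM pg_stat_activity
  WHERE current_query NOT LIKE '<IDLE>%'
    AND procpid <> pg_backend_pid();
  -- the script would sleep and re-check while active_queries stays
  -- above its configured threshold, and only then (re)start the VACUUM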


While I'm at it, a different Cost-Based Vacuum Delay issue:
VACUUM FULL also follows the Cost-Based Vacuum Delay tunings.
While that makes total sense when you want to perform a query on
another table, it becomes a problem when your query is waiting
for the exclusive lock on the vacuumed table: the vacuuming
proceeds slowly because of the Cost-Based Vacuum Delay, while the
application is blocked because its queries are just waiting.

I'm wondering whether it would be possible to dynamically ignore
(or lower, if that makes more sense?) the Cost-Based Vacuum Delay
during VACUUM FULL, if a configurable number of queries are
waiting for the lock?
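
For what it's worth, a minimal sketch of how the "queries waiting for
the lock" condition could be detected today from a monitoring session
(the table name is hypothetical):

  -- backends waiting on not-yet-granted locks on the table being VACUUM FULLed
  SELECT count(*) AS waiting_queries
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.procpid = l.pid
  WHERE NOT l.granted
    AND l.relation = 'some_table'::regclass;  -- hypothetical table name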

(please save yourself from answering "you should never run VACUUM
FULL if you're vacuuming enough" - as long as VACUUM FULL is
available in PostgreSQL, there's no reason not to make it as
practically usable as possible, albeit with low dev priority)


Ref: 
[1] inspired by http://developer.postgresql.org/~wieck/vacuum_cost/

-- 
Guillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company
Av. de la Gare 10, 1003 Lausanne, Switzerland



Re: [PERFORM] Cost-Based Vacuum Delay tuning

2007-12-07 Thread Erik Jones


On Dec 7, 2007, at 4:50 AM, Guillaume Cottenceau wrote:


Hi,

I'm currently trying to tune the Cost-Based Vacuum Delay in a
8.2.5 server. The aim is to reduce as much as possible the
performance impact of vacuums on application queries, with the
background idea of running autovacuum as much as possible[1].

My test involves vacuuming a large table, and measuring the
completion time, as the vacuuming proceeds, of a rather long
running application query (involving a table different from the
one being vacuumed) which cannot fit entirely in buffers (and the
completion time of the vacuum, because it needs not be too slow,
of course).

I ran my tests with a few combinations of
vacuum_cost_delay/vacuum_cost_limit, while keeping the other
parameters set to the default from the 8.2.5 tarball:

vacuum_cost_page_hit = 1
vacuum_cost_page_miss = 10
vacuum_cost_page_dirty = 20

The completion time of the query is about 16 seconds in
isolation. With a vacuuming proceeding, here are the results:

  vacuum_cost_delay/vacuum_cost_limit  (deactivated)  20/200  40/200  100/1000  150/1000  200/1000  300/1000

  VACUUM ANALYZE time                       54 s       112 s   188 s     109 s     152 s     190 s     274 s
  SELECT time                               50 s        28 s    26 s      24 s      22 s      20 s      19 s


While you do mention that the table you're running your select on is
too big to fit in shared_buffers, the drop in time between the first
run and the rest most likely still reflects the fact that, when
running those tests successively, a good portion of the table will
already be in shared_buffers as well as in the filesystem cache;
i.e., the runs after the first will hardly have to hit the disk at
all.



I have noticed that others (Alvaro, Joshua) suggest to set
vacuum_cost_delay as low as 10 or 20 ms, however in my situation
I'd like to lower the performance impact in application queries
and will probably choose 150/1000 where only a +40% is seen -
I'm curious if anyone else has followed the same path, or is
there any outstanding flaw I've missed here? I'm talking
outstanding, as of course any local decision may be different in
the hope of favouring a different database/application behaviour.


Other than that, it's the results obtained with the design
principle of Cost-Based Vacuum Delay, which I find a little
surprising. Of course, I think it has been thought through a lot,
and my observations are probably naive, but I'm going to throw my
ideas anyway, who knows.

I'd think that it would be possible to lower yet again the impact
of vacuuming on other queries, while keeping a vacuuming time
with little overhead, if dynamically changing the delays related
to database activity, rather than using fixed costs and delays.
For example, before and after each vacuum sleep delay is
completed, pg could:

- check the amount of currently running queries
  (pg_stat_activity), and continue sleeping if it is above a
  configured threshold; by following this path, databases with
  peak activities could use a threshold of 1 and have zero
  resource consumption for vacuuming during peaks, still having
  nearly no time completion overhead for vacuuming out of peaks
  (since the check is performed also before the sleep delay,
  which would be deactivated if no queries are running); if we
  can afford a luxury implementation, we could always have a
  maximum sleep time configuration, which would allow vacuuming
  to proceed a little bit even when there's no timeframe with low
  enough database activity

- alternatively, pg could make use of some longer term statistics
  (load average, IO statistics) to dynamically pause the
  vacuuming - this I guess is related to the host OS and probably
  more difficult to have working correctly with multiple disks
  and/or processes running - however, if you want high
  performance from PostgreSQL, you probably won't host other IO
  applications on the same disk(s)


These ideas have been discussed a lot.  Look in the archives from the
beginning of this year.  I think the general consensus was that it
would be good to have multiple autovacuum workers that could be tuned
for different times or workloads.  I know Alvaro was going to work
on something along those lines but I'm not sure what's made it into
8.3 or what's still definitely planned for the future.



While I'm at it, a different Cost-Based Vacuum Delay issue:
VACUUM FULL also follows the Cost-Based Vacuum Delay tunings.
While it makes total sense when you want to perform a query on
another table, it becomes a problem when your query is waiting
for the exclusive lock on the vacuumed table. Potentially, you
will have the vacuuming proceeding slowly because of the
Cost-Based Vacuum Delay, and a blocked application because the
application queries are just waiting.

I'm wondering if it would not be possible to dynamically ignore
(or lower, if it makes more sense?) the Cost-Based 

Re: [PERFORM] Cost-Based Vacuum Delay tuning

2007-12-07 Thread Erik Jones


On Dec 7, 2007, at 10:44 AM, Guillaume Cottenceau wrote:


Erik Jones erik 'at' myemma.com writes:


  vacuum_cost_delay/vacuum_cost_limit  (deactivated)  20/200
40/200   100/1000   150/1000   200/1000   300/1000

VACUUM ANALYZE time54 s112 s188
s109 s   152 s  190 s  274 s
SELECT time50 s 28 s 26
s 24 s22 s   20 s   19 s


While you do mention that the table you're running your select on is
too big to fit in the shared_buffers, the drop in time between the
first run and the rest most likely still reflects the fact that when


These figures don't show a difference between first run and
subsequent runs. For each parameter tuning, a couple of runs are
fired after database restart, and once the value is approximately
constant, it's picked and put in this table. The deactivated
shows the (stable, from subsequent runs) figure when vacuum delay
is disabled (vacuum_cost_delay parameter quoted), not the first
run, if that's where the confusion came from.


It was.


Is it on pgsql-hackers? I haven't found much stuff in
pgsql-performance while looking for vacuum_cost_delay tuning.


would be good have multiple autovacuum workers that could be tuned
for different times or workloads.  I know Alvarro was going to work


Sounds interesting.


Run the initial archive search against pgsql-general over the last  
year for a thread called 'Autovacuum Improvements'



I'm wondering if it would not be possible to dynamically ignore
(or lower, if it makes more sense?) the Cost-Based Vacuum Delay
during vacuum full, if a configurable amount of queries are
waiting for the lock?

(please save yourself from answering you should never run VACUUM
FULL if you're vacuuming enough - as long as VACUUM FULL is
available in PostgreSQL, there's no reason to not make it as
practically usable as possible, albeit with low dev priority)


Ok, I won't say what you said not to say.  But, I will say that I
don't agree with your conjecture that VACUUM FULL should be made
more lightweight; it's like using dynamite to knock a hole in a wall
for a window.


Thanks for opening a new kind of trol^Hargument against VACUUM
FULL, that one's more fresh (at least to me, who doesn't follow
the list too close anyway).



Just for the record, I inherited a poorly (actually, not would
be more appropriate) tuned database, containing more than 90% of
dead tuples on large tables, and I witnessed quite some
performance improvement while I could fix that.


If you really want the VACUUM FULL effect without having to deal with  
vacuum_cost_delay, use CLUSTER.  It also re-writes the table and,  
AFAIK, is not subject to any of the vacuum related configuration  
parameters.  I'd argue that if you really need VACUUM FULL, you may  
as well use CLUSTER to get a good ordering of the re-written table.
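
For instance, a rough sketch, with hypothetical table and index names
(note that CLUSTER, like VACUUM FULL, holds an exclusive lock on the
table for the duration of the rewrite):

  CLUSTER some_index ON some_table;  -- 8.2 syntax; rewrites the table in index order
  ANALYZE some_table;                -- refresh statistics after the rewrite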


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com





Re: [PERFORM] Cost-Based Vacuum Delay tuning

2007-12-07 Thread Guillaume Cottenceau
Erik Jones erik 'at' myemma.com writes:

   vacuum_cost_delay/vacuum_cost_limit  (deactivated)  20/200  40/200  100/1000  150/1000  200/1000  300/1000

   VACUUM ANALYZE time                       54 s       112 s   188 s     109 s     152 s     190 s     274 s
   SELECT time                               50 s        28 s    26 s      24 s      22 s      20 s      19 s

 While you do mention that the table you're running your select on is
 too big to fit in the shared_buffers, the drop in time between the
 first run and the rest most likely still reflects the fact that when

These figures don't show a difference between first run and
subsequent runs. For each parameter tuning, a couple of runs are
fired after database restart, and once the value is approximately
constant, it's picked and put in this table. The "deactivated"
column shows the (stable, from subsequent runs) figure when vacuum
delay is disabled (vacuum_cost_delay parameter commented out), not
the first run, if that's where the confusion came from.

 running those tests successively a good portion of the table will
 already be in shared_buffers as well as being in the filesystem
 cache, i.e. very little of the runs after the first will have to hit

A dd sized to the total RAM is run between each test (not just
between each parameter tuning - between each *query test*), to
remove the OS disk cache effect. Of course, the PostgreSQL caching
effect cannot be removed (and maybe it shouldn't be, as after all
this caching is there to improve performance), but the query is
selected to generate a lot of disk activity even on each run
(that's why I said a query which cannot fit entirely in buffers).

 the disk much.

I have of course checked that the subsequent runs consist
essentially of disk activity, not CPU activity.

 - alternatively, pg could make use of some longer term statistics
   (load average, IO statistics) to dynamically pause the
   vacuuming - this I guess is related to the host OS and probably
   more difficult to have working correctly with multiple disks
   and/or processes running - however, if you want high
   performance from PostgreSQL, you probably won't host other IO
   applications on the same disk(s)

 These ideas have been discussed much.  Look in the archives to the
 beginning of this year.  I think the general consensus was that it

Is it on pgsql-hackers? I haven't found much stuff in
pgsql-performance while looking for vacuum_cost_delay tuning.

 would be good to have multiple autovacuum workers that could be tuned
 for different times or workloads.  I know Alvaro was going to work

Sounds interesting.

 I'm wondering if it would not be possible to dynamically ignore
 (or lower, if it makes more sense?) the Cost-Based Vacuum Delay
 during vacuum full, if a configurable amount of queries are
 waiting for the lock?

 (please save yourself from answering you should never run VACUUM
 FULL if you're vacuuming enough - as long as VACUUM FULL is
 available in PostgreSQL, there's no reason to not make it as
 practically usable as possible, albeit with low dev priority)

 Ok, I won't say what you said not to say.  But, I will say that I
 don't agree with your conjecture that VACUUM FULL should be made
 more lightweight; it's like using dynamite to knock a hole in a wall
 for a window.

Thanks for opening a new kind of trol^Hargument against VACUUM
FULL, that one's fresher (at least to me, who doesn't follow
the list too closely anyway).

Just for the record, I inherited a poorly (actually, "not" would
be more appropriate) tuned database, containing more than 90%
dead tuples in its large tables, and I witnessed quite a
performance improvement once I could fix that.

-- 
Guillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company
Av. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36



Re: [PERFORM] cost-based vacuum

2005-07-13 Thread Simon Riggs
On Tue, 2005-07-12 at 13:50 -0400, Ian Westmacott wrote:
 It appears not to matter whether it is one of the tables
 being written to that is ANALYZEd.  I can ANALYZE an old,
 quiescent table, or a system table and see this effect.

Can you confirm that this effect is still seen even when the ANALYZE
doesn't touch *any* of the tables being accessed?

 - this is a dual Xeon. 

Is that Xeon MP then?

 - Looking at oprofile reports for 10-minute runs of a
   database-wide VACUUM with vacuum_cost_delay=0 and 1000,
   shows the latter spending a lot of time in LWLockAcquire
   and LWLockRelease (20% each vs. 2%).

Is this associated with high context switching also?

Best Regards, Simon Riggs




Re: [PERFORM] cost-based vacuum

2005-07-13 Thread Ian Westmacott
On Wed, 2005-07-13 at 11:55, Simon Riggs wrote:
 On Tue, 2005-07-12 at 13:50 -0400, Ian Westmacott wrote:
  It appears not to matter whether it is one of the tables
  being written to that is ANALYZEd.  I can ANALYZE an old,
  quiescent table, or a system table and see this effect.
 
 Can you confirm that this effect is still seen even when the ANALYZE
 doesn't touch *any* of the tables being accessed?

Yes.

  - this is a dual Xeon. 
 
 Is that Xeon MP then?

Yes.

  - Looking at oprofile reports for 10-minute runs of a
database-wide VACUUM with vacuum_cost_delay=0 and 1000,
shows the latter spending a lot of time in LWLockAcquire
and LWLockRelease (20% each vs. 2%).
 
 Is this associated with high context switching also?

Yes, it appears that context switches increase up to 4-5x
during cost-based ANALYZE.

--Ian





Re: [PERFORM] cost-based vacuum

2005-07-13 Thread Tom Lane
Ian Westmacott [EMAIL PROTECTED] writes:
 On Wed, 2005-07-13 at 11:55, Simon Riggs wrote:
 On Tue, 2005-07-12 at 13:50 -0400, Ian Westmacott wrote:
 It appears not to matter whether it is one of the tables
 being written to that is ANALYZEd.  I can ANALYZE an old,
 quiescent table, or a system table and see this effect.
 
 Can you confirm that this effect is still seen even when the ANALYZE
 doesn't touch *any* of the tables being accessed?

 Yes.

This really isn't making any sense at all.  I took another look through
the vacuum_delay_point() calls, and I can see a couple that are
questionably placed:

* the one in count_nondeletable_pages() is done while we are holding
exclusive lock on the table; we might be better off not to delay there,
so as not to block non-VACUUM processes longer than we have to.

* the ones in hashbulkdelete and rtbulkdelete are done while holding
various forms of exclusive locks on the index (this was formerly true
of gistbulkdelete as well).  Again it might be better not to delay.

However, these certainly do not explain Ian's problem, because (a) these
only apply to VACUUM, not ANALYZE; (b) they would only lock the table
being VACUUMed, not other ones; (c) if these locks were to block the
reader or writer thread, it'd manifest as blocking on a semaphore, not
as a surge in LWLock thrashing.

 Is that Xeon MP then?

 Yes.

The LWLock activity is certainly suggestive of prior reports of
excessive buffer manager lock contention, but it makes *no* sense that
that would be higher with vacuum cost delay than without.  I'd have
expected the other way around.

I'd really like to see a test case for this...

regards, tom lane



Re: [PERFORM] cost-based vacuum

2005-07-13 Thread Simon Riggs
On Wed, 2005-07-13 at 14:58 -0400, Tom Lane wrote:
 Ian Westmacott [EMAIL PROTECTED] writes:
  On Wed, 2005-07-13 at 11:55, Simon Riggs wrote:
  On Tue, 2005-07-12 at 13:50 -0400, Ian Westmacott wrote:
  It appears not to matter whether it is one of the tables
  being written to that is ANALYZEd.  I can ANALYZE an old,
  quiescent table, or a system table and see this effect.
  
  Can you confirm that this effect is still seen even when the ANALYZE
  doesn't touch *any* of the tables being accessed?
 
  Yes.
 
 This really isn't making any sense at all. 

Agreed. I think all of this indicates that some weirdness (technical
term) is happening at a different level in the computing stack. I think
all of this points fairly strongly to it *not* being a PostgreSQL
algorithm problem, i.e. if the code was executed by an idealised Knuth-
like CPU then we would not get this problem. Plus, I have faith that if
it was a problem in that plane then you or another would have
uncovered it by now.

 However, these certainly do not explain Ian's problem, because (a) these
 only apply to VACUUM, not ANALYZE; (b) they would only lock the table
 being VACUUMed, not other ones; (c) if these locks were to block the
 reader or writer thread, it'd manifest as blocking on a semaphore, not
 as a surge in LWLock thrashing.

I've seen enough circumstantial evidence to connect the time spent
inside LWLockAcquire/Release to the semaphore ops within them, rather
than to the other aspects of the code.

Months ago we discussed the problem of false sharing on closely packed
arrays of shared variables because of the large cache line size of the
Xeon MP. When we last touched on that, I focused on the idea that the
LWLock array was too tightly packed for the predefined locks. What we
didn't discuss (because I was too focused on the other array) is that
the PGPROC shared array is equally tightly packed, which could give
problems on the semaphores in LWLock.

Intel says fairly clearly that this would be an issue. 

  Is that Xeon MP then?
 
  Yes.
 
 The LWLock activity is certainly suggestive of prior reports of
 excessive buffer manager lock contention, but it makes *no* sense that
 that would be higher with vacuum cost delay than without.  I'd have
 expected the other way around.
 
 I'd really like to see a test case for this...

My feeling is that a micro-architecture test would be more likely to
reveal some interesting information.

Best Regards, Simon Riggs




Re: [PERFORM] cost-based vacuum

2005-07-13 Thread Ian Westmacott
I can at least report that the problem does not seem to
occur with Postgres 8.0.1 running on a dual Opteron.

--Ian


On Wed, 2005-07-13 at 16:39, Simon Riggs wrote:
 On Wed, 2005-07-13 at 14:58 -0400, Tom Lane wrote:
  Ian Westmacott [EMAIL PROTECTED] writes:
   On Wed, 2005-07-13 at 11:55, Simon Riggs wrote:
   On Tue, 2005-07-12 at 13:50 -0400, Ian Westmacott wrote:
   It appears not to matter whether it is one of the tables
   being written to that is ANALYZEd.  I can ANALYZE an old,
   quiescent table, or a system table and see this effect.
   
   Can you confirm that this effect is still seen even when the ANALYZE
   doesn't touch *any* of the tables being accessed?
  
   Yes.
  
  This really isn't making any sense at all. 
 
 Agreed. I think all of this indicates that some weirdness (technical
 term) is happening at a different level in the computing stack. I think
 all of this points fairly strongly to it *not* being a PostgreSQL
 algorithm problem, i.e. if the code was executed by an idealised Knuth-
 like CPU then we would not get this problem. Plus, I have faith that if
 it was a problem in that plane then you or another would have
 uncovered it by now.
 
  However, these certainly do not explain Ian's problem, because (a) these
  only apply to VACUUM, not ANALYZE; (b) they would only lock the table
  being VACUUMed, not other ones; (c) if these locks were to block the
  reader or writer thread, it'd manifest as blocking on a semaphore, not
  as a surge in LWLock thrashing.
 
 I've seen enough circumstantial evidence to connect the time spent
 inside LWLockAcquire/Release as being connected to the Semaphore ops
 within them, not the other aspects of the code.
 
 Months ago we discussed the problem of false sharing on closely packed
 arrays of shared variables because of the large cache line size of the
 Xeon MP. When last we touched on that thought, I focused on the thought
 that the LWLock array was too tightly packed for the predefined locks.
 What we didn't discuss (because I was too focused on the other array)
 was the PGPROC shared array is equally tightly packed, which could give
 problems on the semaphores in LWLock.
 
 Intel says fairly clearly that this would be an issue. 
 
   Is that Xeon MP then?
  
   Yes.
  
  The LWLock activity is certainly suggestive of prior reports of
  excessive buffer manager lock contention, but it makes *no* sense that
  that would be higher with vacuum cost delay than without.  I'd have
  expected the other way around.
  
  I'd really like to see a test case for this...
 
 My feeling is that a micro-architecture test would be more likely to
 reveal some interesting information.
 
 Best Regards, Simon Riggs
 
 




Re: [PERFORM] cost-based vacuum

2005-07-12 Thread Simon Riggs
On Mon, 2005-07-11 at 15:51 +0100, Simon Riggs wrote:
 On Mon, 2005-07-11 at 09:07 -0400, Ian Westmacott wrote:
  On Mon, 2005-07-11 at 07:31, Simon Riggs wrote:
   The ANALYZE commands hold read locks on the tables you wish to write to.
   If you slow them down, you merely slow down your write transactions
   also, and then the read transactions that wait behind them. Every time
   the ANALYZE sleeps it wakes up the other transactions, which then
   realise they can't move because of locks and then wake up the ANALYZEs
   for another shot. The end result is that you introduce more context-
   switching, without any chance of doing more useful work while the
   ANALYZEs sleep.
  
  Let me make sure I understand.  ANALYZE acquires a read
  lock on the table, that it holds until the operation is
  complete (including any sleeps).  That read lock blocks
  the extension of that table via COPY.  Is that right?
  
  According to the 8.0 docs, ANALYZE acquires an ACCESS SHARE
  lock on the table, and that conflicts only with ACCESS
  EXCLUSIVE.  That's why I didn't think I had a lock issue,
  since I think COPY only needs ROW EXCLUSIVE.  Or perhaps
  the transaction needs something more?
 
 The docs are correct, but don't show catalog and buffer locks.
 
 ...but on further reading of the code there are no catalog locks or
 buffer locks held across the sleep points. So, my explanation doesn't
 work as an explanation for the sleep/no sleep difference you have
 observed.

I've been through all the code now and can't find any resource that is
held across a delay point. Nor any reason to believe that the vacuum
cost accounting would slow anything down.

Since vacuum_cost_delay is a userset parameter, you should be able to
SET this solely for the analyze_thread. That way we will know with more
certainty that it is the analyze_thread that is interfering.

What is your default_statistics_target?
Do you have other stats targets set?

How long does ANALYZE take to run, with/without the vacuum_cost_delay?

Thanks,

Best Regards, Simon Riggs





Re: [PERFORM] cost-based vacuum

2005-07-12 Thread Ian Westmacott
On Tue, 2005-07-12 at 03:45, Simon Riggs wrote:
 Since vacuum_cost_delay is a userset parameter, you should be able to
 SET this solely for the analyze_thread. That way we will know with more
 certainty that it is the analyze_thread that is interfering.

That is what I have been doing.  In fact, I have eliminated
the reader_thread and analyze_thread.  I just have the
writer_thread running, and a psql connection with which I
perform ANALYZE, for various vacuum_cost_* parameters.
(I'm trying to extract a reproducible experiment)

It appears not to matter whether it is one of the tables
being written to that is ANALYZEd.  I can ANALYZE an old,
quiescent table, or a system table and see this effect.

 What is your default_statistics_target?

All other configs are default; default_statistics_target=10.

 Do you have other stats targets set?

No.  The only thing slightly out of the ordinary with the
tables is that they are created WITHOUT OIDS.  Some indexes,
but no primary keys.  All columns NOT NULL.

 How long does ANALYZE take to run, with/without the vacuum_cost_delay?

Well, on one table with about 50K rows, it takes about 1/4s
to ANALYZE with vacuum_cost_delay=0, and about 15s with
vacuum_cost_delay=1000.

Other things of note:

- VACUUM has the same effect.  If I VACUUM or ANALYZE the
  whole DB, the CPU spikes reset between tables.
- vmstat reports that blocks written drop as the CPU rises.
  Don't know if it is cause or effect yet.  On a small test
  system, I'm writing about 1.5MB/s.  After about 20s
  of cost-based ANALYZE, this drops under 0.5MB/s.
- this is a dual Xeon.  I have tried both with and without
  hyperthreading.  I haven't tried to reproduce it
  elsewhere yet, but will.
- Looking at oprofile reports for 10-minute runs of a
  database-wide VACUUM with vacuum_cost_delay=0 and 1000,
  shows the latter spending a lot of time in LWLockAcquire
  and LWLockRelease (20% each vs. 2%).


Thanks,

--Ian





Re: [PERFORM] cost-based vacuum

2005-07-11 Thread Simon Riggs
On Fri, 2005-07-08 at 12:25 -0400, Ian Westmacott wrote:
 I am beginning to look at Postgres 8, and am particularly
 interested in cost-based vacuum/analyze.  I'm hoping someone
 can shed some light on the behavior I am seeing.
 
 Suppose there are three threads:
 
 writer_thread
   every 1/15 second do
 BEGIN TRANSACTION
   COPY table1 FROM stdin
   ...
   COPY tableN FROM stdin
   perform several UPDATEs, DELETEs and INSERTs
 COMMIT
 
 reader_thread
   every 1/15 second do
 BEGIN TRANSACTION
   SELECT FROM table1 ...
   ...
   SELECT FROM tableN ...
 COMMIT
 
 analyze_thread
   every 5 minutes do
 ANALYZE table1
 ...
 ANALYZE tableN
 
 
 Now, Postgres 8.0.3 out-of-the-box (all default configs) on a
 particular piece of hardware runs the Postgres connection for
 writer_thread at about 15% CPU (meaningless, I know, but for
 comparison) and runs the Postgres connection for reader_thread
 at about 30% CPU.  Latency for reader_thread seeing updates
 from writer_thread is well under 1/15s.  Impact of
 analyze_thread is negligible.
 
 If I make the single configuration change of setting
 vacuum_cost_delay=1000, each iteration in analyze_thread takes
 much longer, of course.  But what I also see is that the CPU
 usage of the connections for writer_thread and reader_thread
 spike up to well over 80% each (this is a dualie) and latency
 drops to 8-10s, during the ANALYZEs.
 
 I don't understand why this would be.  I don't think there
 are any lock issues, and I don't see any obvious I/O issues.
 Am I missing something?  Is there any way to get some
 insight into what those connections are doing?

The ANALYZE commands hold read locks on the tables you wish to write to.
If you slow them down, you merely slow down your write transactions
also, and then the read transactions that wait behind them. Every time
the ANALYZE sleeps it wakes up the other transactions, which then
realise they can't move because of locks and then wake up the ANALYZEs
for another shot. The end result is that you introduce more context-
switching, without any chance of doing more useful work while the
ANALYZEs sleep.

Don't use the vacuum_cost_delay in this situation. You might try setting
it to 0 for the analyze_thread only.
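
For example, a minimal sketch of what the analyze_thread's session could
do (vacuum_cost_delay is a per-session setting; table names are taken
from the test description, the rest of the schema is assumed):

  -- in the analyze_thread's connection only; other sessions keep the
  -- server-wide vacuum_cost_delay setting
  SET vacuum_cost_delay = 0;   -- no cost-based sleeping for this session
  ANALYZE table1;
  -- ... through tableN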

Sounds like you could speed things up by splitting everything into two
sets of tables, with writer_thread1 and writer_thread2 etc. That way
your 2 CPUs would be able to independently get through more work
without locking each other out.

Best Regards, Simon Riggs




Re: [PERFORM] cost-based vacuum

2005-07-11 Thread Simon Riggs
On Mon, 2005-07-11 at 09:07 -0400, Ian Westmacott wrote:
 On Mon, 2005-07-11 at 07:31, Simon Riggs wrote:
  The ANALYZE commands hold read locks on the tables you wish to write to.
  If you slow them down, you merely slow down your write transactions
  also, and then the read transactions that wait behind them. Every time
  the ANALYZE sleeps it wakes up the other transactions, which then
  realise they can't move because of locks and then wake up the ANALYZEs
  for another shot. The end result is that you introduce more context-
  switching, without any chance of doing more useful work while the
  ANALYZEs sleep.
 
 Let me make sure I understand.  ANALYZE acquires a read
 lock on the table, that it holds until the operation is
 complete (including any sleeps).  That read lock blocks
 the extension of that table via COPY.  Is that right?
 
 According to the 8.0 docs, ANALYZE acquires an ACCESS SHARE
 lock on the table, and that conflicts only with ACCESS
 EXCLUSIVE.  That's why I didn't think I had a lock issue,
 since I think COPY only needs ROW EXCLUSIVE.  Or perhaps
 the transaction needs something more?

The docs are correct, but don't show catalog and buffer locks.

...but on further reading of the code there are no catalog locks or
buffer locks held across the sleep points. So, my explanation doesn't
work as an explanation for the sleep/no sleep difference you have
observed.

Best Regards, Simon Riggs







[PERFORM] cost-based vacuum

2005-07-08 Thread Ian Westmacott
I am beginning to look at Postgres 8, and am particularly
interested in cost-based vacuum/analyze.  I'm hoping someone
can shed some light on the behavior I am seeing.

Suppose there are three threads:

writer_thread
  every 1/15 second do
BEGIN TRANSACTION
  COPY table1 FROM stdin
  ...
  COPY tableN FROM stdin
  perform several UPDATEs, DELETEs and INSERTs
COMMIT

reader_thread
  every 1/15 second do
BEGIN TRANSACTION
  SELECT FROM table1 ...
  ...
  SELECT FROM tableN ...
COMMIT

analyze_thread
  every 5 minutes do
ANALYZE table1
...
ANALYZE tableN


Now, Postgres 8.0.3 out-of-the-box (all default configs) on a
particular piece of hardware runs the Postgres connection for
writer_thread at about 15% CPU (meaningless, I know, but for
comparison) and runs the Postgres connection for reader_thread
at about 30% CPU.  Latency for reader_thread seeing updates
from writer_thread is well under 1/15s.  Impact of
analyze_thread is negligible.

If I make the single configuration change of setting
vacuum_cost_delay=1000, each iteration in analyze_thread takes
much longer, of course.  But what I also see is that the CPU
usage of the connections for writer_thread and reader_thread
spikes up to well over 80% each (this is a dualie) and latency
climbs to 8-10 s during the ANALYZEs.

I don't understand why this would be.  I don't think there
are any lock issues, and I don't see any obvious I/O issues.
Am I missing something?  Is there any way to get some
insight into what those connections are doing?

Thanks,

--Ian





Re: [PERFORM] cost-based vacuum

2005-07-08 Thread Tom Lane
Ian Westmacott [EMAIL PROTECTED] writes:
 If I make the single configuration change of setting
 vacuum_cost_delay=1000, each iteration in analyze_thread takes
 much longer, of course.  But what I also see is that the CPU
 usage of the connections for writer_thread and reader_thread
 spike up to well over 80% each (this is a dualie) and latency
 drops to 8-10s, during the ANALYZEs.

[ scratches head... ]  That doesn't make any sense at all.

 I don't understand why this would be.  I don't think there
 are any lock issues, and I don't see any obvious I/O issues.
 Am I missing something?  Is there any way to get some
 insight into what those connections are doing?

Profiling maybe?  Can you put together a self-contained test case
that replicates this behavior, so other people could look?

regards, tom lane
