Re: [HACKERS] Autovacuum versus rolled-back transactions

2007-06-01 Thread Matthew T. O'Connor

Tom Lane wrote:

ITAGAKI Takahiro [EMAIL PROTECTED] writes:

Our documentation says
| analyze threshold = analyze base threshold
|   + analyze scale factor * number of tuples
| is compared to the total number of tuples inserted, updated, or deleted
| since the last ANALYZE. 



but deleted tuples are not considered in the total number, because the delta
of {n_live_tuples + n_dead_tuples} is not changed by DELETE. We subtract the
number of DELETEs from n_live_tuples and add it to n_dead_tuples.


Yeah, I was concerned about that when I was making the patch, but didn't
see any simple fix.  A large number of DELETEs (without any inserts or
updates) would trigger a VACUUM but not an ANALYZE, which in the worst
case would be bad because the stats could have shifted.

We could fix this at the cost of carrying another per-table counter in
the stats info, but I'm not sure it's worth it.


I believe that whenever autovacuum performs a VACUUM it actually
performs a VACUUM ANALYZE; at least the old contrib version did, and I
think Alvaro copied that.




Re: [HACKERS] Autovacuum versus rolled-back transactions

2007-06-01 Thread Alvaro Herrera
Matthew T. O'Connor wrote:
 Tom Lane wrote:

 Yeah, I was concerned about that when I was making the patch, but didn't
 see any simple fix.  A large number of DELETEs (without any inserts or
 updates) would trigger a VACUUM but not an ANALYZE, which in the worst
 case would be bad because the stats could have shifted.
 
 We could fix this at the cost of carrying another per-table counter in
 the stats info, but I'm not sure it's worth it.
 
 I believe that whenever autovacuum performs a VACUUM it actually
 performs a VACUUM ANALYZE; at least the old contrib version did, and I
 think Alvaro copied that.

Huh, no, it doesn't --- they are considered separately.

-- 
Alvaro Herrera   Valdivia, Chile   ICBM: S 39º 49' 18.1, W 73º 13' 56.4
Rebellion is the original virtue of man (Arthur Schopenhauer)



Re: [HACKERS] Autovacuum versus rolled-back transactions

2007-05-31 Thread Alvaro Herrera
Tom Lane wrote:

 It may boil down to whether we would like the identity
   n_live_tup = n_tup_ins - n_tup_del
 to continue to hold, or the similar one for n_dead_tup.  The problem
 basically is that pgstats is computing n_live_tup and n_dead_tup
 using those identities rather than by tracking what really happens.

Thanks for fixing this.  For the record, I don't think I ever actually
*considered* the effect of rolled back transactions in the tuple counts;
at the time I wrote the code, I was just mirroring what the old autovac
code did, and I didn't stop to think whether the assumptions were
actually correct.

I think the committed fix was the most appropriate -- changing the
semantics of n_ins_tup etc would defeat the original purpose they were
written for, I think.


Regarding the idea of counting dead tuples left behind by vacuum to
update pgstats at the end, I think the idea of counting them
individually is good, but it doesn't account for dead tuples created in
areas that were scanned earlier.  So I think that Takahiro-san's idea of
using the value accumulated in pgstats is better.

If we apply Heikki's idea of advancing OldestXmin, I think what we
should do is grab the value from pgstats when vacuum starts, and each
time we're going to advance OldestXmin, grab the value from pgstats
again; accumulate the differences from the various pgstat grabs.  At the
end we send the accumulated differences as the new dead tuple count.
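
A rough sketch of the control flow I have in mind -- invented names;
pgstat_fetch_dead_tuples() below just stands in for whatever stats lookup
we would actually use, and simulates concurrent activity so the sketch
runs standalone:

    #include <stdio.h>

    /* Stand-in for reading the table's dead-tuple count from pgstats;
     * here it just pretends concurrent transactions keep creating
     * dead rows between grabs. */
    static long
    pgstat_fetch_dead_tuples(void)
    {
        static long simulated = 0;
        simulated += 100;           /* pretend 100 new dead tuples per grab */
        return simulated;
    }

    int main(void)
    {
        long accumulated = 0;
        long prev = pgstat_fetch_dead_tuples();    /* grab at VACUUM start */

        for (int i = 0; i < 3; i++)
        {
            /* ... vacuum part of the heap; now about to advance OldestXmin ... */
            long cur = pgstat_fetch_dead_tuples();
            accumulated += cur - prev;             /* delta since the last grab */
            prev = cur;
        }

        /* At the end, report the accumulated deltas as the new count. */
        printf("new dead tuple count = %ld\n", accumulated);
        return 0;
    }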

-- 
Alvaro Herrera                      http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] Autovacuum versus rolled-back transactions

2007-05-31 Thread Tom Lane
Alvaro Herrera [EMAIL PROTECTED] writes:
 If we apply Heikki's idea of advancing OldestXmin, I think what we
 should do is grab the value from pgstats when vacuum starts, and each
 time we're going to advance OldestXmin, grab the value from pgstats
 again; accumulate the differences from the various pgstat grabs.  At the
 end we send the accumulated differences as the new dead tuple count.

Considering that each of those values will be up to half a second old,
I can hardly think that this will accomplish anything except to
introduce a great deal of noise ...

regards, tom lane



Re: [HACKERS] Autovacuum versus rolled-back transactions

2007-05-31 Thread ITAGAKI Takahiro

Alvaro Herrera [EMAIL PROTECTED] wrote:

 Tom Lane wrote:
 
  It may boil down to whether we would like the identity
  n_live_tup = n_tup_ins - n_tup_del
  to continue to hold, or the similar one for n_dead_tup.  The problem
  basically is that pgstats is computing n_live_tup and n_dead_tup
  using those identities rather than by tracking what really happens.

On a related note, there is a discrepancy between the documentation and
the implementation of the auto-analyze threshold in HEAD.
(Only HEAD; it is fine in 8.2 and before.)

Our documentation says
| analyze threshold = analyze base threshold
|   + analyze scale factor * number of tuples
| is compared to the total number of tuples inserted, updated, or deleted
| since the last ANALYZE. 
http://momjian.us/main/writings/pgsql/sgml/routine-vacuuming.html#AUTOVACUUM

but deleted tuples are not considered in the total number, because the delta
of {n_live_tuples + n_dead_tuples} is not changed by DELETE. We subtract the
number of DELETEs from n_live_tuples and add it to n_dead_tuples.

| pgstat.c
|   t_new_live_tuples += tuples_inserted - tuples_deleted;
|   t_new_dead_tuples += tuples_deleted;
| autovacuum.c
|   anltuples = n_live_tuples + n_dead_tuples - last_anl_tuples;
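
To spell out the arithmetic, here is a minimal self-contained C sketch
(illustrative only; the variable names mirror the quoted code, but this
is not actual PostgreSQL source):

    #include <stdio.h>

    int main(void)
    {
        /* State right after the last ANALYZE of a 10000-tuple table. */
        long n_live_tuples = 10000;
        long n_dead_tuples = 0;
        long last_anl_tuples = 10000;

        /* A DELETE of 500 tuples, per the quoted pgstat.c logic
         * (tuples_inserted = 0). */
        long tuples_deleted = 500;
        n_live_tuples += 0 - tuples_deleted;
        n_dead_tuples += tuples_deleted;

        /* The quoted autovacuum.c test: the DELETE contributes nothing. */
        long anltuples = n_live_tuples + n_dead_tuples - last_anl_tuples;
        printf("anltuples = %ld\n", anltuples);    /* prints 0 */
        return 0;
    }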

There are probably no delete-only databases in the real world, so this is
not a serious problem, but we had better fix the documentation if this
behavior is intentional.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center





Re: [HACKERS] Autovacuum versus rolled-back transactions

2007-05-31 Thread Alvaro Herrera
Tom Lane wrote:
 Alvaro Herrera [EMAIL PROTECTED] writes:
  If we apply Heikki's idea of advancing OldestXmin, I think what we
  should do is grab the value from pgstats when vacuum starts, and each
  time we're going to advance OldestXmin, grab the value from pgstats
  again; accumulate the differences from the various pgstat grabs.  At the
  end we send the accumulated differences as the new dead tuple count.
 
 Considering that each of those values will be up to half a second old,
 I can hardly think that this will accomplish anything except to
 introduce a great deal of noise ...

Normally, yes, but the values can be older if the vacuum_cost_delay is
large.

-- 
Alvaro Herrera                      http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [HACKERS] Autovacuum versus rolled-back transactions

2007-05-31 Thread Tom Lane
ITAGAKI Takahiro [EMAIL PROTECTED] writes:
 Our documentation says
 | analyze threshold = analyze base threshold
 |   + analyze scale factor * number of tuples
 | is compared to the total number of tuples inserted, updated, or deleted
 | since the last ANALYZE. 

 but deleted tuples are not considered in the total number, because the delta
 of {n_live_tuples + n_dead_tuples} is not changed by DELETE. We subtract the
 number of DELETEs from n_live_tuples and add it to n_dead_tuples.

Yeah, I was concerned about that when I was making the patch, but didn't
see any simple fix.  A large number of DELETEs (without any inserts or
updates) would trigger a VACUUM but not an ANALYZE, which in the worst
case would be bad because the stats could have shifted.

We could fix this at the cost of carrying another per-table counter in
the stats info, but I'm not sure it's worth it.
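
For concreteness, a hedged sketch of what that extra counter might look
like -- the struct and field names below are invented for illustration,
not actual pgstats fields:

    #include <stdio.h>

    /* Hypothetical per-table stats entry.  n_changed_tuples is the extra
     * counter: it would absorb inserts, updates, AND deletes, so the
     * analyze test could compare it against the threshold directly. */
    typedef struct HypotheticalTabEntry
    {
        long n_live_tuples;
        long n_dead_tuples;
        long n_changed_tuples;      /* ins + upd + del since last ANALYZE */
    } HypotheticalTabEntry;

    static void
    count_deletes(HypotheticalTabEntry *tab, long n)
    {
        tab->n_live_tuples -= n;
        tab->n_dead_tuples += n;
        tab->n_changed_tuples += n; /* unlike HEAD, DELETEs move this counter */
    }

    int main(void)
    {
        HypotheticalTabEntry tab = {10000, 0, 0};
        count_deletes(&tab, 500);
        printf("analyze counter = %ld\n", tab.n_changed_tuples);  /* 500, not 0 */
        return 0;
    }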

regards, tom lane



Re: [HACKERS] Autovacuum versus rolled-back transactions

2007-05-31 Thread Tom Lane
Alvaro Herrera [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 Alvaro Herrera [EMAIL PROTECTED] writes:
 If we apply Heikki's idea of advancing OldestXmin, I think what we
 should do is grab the value from pgstats when vacuum starts, and each
 time we're going to advance OldestXmin, grab the value from pgstats

 Considering that each of those values will be up to half a second old,
 I can hardly think that this will accomplish anything except to
 introduce a great deal of noise ...

 Normally, yes, but the values can be older if the vacuum_cost_delay is
 large.

I'm not sure we're on the same page.  I meant that whatever you read
from pgstats is going to be stale by an uncertain amount of time.
Taking the deltas of such numbers over relatively short intervals
is going to be mighty noisy.

regards, tom lane



Re: [HACKERS] Autovacuum versus rolled-back transactions

2007-05-26 Thread Heikki Linnakangas

Tom Lane wrote:

I'm kind of leaning to the separate-tally method and abandoning the
assumption that the identities hold.  I'm not wedded to the idea
though.  Any thoughts?


That seems like the best approach to me. Like the scan/fetch counters, 
n_tup_ins and n_tup_del represent work done regardless of 
commit/rollback, but n_live_tup and n_dead_tup represent the current 
state of the table.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] Autovacuum versus rolled-back transactions

2007-05-26 Thread Matthew O'Connor

Tom Lane wrote:

This means that a table could easily be full of dead tuples from failed
transactions, and yet autovacuum won't do a thing because it doesn't
know there are any.  Perhaps this explains some of the reports we've
heard of tables bloating despite having autovac on.


I think this is only a problem for failed inserts, as failed updates will
be accounted for correctly by autovac and, as you said, failed deletes
really do nothing.  So is there a way for rollback to just add the
number of rolled-back inserts to the n_tup_del counter?  Then we would
be OK, no?



I think it's fairly obvious how n_live_tup and n_dead_tup ought to
change in response to a failed xact, but maybe not so obvious for the
other counters.  I suggest that the scan/fetch counters (seq_scan,
seq_tup_read, idx_scan, idx_tup_fetch) as well as all the block I/O
counters should increment the same for committed and failed xacts,
since they are meant to count work done regardless of whether the work
was in vain.  I am much less sure how we want n_tup_ins, n_tup_upd,
n_tup_del to act though.  Should they be advanced as normal by a
failed xact?  That's what the code is doing now, and if you think they
are counters for work done, it's not so unreasonable.


I think autovac only considers n_tup_(upd|ins|del), so while it might be
correct to fix those other counters, I don't know that they are must-fix
items.






[HACKERS] Autovacuum versus rolled-back transactions

2007-05-25 Thread Tom Lane
The pgstats subsystem does not correctly account for the effects of
failed transactions.  Note the live/dead tuple counts in this example:

regression=# create table foo (f1 int);
CREATE TABLE
regression=# insert into foo select x from generate_series(1,1000) x;
INSERT 0 1000
-- wait a second for stats to catch up
regression=# select * from pg_stat_all_tables where relname = 'foo';
 relid  | schemaname | relname | seq_scan | seq_tup_read | idx_scan | idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del | n_live_tup | n_dead_tup | last_vacuum | last_autovacuum | last_analyze | last_autoanalyze 
--------+------------+---------+----------+--------------+----------+---------------+-----------+-----------+-----------+------------+------------+-------------+-----------------+--------------+------------------
 496849 | public     | foo     |        0 |            0 |          |               |      1000 |         0 |         0 |       1000 |          0 |             |                 |              | 
(1 row)

regression=# begin;
BEGIN
regression=# insert into foo select x from generate_series(1,1000) x;
INSERT 0 1000
regression=# rollback;
ROLLBACK
-- wait a second for stats to catch up
regression=# select * from pg_stat_all_tables where relname = 'foo';
 relid  | schemaname | relname | seq_scan | seq_tup_read | idx_scan | idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del | n_live_tup | n_dead_tup | last_vacuum | last_autovacuum | last_analyze | last_autoanalyze 
--------+------------+---------+----------+--------------+----------+---------------+-----------+-----------+-----------+------------+------------+-------------+-----------------+--------------+------------------
 496849 | public     | foo     |        0 |            0 |          |               |      2000 |         0 |         0 |       2000 |          0 |             |                 |              | 
(1 row)

This means that a table could easily be full of dead tuples from failed
transactions, and yet autovacuum won't do a thing because it doesn't
know there are any.  Perhaps this explains some of the reports we've
heard of tables bloating despite having autovac on.

It seems to me this is a must fix if we expect people to rely on
autovacuum for real in 8.3.

I think it's fairly obvious how n_live_tup and n_dead_tup ought to
change in response to a failed xact, but maybe not so obvious for the
other counters.  I suggest that the scan/fetch counters (seq_scan,
seq_tup_read, idx_scan, idx_tup_fetch) as well as all the block I/O
counters should increment the same for committed and failed xacts,
since they are meant to count work done regardless of whether the work
was in vain.  I am much less sure how we want n_tup_ins, n_tup_upd,
n_tup_del to act though.  Should they be advanced as normal by a
failed xact?  That's what the code is doing now, and if you think they
are counters for work done, it's not so unreasonable.

It may boil down to whether we would like the identity
n_live_tup = n_tup_ins - n_tup_del
to continue to hold, or the similar one for n_dead_tup.  The problem
basically is that pgstats is computing n_live_tup and n_dead_tup
using those identities rather than by tracking what really happens.
I don't think we can have those identities if failed xacts update the
counts normally.  Is it worth having separate counters for the numbers
of failed inserts/updates?  (Failed deletes perhaps need not be counted,
since they change nothing.)  Or we could change the backends so that the
reported n_tup_ins/del/upd are made to still produce the right live/dead
tup counts according to the identities, but then those counts would not
reflect work done.  Another alternative is for transactions to tally
the number of live and dead tuples they create, with understanding of
rollbacks, and send those to the stats collector independently of the
action counters.
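
To make the separate-tally idea concrete, a minimal sketch (invented
names; updates are omitted for brevity, though they would need the same
treatment):

    #include <stdio.h>
    #include <stdbool.h>

    /* Per-transaction tally of the tuples it inserted and deleted. */
    typedef struct XactTally
    {
        long inserted;
        long deleted;
    } XactTally;

    /* At transaction end, convert the tally into live/dead deltas with
     * rollback awareness, independently of the action counters. */
    static void
    report_live_dead(const XactTally *t, bool committed,
                     long *n_live_tup, long *n_dead_tup)
    {
        if (committed)
        {
            *n_live_tup += t->inserted - t->deleted;
            *n_dead_tup += t->deleted;      /* deleted tuples linger as dead */
        }
        else
            *n_dead_tup += t->inserted;     /* aborted inserts are dead on
                                             * arrival; aborted deletes
                                             * change nothing */
    }

    int main(void)
    {
        long live = 1000, dead = 0;     /* the example table above */
        XactTally t = { 1000, 0 };      /* INSERT 1000 rows, then ROLLBACK */

        report_live_dead(&t, false, &live, &dead);
        printf("n_live_tup = %ld, n_dead_tup = %ld\n", live, dead); /* 1000 1000 */
        return 0;
    }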

I don't think I want to add separate failed-insert/update counters,
because that will bloat the stats reporting file, which is uncomfortably
large already when you have lots of tables.  The separate-tally method
would avoid that, at the price of more stats UDP traffic.

I'm kind of leaning to the separate-tally method and abandoning the
assumption that the identities hold.  I'm not wedded to the idea
though.  Any thoughts?

regards, tom lane
