Tom Lane wrote:
ITAGAKI Takahiro writes:
Our documentation says
| analyze threshold = analyze base threshold
| + analyze scale factor * number of tuples
| is compared to the total number of tuples inserted, updated, or deleted
| since the last ANALYZE.
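For concreteness, the documented trigger condition is simple arithmetic. A minimal sketch, assuming the stock defaults of 50 for the base threshold (autovacuum_analyze_threshold) and 0.1 for the scale factor (autovacuum_analyze_scale_factor):

```python
# Sketch of the documented autovacuum ANALYZE trigger condition.
# The defaults (base threshold 50, scale factor 0.1) are assumptions
# here, matching autovacuum_analyze_threshold and
# autovacuum_analyze_scale_factor; per-table overrides are possible.

def needs_analyze(n_tuples, changed_since_analyze,
                  base_threshold=50, scale_factor=0.1):
    """Return True if tuples inserted+updated+deleted since the last
    ANALYZE exceed the per-table analyze threshold."""
    threshold = base_threshold + scale_factor * n_tuples
    return changed_since_analyze > threshold

# A 10,000-tuple table needs 50 + 0.1 * 10000 = 1050 changed tuples:
print(needs_analyze(10000, 1000))   # False: below threshold
print(needs_analyze(10000, 1100))   # True: above threshold
```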
Matthew T. O'Connor wrote:
Tom Lane wrote:
Yeah, I was concerned about that when I was making the patch, but didn't
see any simple fix. A large number of DELETEs (without any inserts or
updates) would trigger a VACUUM but not an ANALYZE, which in the worst
case would be bad because the
Tom Lane wrote:
It may boil down to whether we would like the identity
n_live_tup = n_tup_ins - n_tup_del
to continue to hold, or the similar one for n_dead_tup. The problem
basically is that pgstats is computing n_live_tup and n_dead_tup
using those identities rather than by tracking
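The two accounting schemes being contrasted can be sketched as follows (the class and method names are mine, purely illustrative; pgstats itself is C code). Deriving n_live_tup from the identity goes wrong as soon as an inserting transaction aborts, whereas a separate tally that sees commit/abort outcomes stays correct:

```python
# Hypothetical illustration of identity-derived vs. separately tracked
# live/dead tuple counts; only the arithmetic matters here.

class TableStats:
    def __init__(self):
        self.n_tup_ins = 0   # work done: rows inserted (even if aborted)
        self.n_tup_del = 0   # work done: rows deleted
        self.n_live_tup = 0  # separately tracked live-row estimate
        self.n_dead_tup = 0  # separately tracked dead-row estimate

    def insert(self, n, committed):
        self.n_tup_ins += n            # counts the work regardless of outcome
        if committed:
            self.n_live_tup += n
        else:
            self.n_dead_tup += n       # aborted inserts leave dead tuples

    def live_by_identity(self):
        # The identity the thread discusses abandoning:
        return self.n_tup_ins - self.n_tup_del

s = TableStats()
s.insert(1000, committed=True)
s.insert(1000, committed=False)   # transaction rolled back
print(s.live_by_identity())  # 2000: identity overstates live tuples
print(s.n_live_tup)          # 1000: separate tally stays correct
print(s.n_dead_tup)          # 1000: dead tuples autovacuum should see
```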
Alvaro Herrera writes:
If we apply Heikki's idea of advancing OldestXmin, I think what we
should do is grab the value from pgstats when vacuum starts, and each
time we're going to advance OldestXmin, grab the value from pgstats
again; accumulate the differences from the
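As far as the snippet goes, this scheme amounts to sampling the pgstats counter and accumulating deltas across the vacuum run. A rough sketch, where get_pgstats_value is a hypothetical stand-in for reading the counter from pgstats, not a real API:

```python
# Rough sketch of "grab the value at vacuum start, re-grab it each time
# OldestXmin is advanced, and accumulate the differences".
# get_pgstats_value() is a hypothetical stand-in, not a pgstats API.

def run_vacuum(get_pgstats_value, advance_points):
    last_seen = get_pgstats_value()   # value at vacuum start
    accumulated = 0
    for _ in range(advance_points):   # each time we advance OldestXmin...
        current = get_pgstats_value() # ...re-grab the pgstats value
        accumulated += current - last_seen
        last_seen = current
    return accumulated

# Simulated counter readings: starts at 100, grows while vacuum runs.
readings = iter([100, 140, 180, 250])
total = run_vacuum(lambda: next(readings), advance_points=3)
print(total)  # 150: the growth observed during the vacuum
```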
ITAGAKI Takahiro writes:
[quotes the analyze-threshold documentation excerpt above]
but deleted
Tom Lane wrote:
I'm kind of leaning to the separate-tally method and abandoning the
assumption that the identities hold. I'm not wedded to the idea
though. Any thoughts?
That seems like the best approach to me. Like the scan/fetch counters,
n_tup_ins and n_tup_del represent work done
Tom Lane wrote:
This means that a table could easily be full of dead tuples from failed
transactions, and yet autovacuum won't do a thing because it doesn't
know there are any. Perhaps this explains some of the reports we've
heard of tables bloating despite having autovac on.
I think this is
The pgstats subsystem does not correctly account for the effects of
failed transactions. Note the live/dead tuple counts in this example:
regression=# create table foo (f1 int);
CREATE TABLE
regression=# insert into foo select x from generate_series(1,1000) x;
INSERT 0 1000
-- wait a second for