On Fri, Feb 22, 2013 at 3:41 PM, James R Skaggs
james.r.ska...@seagate.com wrote:
BTW, jimbob and James.R.Skaggs are the same person; I just didn't want to
use my work email for this.
So, I have some observations. Is this what you are seeing as well?
So when we CLUSTER a heavily-updated table:
CLUSTER does appear to reset *n_dead_tup*, *n_tup_ins*, *n_tup_del*, and
*n_tup_hot_upd*, but NOT *n_live_tup*.
pg_stat_reset() truly clears out all the statistics counters.
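For anyone who wants to check this on their own system, a minimal sketch (the table name is the one from this thread; the queries are illustrative, not taken from the original mails):

```sql
-- Snapshot the counters, CLUSTER, then compare to see which are reset.
SELECT n_live_tup, n_dead_tup, n_tup_ins, n_tup_del, n_tup_hot_upd
FROM pg_stat_user_tables
WHERE relname = 'stream_file';

CLUSTER stream_file;

SELECT n_live_tup, n_dead_tup, n_tup_ins, n_tup_del, n_tup_hot_upd
FROM pg_stat_user_tables
WHERE relname = 'stream_file';
```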
Okay, I have some more info.
Some background info: this one table gets so many changes that I CLUSTER it
each night. However, after I do this, the statistics still appear to be
incorrect, even after I do a select pg_stat_reset(); followed by three
ANALYZEs with default_statistics_target set to 1, 10, and 100.
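For reference, that sequence looks roughly like this (set per session; a sketch, not the exact commands from the original mail):

```sql
SELECT pg_stat_reset();  -- clears all statistics counters

SET default_statistics_target = 1;
ANALYZE stream_file;
SET default_statistics_target = 10;
ANALYZE stream_file;
SET default_statistics_target = 100;
ANALYZE stream_file;
```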
On Sun, Feb 10, 2013 at 12:10 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Tue, Feb 5, 2013 at 2:00 PM, Kevin Grittner kgri...@ymail.com wrote:
OK, the estimate was 13 million and there were actually 13.8
million, but it is a random sample used to generate estimates.
That seems worse
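For scale, the gap being discussed here can be computed directly from the numbers in the report: ANALYZE's INFO line claims 158702435 estimated total rows, while the actual count is about 13.8 million:

```sql
-- How far off is the reported estimate from the actual row count?
SELECT round(158702435.0 / 13800000, 1) AS overestimate_factor;  -- ≈ 11.5
```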
On Tue, Feb 5, 2013 at 2:00 PM, Kevin Grittner kgri...@ymail.com wrote:
jim...@seagate.com wrote:
INFO: analyzing public.stream_file
INFO: stream_file: scanned 30000 of 2123642 pages, containing
184517 live rows and 2115512 dead rows; 30000 rows in sample,
158702435 estimated total rows
Kevin -
Here is the plan *after* we clustered on the PK index. It was probably that
way before the bloat, but I don't know.
Aggregate (cost=514557.38..514557.40 rows=1 width=12) (actual
time=8823.973..8823.974 rows=1 loops=1)
  -> HashAggregate (cost=471029.03..492793.20 rows=1243667 width=53)
The following bug has been logged on the website:
Bug reference: 7853
Logged by: James Skaggs
Email address: jim...@seagate.com
PostgreSQL version: 8.4.14
Operating system: RHEL6
Description:
After analyze verbose, the table shows 158 million rows. A select count(1)
shows 13.8 million rows.
jim...@seagate.com wrote:
INFO: analyzing public.stream_file
INFO: stream_file: scanned 30000 of 2123642 pages, containing
184517 live rows and 2115512 dead rows; 30000 rows in sample,
158702435 estimated total rows
184517 live rows in 30000 randomly sampled pages out of
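The live and dead counts in that sample make the bloat concrete; the sampled pages are roughly 92% dead rows (arithmetic only, using the figures from the INFO output above):

```sql
-- Fraction of sampled rows that were dead.
SELECT round(100.0 * 2115512 / (184517 + 2115512), 1) AS dead_pct;  -- ≈ 92.0
```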
Kevin -
Agreed, we shouldn't have so many dead rows. Our autovacuum is on, but with
default parameters. We are clustering today. This, of course, means
downtime and inconvenience to the users.
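One alternative to a nightly CLUSTER worth considering: make autovacuum much more aggressive on just this table via per-table storage parameters (supported since 8.4). The thresholds below are illustrative, not a recommendation for every workload:

```sql
-- Vacuum after ~2% of rows change instead of the default 20%,
-- and re-analyze after ~1% instead of the default 10%.
ALTER TABLE stream_file SET (
    autovacuum_vacuum_scale_factor = 0.02,
    autovacuum_analyze_scale_factor = 0.01
);
```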
Here is the troublesome query:
select
sum(case when t1.cdlc < 0 then 0 else t1.cdlc end) as
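As an aside, that CASE expression just clamps negative values to zero, so it can be spelled more compactly (equivalent assuming cdlc is a numeric type):

```sql
-- Same result as the CASE expression above.
sum(greatest(t1.cdlc, 0))
```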