Re: [PERFORM] good old VACUUM FULL

2011-03-23 Thread Scott Marlowe
On Tue, Mar 22, 2011 at 6:52 PM, felix <crucialfe...@gmail.com> wrote:
> I posted many weeks ago about a severe problem with a table that was
> obviously bloated and was stunningly slow. Up to 70 seconds just to get a
> row count on 300k rows.
> I removed the text column, so it really was just a few columns of fixed
> data.
> Still very bloated.  Table size was 450 MB.
> The advice I was given was to do CLUSTER, but this did not reduce the table
> size in the least.

Then either CLUSTER failed (did you get an error message?) or the table
was not bloated.  Given that the table was greatly reduced in size by
the VACUUM FULL, I'd guess CLUSTER failed for some reason.
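
An easy way to check is to compare the relation's on-disk size before and
after. A minimal sketch (the primary key index name here is a guess, and
the CLUSTER syntax shown is the pre-8.4 form, since max_fsm_pages implies
8.3 or earlier):

  select pg_size_pretty(pg_relation_size('fastadder_fastadderstatus'));
  cluster fastadder_fastadderstatus_pkey on fastadder_fastadderstatus;
  select pg_size_pretty(pg_relation_size('fastadder_fastadderstatus'));

If the reported size doesn't drop, CLUSTER never actually rewrote the table.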



Re: [PERFORM] good old VACUUM FULL

2011-03-23 Thread Shaun Thomas

On 03/23/2011 01:16 AM, Scott Marlowe wrote:


> Then either CLUSTER failed (did you get an error message?) or the table
> was not bloated.  Given that the table was greatly reduced in size by
> the VACUUM FULL, I'd guess CLUSTER failed for some reason.


Or it just bloated again. Remember, he still hasn't changed his 
max_fsm_pages setting, and that table apparently experiences *very* high 
turnover.


A 25x bloat factor isn't unheard of for such a table. We have one that 
needs frequent autovacuum passes or manual vacuums because it 
experiences several thousand updates/deletes per minute. The daily 
turnover of that particular table is around 110x. If our FSM settings 
were too low, or we didn't vacuum regularly, I could easily see that 
table quickly becoming unmanageable. I fear for his django_session table 
for similar reasons.
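
If you're on 8.4 or newer you can also make autovacuum much more 
aggressive for just that one table. A minimal sketch (the thresholds are 
illustrative, not recommendations; on 8.3 the same knobs live in the 
pg_autovacuum system catalog instead):

  alter table fastadder_fastadderstatus set (
      autovacuum_vacuum_scale_factor = 0.01,  -- vacuum once ~1% of rows are dead
      autovacuum_vacuum_threshold = 1000      -- plus a small fixed base
  );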


Felix, I know you don't want to experiment with kernel parameters, but 
you *need* to increase your max_fsm_pages setting.
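
For the record, that's a postgresql.conf edit plus a server restart, not 
strictly a kernel change (though more FSM pages do use a little more 
shared memory). On 8.3 and earlier, a database-wide VACUUM VERBOSE 
reports at the end of its output how many page slots you actually need. 
A sketch, with an illustrative value:

  # postgresql.conf -- 8.3 and earlier; the setting is gone in 8.4+
  max_fsm_pages = 1000000     # illustrative; size it from VACUUM VERBOSE
  max_fsm_relations = 2000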


--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
stho...@peak6.com




[PERFORM] good old VACUUM FULL

2011-03-22 Thread felix
I posted many weeks ago about a severe problem with a table that was
obviously bloated and was stunningly slow. Up to 70 seconds just to get a
row count on 300k rows.

I removed the text column, so it really was just a few columns of fixed
data.
Still very bloated.  Table size was 450 MB.

The advice I was given was to do CLUSTER, but this did not reduce the table
size in the least.
Nor performance.

Also to resize my free space map (which still does need to be done).
Since that involves tweaking the kernel settings, taking the site down,
restarting postgres, and exposing the system to all kinds of risks,
unknowns, and expensive experimentation, I was unable to do it and have
had to hobble along with a slow table in my backend holding up jobs.

There was much swearing that nobody should ever do VACUUM FULL.  The
manual advises against it.  Only crazy people do that.

Finally I decide to stop taking advice.

ns=> explain analyze select count(*) from fastadder_fastadderstatus;
----------------------------------------------------------------------------
 Aggregate  (cost=62602.08..62602.09 rows=1 width=0) (actual
time=25320.000..25320.000 rows=1 loops=1)
   ->  Seq Scan on fastadder_fastadderstatus  (cost=0.00..61815.86
rows=314486 width=0) (actual time=180.000..25140.000 rows=314493 loops=1)
 Total runtime: 25320.000 ms

ns=> vacuum full fastadder_fastadderstatus;

took about 20 minutes

ns=> explain analyze select count(*) from fastadder_fastadderstatus;

 Aggregate  (cost=7478.03..7478.04 rows=1 width=0) (actual
time=940.000..940.000 rows=1 loops=1)
   ->  Seq Scan on fastadder_fastadderstatus  (cost=0.00..6691.82
rows=314482 width=0) (actual time=0.000..530.000 rows=314493 loops=1)
 Total runtime: 940.000 ms

moral of the story:  if your table is really bloated, just do VACUUM FULL

CLUSTER will not reduce table bloat in an identical fashion
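
For anyone comparing numbers later: a quick way to gauge this kind of
bloat straight from the catalog is to look at how many 8 kB pages the
table occupies versus its row count (450 MB is roughly 57,000 pages,
which is a lot for ~300k rows of narrow fixed-width data):

  select relname, relpages, reltuples
  from pg_class
  where relname = 'fastadder_fastadderstatus';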


Re: [PERFORM] good old VACUUM FULL

2011-03-22 Thread Chris

On 23/03/11 11:52, felix wrote:

> I posted many weeks ago about a severe problem with a table that was
> obviously bloated and was stunningly slow. Up to 70 seconds just to get
> a row count on 300k rows.
>
> I removed the text column, so it really was just a few columns of fixed
> data.
> Still very bloated.  Table size was 450 MB.
>
> The advice I was given was to do CLUSTER, but this did not reduce the
> table size in the least.
> Nor performance.
>
> Also to resize my free space map (which still does need to be done).
> Since that involves tweaking the kernel settings, taking the site down,
> restarting postgres, and exposing the system to all kinds of risks,
> unknowns, and expensive experimentation, I was unable to do it and have
> had to hobble along with a slow table in my backend holding up jobs.
>
> There was much swearing that nobody should ever do VACUUM FULL.  The
> manual advises against it.  Only crazy people do that.


<snip>


> moral of the story:  if your table is really bloated, just do VACUUM FULL


You'll need to reindex that table now - VACUUM FULL can bloat your 
indexes, which will affect your other queries.


reindex table fastadder_fastadderstatus;
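
If you want to see whether the indexes are actually bloated first, their
on-disk sizes are easy to pull from the stats views. A minimal sketch:

  select indexrelname, pg_size_pretty(pg_relation_size(indexrelid))
  from pg_stat_user_indexes
  where relname = 'fastadder_fastadderstatus';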

--
Postgresql & php tutorials
http://www.designmagick.com/

