This reply was meant for the list.

---------- Forwarded message ----------
From: felix <crucialfe...@gmail.com>
Date: Fri, Feb 4, 2011 at 4:39 PM
Subject: Re: [PERFORM] Really really slow select count(*)
To: Greg Smith <g...@2ndquadrant.com>




On Fri, Feb 4, 2011 at 3:56 PM, Greg Smith <g...@2ndquadrant.com> wrote:

> PostgreSQL version?  If you're running on 8.3 or earlier, I would be
> suspicious that your Free Space Map has been overrun.
>

8.3
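
For reference, a rough sketch of how the 8.3 Free Space Map limits could be
checked (max_fsm_pages/max_fsm_relations only exist up to 8.3, and the final
lines of a database-wide VACUUM VERBOSE report FSM usage):

 -- FSM sizing GUCs (present only in 8.3 and earlier)
 SHOW max_fsm_pages;
 SHOW max_fsm_relations;

 -- the last lines of a database-wide VACUUM VERBOSE show how much of the
 -- free space map is in use and warn if max_fsm_pages is too small
 VACUUM VERBOSE;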



>
> What you are seeing is that the table itself is much larger on disk than
> it's supposed to be.
>

which part of the EXPLAIN output told you that?

(query below via Shaun Thomas)

SELECT relpages*8/1024 FROM pg_class
 WHERE relname='fastadder_fastadderstatus';

458MB

way too big. build_cache is a text column with values between 500 and 1k chars.
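
For comparison, a rough way to line up the on-disk size against the planner's
row estimate (assuming the default 8kB block size, which is what the
relpages*8/1024 arithmetic above converts to MB; pg_relation_size and
pg_size_pretty exist in 8.3):

 SELECT pg_size_pretty(pg_relation_size('fastadder_fastadderstatus')) AS on_disk,
        reltuples::bigint AS estimated_rows,
        relpages
   FROM pg_class
  WHERE relname = 'fastadder_fastadderstatus';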




>  That can be caused by frequent UPDATEs if you don't have vacuum cleanup
> working effectively; you'll get lots of dead sections left behind from
> UPDATEs in the middle.
>

ok, I just vacuumed it (have done this manually a few times as well), and
autovacuum is on.

still:
32840.000ms
and still 458MB
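
As far as I understand, a plain VACUUM only marks dead space as reusable
inside the file and does not shrink the file itself, which would explain why
the size stays at 458MB. Something like this (n_live_tup/n_dead_tup should be
in pg_stat_user_tables as of 8.3) shows how many dead rows are still being
tracked and when vacuum last ran:

 SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
   FROM pg_stat_user_tables
  WHERE relname = 'fastadder_fastadderstatus';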



> The best way to fix all this is to run CLUSTER on the table.
>

http://www.postgresonline.com/journal/archives/10-How-does-CLUSTER-ON-improve-index-performance.html

now that would order the data on disk by id (the primary key).
The table is accessed either by a query or by position_in_queue, which is
rewritten often (I might change that part of the app and pull it out of this
table).
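
If I go that route, I gather the command would look roughly like this; the
index name is only a guess at the primary key index (check
\d fastadder_fastadderstatus for the real one), and CLUSTER takes an
exclusive lock while it rewrites the table:

 -- index name below is assumed, not verified
 CLUSTER fastadder_fastadderstatus USING fastadder_fastadderstatus_pkey;
 ANALYZE fastadder_fastadderstatus;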

is this definitely the best way to fix this?

thanks for your help!


> That will introduce a bit of downtime while it holds a lock on the table
> (only a few minutes based on what you've shown here), but the copy you'll
> have afterwards won't be spread all over disk anymore.
>
> --
> Greg Smith   2ndQuadrant US    g...@2ndquadrant.com   Baltimore, MD
> PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
> "PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books
>
>
