On 10/12/10 10:44 AM, Scott Carey wrote:
On Oct 12, 2010, at 8:39 AM, Dan Harris wrote:

  On 10/11/10 8:02 PM, Scott Carey wrote:
would give you a 1MB read-ahead.  Also, consider XFS and its built-in 
defragmentation.  I have found that a longer lived postgres DB will get extreme
file fragmentation over time and sequential scans end up mostly random.  
On-line file defrag helps tremendously.
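
(The setting referred to above is the block-device read-ahead. blockdev counts it in 512-byte sectors, so a 1MB read-ahead is 2048 sectors; the device path below is an assumption, not from the original mail:)

```shell
# blockdev counts read-ahead in 512-byte sectors: 2048 sectors * 512 B = 1 MiB
echo $((2048 * 512))    # 1048576

# With root, on the actual data device (device path is an assumption):
#   blockdev --setra 2048 /dev/sda
#   blockdev --getra /dev/sda     # verify the new setting
```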

We just had a corrupt table caused by an XFS online defrag.  I'm scared
to use this again while the db is live.  Has anyone else found this to
be safe?  I can vouch for the fragmentation issue, though; it happens
very quickly in our system.


What version?  I'm using the latest CentOS extras build.

We've been doing online defrag for a while now on a very busy database with
> 8TB of data.  Not that that means there are no bugs...

It is a relatively simple thing in xfs -- it writes a new file to temp in a way 
that allocates contiguous space if available, then if the file has not been 
modified since it was re-written it is essentially moved on top of the other 
one.  This should be safe provided the journaling and storage is safe, etc.
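
The rewrite-then-swap pattern described above can be sketched with ordinary files; this is illustrative only, since xfs_fsr does the real work with an atomic in-kernel extent swap rather than a rename, and the device and mount paths in the comments are assumptions:

```shell
# Illustrative only: mimic xfs_fsr's rewrite-then-swap with plain files.
f=$(mktemp)
printf 'table data' > "$f"

cp "$f" "$f.new"      # rewrite into freshly allocated (ideally contiguous) space
mv "$f.new" "$f"      # atomic replace; xfs_fsr instead swaps extents in-kernel,
                      # and only if the original was not modified in the meantime

cat "$f"              # prints: table data
rm -f "$f"

# The real thing (needs root and an XFS filesystem), e.g.:
#   xfs_db -r -c frag /dev/sdb1      # read-only fragmentation report; device is an assumption
#   xfs_fsr -v /var/lib/postgresql   # online defrag of a directory tree; path is an assumption
```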

I'm not sure how to figure out what version of XFS we're on, but it's Linux kernel 2.6.24-24 x86_64 on Ubuntu Server 8.04.3, Postgres version 8.3.
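
On the version question: the XFS driver ships inside the kernel, so its version effectively is the kernel version, while the userland tools (xfsprogs) are versioned separately. A quick way to check both, assuming xfsprogs may or may not be installed:

```shell
# XFS kernel driver version == kernel release
uname -r                            # e.g. 2.6.24-24-server

# xfsprogs userland version, if the tools are installed
xfs_repair -V 2>/dev/null || true   # guarded: harmless if xfsprogs is absent
```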

We're due for an upgrade on that server soon, so we'll do some more testing once we upgrade. Right now we are just living with the fragmentation. I'm glad to hear that regular on-line defrag is working successfully for you; at least that gives me hope we can rely on it in the future.

-Dan

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
