Hi Andrew, 
Thank you for your prompt reply.
Are you using any special XFS options?
I mean non-default values for the log buffers (logbufs), the log buffer size
(logbsize), extent size preallocation, etc.?
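Just to make the question concrete, I mean mount options roughly along these
lines (the device, mount point and values below are only illustrative guesses
on my side, not something I have tested):

    # hypothetical /etc/fstab entry -- illustrative values, not a recommendation
    /dev/sdb1  /pgdata  xfs  noatime,logbufs=8,logbsize=256k,allocsize=64m  0 0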
I will have only 6 big tables and about 20 other relatively small (fact
aggregation) tables (~10-20 GB each).
I believe it would be a good idea to keep the data in chunks of space that are
as contiguous as possible (from the OS point of view), in order to make full
table scans as fast as possible.
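If I understand the XFS extent size hints correctly, something like the
following on the data directory would make newly created files allocate in
larger extents (again, just a sketch of what I have in mind; the path and the
value are made up):

    # hypothetical: set a 64 MB extent size hint on the directory;
    # new files created under it inherit the hint
    xfs_io -c "extsize 64m" /pgdata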


Best Regards,
Milen Kulev

-----Original Message-----
From: J. Andrew Rogers [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, August 02, 2006 12:47 AM
To: Milen Kulev
Cc: Pgsql-Performance ((E-mail))
Subject: Re: [PERFORM] XFS filessystem for Datawarehousing



On Aug 1, 2006, at 2:49 PM, Milen Kulev wrote:
> Is anyone using XFS for storing/retrieving relatively large amounts
> of data (~200GB)?


Yes, we've been using it on Linux since v2.4 (currently v2.6) and it  
has been rock solid on our database servers (Opterons, running in  
both 32-bit and 64-bit mode).  Our databases are not quite 200GB  
(maybe 75GB for a big one currently), but ballpark enough that the  
experience is probably valid.  We also have a few terabyte+
non-database XFS file servers.

Performance has been very good even with nearly full file systems,  
and reliability has been perfect so far. Some of those file systems  
get used pretty hard for months or years non-stop.  Comparatively, I  
can only tell you that XFS tends to be significantly faster than  
Ext3, but we never did any serious file system tuning either.

Knowing nothing else, my experience would suggest that XFS is a fine  
and safe choice for your application.


J. Andrew Rogers

