Sorry, I forgot to ask:
What is the recommended/best PG block size for a DWH database? 16k, 32k, or 64k?
What should be the relation between the XFS/RAID stripe size and the PG block size?
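
To make the second question concrete (the numbers are only an example, not a 
proposal): with 4 disks in RAID 0 and a 256 KB stripe unit per disk, one full 
stripe is 4 x 256 KB = 1024 KB, i.e. 128 blocks at the default 8 KB BLCKSZ, 
32 blocks at 32 KB, or 16 blocks at 64 KB. Is the rule of thumb simply that 
the PG block size should evenly divide the stripe unit, or is there more to it?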

Best Regards,
Milen Kulev
 

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Milen Kulev
Sent: Tuesday, August 01, 2006 11:50 PM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] XFS filessystem for Datawarehousing


I intend to test Postgres/Bizgres for DWH use. I want to use the XFS filesystem 
to get the best possible performance at the FS level (correct me if I am wrong!).

Is anyone using XFS for storing/retrieving a relatively large amount of data 
(~200 GB)?

If yes, how are the performance and stability of XFS?
I am especially interested in recommendations about XFS mount options and 
mkfs.xfs options. My setup will be roughly this:
1) 4 SCSI HDDs, 128 GB each
2) RAID 0 across the four SCSI disks using LVM (software RAID), sketched below
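
For reference, this is roughly how I plan to create the striped volume. The 
device names, VG/LV names, and the 256 KB stripe size are just placeholders, 
not a proposal:

    # software RAID 0 via LVM: 4 stripes, 256 KB stripe size (placeholder values)
    pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    vgcreate vgdata /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    lvcreate -i 4 -I 256 -L 400G -n lvpgdata vgdata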

There are two other SATA HDDs in the server. The server has 2 physical CPUs 
(Xeon at 3 GHz), 4 logical CPUs, 8 GB RAM, and the OS is SLES9 SP3.

My questions:
1) Should I place an external XFS journal on a separate device?
2) What should the journal buffer size (logbsize) be?
3) How many journal buffers (logbufs) should I configure?
4) How many allocation groups (for mkfs.xfs) should I configure?
5) Is it worth setting noatime?
6) Which I/O scheduler (elevator) should I use (massive sequential reads)?
7) What is the ideal stripe unit and width (for a RAID device)?
A rough sketch of the mkfs.xfs/mount commands I have in mind follows below.
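
To make questions 1-7 more concrete, here is a rough sketch of what I have in 
mind for mkfs.xfs and mount. All values are placeholders taken from the man 
pages, not recommendations, and using /dev/sde1 (one of the SATA disks) as an 
external log device is only hypothetical:

    # hypothetical mkfs: 8 allocation groups, stripe unit/width matching the
    # 4 x 256 KB LVM stripe, v2 log on a separate (SATA) device
    mkfs.xfs -d agcount=8,su=256k,sw=4 -l logdev=/dev/sde1,version=2,size=64m /dev/vgdata/lvpgdata

    # hypothetical mount options: no atime updates, 8 log buffers of 256 KB each
    mount -t xfs -o noatime,logdev=/dev/sde1,logbufs=8,logbsize=256k /dev/vgdata/lvpgdata /pgdata

    # and for the I/O scheduler, e.g. the "elevator=deadline" kernel boot parameter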

I will appreciate any opinions, suggestions, and pointers.

Best Regards,
Milen Kulev

