On Thu, 28 Jun 2007, Peter Rabbitson wrote:

Justin Piszcz wrote:

On Thu, 28 Jun 2007, Peter Rabbitson wrote:

Interesting: I came up with the same results (a 1M chunk being superior) with a completely different RAID set with XFS on top:
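For reference, chunk size is fixed at array-creation time. A minimal sketch of the arithmetic and the relevant commands, assuming a hypothetical 4-disk RAID5 (level, device names, and disk count are examples, not the actual setups in this thread):

```shell
# Full-stripe width for a 1 MiB chunk on a hypothetical 4-disk RAID5
# (3 data disks). None of these values come from the thread itself.
chunk_kb=1024
data_disks=3
stripe_kb=$(( chunk_kb * data_disks ))
echo "full stripe: ${stripe_kb}k"

# Creating such an array and an aligned XFS on top would look like this
# (mdadm --chunk takes KiB; su/sw pass the md geometry to mkfs.xfs):
#   mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=1024 /dev/sd[abcd]1
#   mkfs.xfs -d su=1024k,sw=3 /dev/md0
```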

...

Could it be attributed to XFS itself?

Peter


Good question. By the way, how much cache do the drives you are testing with have?


I believe 8MB, but I am not sure I am looking at the right number:

[EMAIL PROTECTED]:~# hdparm -i /dev/sda

/dev/sda:

Model=Maxtor 7Y250M0, FwRev=YAR51HW0, SerialNo=Y66B7Z4E
Config={ Fixed }
RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=4
BuffType=DualPortCache, BuffSize=7936kB, MaxMultSect=16, MultSect=?0?
CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=268435455
IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
PIO modes:  pio0 pio1 pio2 pio3 pio4
DMA modes:  mdma0 mdma1 mdma2
UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5
AdvancedPM=yes: disabled (255) WriteCache=enabled
Drive conforms to: ATA/ATAPI-7 T13 1532D revision 0: ATA/ATAPI-1 ATA/ATAPI-2 ATA/ATAPI-3 ATA/ATAPI-4 ATA/ATAPI-5 ATA/ATAPI-6 ATA/ATAPI-7

* signifies the current active mode

[EMAIL PROTECTED]:~#

1M chunk consistently delivered best performance with:

o A plain dumb dd run
o bonnie
o two bonnie threads
o iozone with 4 threads
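The "plain dumb dd run" above is easy to reproduce. A sketch that writes to a temp file so it runs anywhere; on the real array you would point it at the md device, or at a file on the XFS mount, instead (those paths are not given in the thread):

```shell
# Sequential-write test with dd; conv=fdatasync forces the data to disk
# before dd reports throughput, so the page cache doesn't inflate the number.
testfile=$(mktemp)
dd if=/dev/zero of="$testfile" bs=1M count=16 conv=fdatasync 2>&1 | tail -1
size=$(stat -c %s "$testfile")
echo "wrote $(( size / 1024 / 1024 )) MiB"
rm -f "$testfile"
```

For read tests, drop the page cache first (sync; echo 3 > /proc/sys/vm/drop_caches) so a second run isn't served from RAM.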

My readahead is set at 256 for the drives and 16384 for the array (128k and 8M respectively).
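Those readahead numbers are in 512-byte sectors, which is the unit blockdev uses. A quick sketch double-checking the figures in parentheses (the device names in the comments are examples):

```shell
# Readahead as reported/set by "blockdev --getra/--setra" is counted in
# 512-byte sectors. Convert the settings quoted above to KB and MB:
drive_ra=256      # per-drive readahead, sectors
array_ra=16384    # array readahead, sectors

drive_kb=$(( drive_ra * 512 / 1024 ))
array_mb=$(( array_ra * 512 / 1024 / 1024 ))
echo "drive: ${drive_kb}k  array: ${array_mb}M"

# To inspect or change on a live system (device names are examples):
#   blockdev --getra /dev/sda
#   blockdev --setra 16384 /dev/md0
```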


8MB yup: BuffSize=7936kB.

My readahead is set to 64 megabytes, and stripe_cache_size to 16384.
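Worth noting: md's stripe_cache_size is counted in pages per member device, not sectors, so 16384 is a much bigger setting than it looks. A sketch of the approximate RAM it ties up (the page size and disk count are assumptions, not from the thread):

```shell
# Approximate stripe-cache memory: entries * page size * member disks.
stripe_cache=16384   # entries, as in the message above
page_kb=4            # page size in KiB on x86 (assumption)
ndisks=6             # hypothetical number of member disks
cache_mb=$(( stripe_cache * page_kb * ndisks / 1024 ))
echo "stripe cache uses roughly ${cache_mb} MiB"

# Inspect or tune on a live array (md0 is an example name):
#   cat /sys/block/md0/md/stripe_cache_size
#   echo 16384 > /sys/block/md0/md/stripe_cache_size
```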

Justin.
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
