Christoph Hellwig <[email protected]> writes:

> Hi Jeff,
>
> thanks for the detailed numbers!
>
> The bigger I/O size makes a drastic impact for Linux software RAID
> setups, which were the driver for this change.  For the RAID5/6 over
> SATA disk setups that I was benchmarking, it gives between 20 and 40%
> better sequential read and write numbers.

Hi, Christoph,

Unfortunately, I'm unable to reproduce your results (though my test
setup uses SAS disks, not SATA).  I tried with a 10-data-disk md RAID5,
using both 32k and 128k chunk sizes.  I modified fio to read/write in
multiples of the stripe width, and I also ran aio-stress across a range
of queue depths and I/O sizes for sequential read, sequential write,
random read, and random write.  I didn't see any measurable performance
difference.  Do you still have access to your test setup?
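
For concreteness, the sequential runs looked roughly like the fio
invocation below (illustrative only: /dev/md0 stands in for my array,
and with 10 data disks and a 32k chunk one full stripe is 320k, so bs
is kept a multiple of the stripe width):

  # full-stripe sequential reads; swap in --rw=write for the write pass
  fio --name=stripe-read --filename=/dev/md0 --rw=read \
      --ioengine=libaio --direct=1 --iodepth=32 --bs=320k \
      --runtime=60 --time_based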

What do you think about reinstituting the artificial max_sectors_kb cap,
but bumping the default up from 512KB to 1280KB?  I had our performance
team run numbers on their test setup (a 12-disk RAID0 with a 32k chunk
size, fwiw) with max_sectors_kb set to 1280 and, aside from one odd data
point, things looked good.
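
If anyone wants to try the same cap on their own hardware, it can be
set per-device through sysfs without a patched kernel (sdb here just
stands in for a member disk; the effective value is still clamped by
max_hw_sectors_kb):

  cat /sys/block/sdb/queue/max_sectors_kb        # current cap
  echo 1280 > /sys/block/sdb/queue/max_sectors_kb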

Cheers,
Jeff