Well, there are a few different factors there.

1) Since the release of SL6, LVM will read and auto-tune those parameters if
it can read those details off your RAID controller or SAN.
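The values that auto-tuning works from are the I/O topology hints the kernel exports under sysfs. A quick way to see what your controller is actually reporting (a sketch; `sda` is a hypothetical device name, and the files only carry real values if the controller exports its geometry):

```shell
# minimum_io_size is typically the RAID chunk size and optimal_io_size the
# full stripe, both in bytes; zero means the controller reported nothing.
for f in minimum_io_size optimal_io_size; do
    path=/sys/block/sda/queue/$f        # "sda" is a hypothetical device
    if [ -r "$path" ]; then
        echo "$f: $(cat "$path")"
    fi
done
echo "topology check complete"
```

If both come back zero, the auto-tuning has nothing to go on and you're back to setting the geometry by hand.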

2) Due to other bottlenecks in ext4 around the journal, it probably
wouldn't make as big a difference as it would on XFS.

3) The CFQ I/O scheduler messes with this; from my tests it's great for
desktops but isn't good for hardware RAID controllers. I find noop works
better, and that goes double for VMs.

4) The reason this helps is that it reduces the work your RAID controller's
microcontroller has to do. But if you have sufficient cache and the array
isn't big enough to put a strain on it, it wouldn't appear to do anything
at all unless you could monitor the RAID controller's internal CPU
utilization.
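For completeness, here is the arithmetic behind the stride/stripe-width options, as a sketch with assumed geometry (RAID5 over 5 disks, so 4 data disks, 64 KiB chunks, 4 KiB ext4 blocks; `/dev/sdb1` is a hypothetical partition, and the command is only printed here since mkfs is destructive):

```shell
chunk_kib=64                             # assumed RAID chunk size (KiB)
block_kib=4                              # ext4 block size (KiB)
data_disks=4                             # RAID5 over 5 disks = 4 data disks

stride=$((chunk_kib / block_kib))        # filesystem blocks per chunk
stripe_width=$((stride * data_disks))    # blocks per full data stripe

# Print the invocation instead of running it -- mkfs.ext4 wipes the target.
echo "mkfs.ext4 -b 4096 -E stride=$stride,stripe-width=$stripe_width /dev/sdb1"
```

With those assumed numbers that prints stride=16, stripe-width=64; plug in your own chunk size and data-disk count.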
On Nov 15, 2012 4:22 PM, "Ken Teh" <[email protected]> wrote:

> Well, I tried 3 scenarios with the stride/stripe-width settings:
>
> (1) None, mkfs.ext4 with defaults.
>
> (2) Using the -E option to set the stride/stripe-width to match the disk
> array
>     configuration.
>
> (3) Using LVM with defaults.
>
> There was no difference in writing to the disk.
>
>
>
> On 11/06/2012 11:21 AM, Ken Teh wrote:
>
>> I'm wondering if anyone has tried using stride and stripe-width options
>> when
>> creating an ext4 filesystem on a hardware RAID5 array.
>>
>> Does it improve the performance of the array?
>>
>> Should you create the filesystem directly on the partition?  What happens
>> if
>> you create it on a logical volume that is created on the physical
>> partition?
>>
> >> The LVM volume was created without options, i.e., defaults.  I know LVM
> >> supports
> >> stride and stripes as well but I don't know how they map to a physical
> >> RAID device so I've never bothered with it.
>>
>
