D. Lance Robinson wrote:
> 
> Try bumping your chunk-size up. I usually use 64. When this number is low,
> you cause more scsi requests to be performed than needed. If really big (
> >=256 ) RAID 0 won't help much.
> 
What if the chunk size matched ext2fs's block-group size (i.e. 8 MB)? That
should give very good read/write performance with moderately large files
(i.e. smaller than 8 MB) when multiple processes access the fs, because
ext2fs usually tries to store a file completely within one block group. The
gain would be n-fold, where n is the number of disks in the RAID 0 array,
provided the number of processes is higher than that.
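To make the arithmetic concrete, here is a small sketch of the striping math under those assumptions (chunk size equal to the 8 MB block-group size, and a hypothetical 4-disk array; the function name is mine, not anything from the md driver):

```python
CHUNK = 8 * 1024 * 1024  # 8 MB chunk, matching the assumed ext2 block-group size
NDISKS = 4               # hypothetical number of disks in the RAID 0 array

def disk_for_offset(offset, chunk=CHUNK, ndisks=NDISKS):
    """Which member disk a given byte offset of the array falls on
    under plain RAID 0 striping."""
    return (offset // chunk) % ndisks

# A 6 MB file stored entirely inside one 8 MB block group touches one disk:
group_start = 2 * CHUNK  # say the file lives in the third block group
disks = {disk_for_offset(group_start + off)
         for off in range(0, 6 * 1024 * 1024, 4096)}
# disks contains exactly one member: single-speed for that one reader,
# but readers working in different block groups land on different disks
# and can run in parallel.
```

So any one process sees one disk's worth of bandwidth, while up to NDISKS processes reading files from different block groups can proceed concurrently.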
It would give only single-speed throughput (so to speak) for any single
application, though.
But then: wouldn't linear append be essentially the same, given that
ext2fs spreads files across all the block groups from the beginning?

Wouldn't that be the perfect setup for a web server's document volume,
with MinServers == n? The files are usually small, and there are usually
many more than n servers running simultaneously.

Is this analysis correct, or does it contain flaws?
What would be the difference between RAID 0 with 8 MB chunks and linear
append?
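For reference, a raidtab sketch of the 8 MB-chunk RAID 0 setup I mean (chunk-size is given in KB in /etc/raidtab, so 8192 = 8 MB; device names are just placeholders):

```
raiddev /dev/md0
    raid-level              0
    nr-raid-disks           2
    persistent-superblock   1
    chunk-size              8192
    device                  /dev/sda1
    raid-disk               0
    device                  /dev/sdb1
    raid-disk               1
```

The linear variant would be the same stanza with "raid-level linear", where members are simply concatenated rather than striped, so no chunk boundary ever splits a file across disks.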

Just my thoughts wandering off...

Marc
