Re: RAID5 to RAID6 reshape?

2008-02-24 Thread Peter Grandi
>>> On Sat, 23 Feb 2008 21:40:08 +0100, Nagilum
>>> <[EMAIL PROTECTED]> said:

[ ... ]

>> * Doing unaligned writes on a 13+1 or 12+2 is catastrophically
>> slow because of the RMW cycle. This is of course independent
>> of how one got to something like a 13+1 or a 12+2.

nagilum> Changing a sing...
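[Editorial note: for scale, on a 12+2 array with 64KiB chunks a full
stripe is 12 x 64KiB = 768KiB, and any write smaller or misaligned
forces md to read back old data and parity first. A common mitigation,
as a sketch assuming those geometry numbers and XFS (not something
stated in this thread), is to tell the filesystem the stripe geometry
so it can issue full-stripe writes where possible:

    # hypothetical 12-data-disk RAID6 with 64KiB chunks:
    # stripe unit = one chunk, stripe width = 12 units
    mkfs.xfs -d su=64k,sw=12 /dev/md0
]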

Re: RAID5 to RAID6 reshape?

2008-02-22 Thread Peter Grandi
[ ... ]

>> * Suppose you have a 2+1 array which is full. Now you add a
>> disk and that means that almost all free space is on a single
>> disk. The MD subsystem has two options as to where to add
>> that lump of space, consider why neither is very pleasant.

> No, only one, at the end of the md d...
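[Editorial note: as a concrete sketch of the operation under
discussion (hypothetical device names; the backup file covers the
critical section at the start of the reshape):

    # add the new disk as a spare, then reshape 3 -> 4 devices
    mdadm /dev/md0 --add /dev/sdd1
    mdadm --grow /dev/md0 --raid-devices=4 \
          --backup-file=/root/md0-grow.bak

The reshape restripes the existing data across all four spindles; the
new capacity still appears at the end of the md device, which is the
point being debated above.]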

Re: LVM performance (was: Re: RAID5 to RAID6 reshape?)

2008-02-21 Thread Peter Grandi
>> This might be related to raid chunk positioning with respect
>> to LVM chunk positioning. If they interfere there indeed may
>> be some performance drop. Best to make sure that those chunks
>> are aligned together.

> Interesting. I'm seeing a 20% performance drop too, with default
> RAID and LV...
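[Editorial note: one way to keep the two layers aligned, as a sketch
not taken from the thread (the 256KiB figure assumes a 64KiB chunk
with 4 data disks, and --dataalignment is from newer LVM2 releases
than this 2008 thread), is to start the PV data area on a full-stripe
boundary:

    # align LVM's data area to the RAID stripe width
    pvcreate --dataalignment 256k /dev/md0
    vgcreate vg0 /dev/md0
]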

Re: How many drives are bad?

2008-02-21 Thread Peter Grandi
>>> On Tue, 19 Feb 2008 14:25:28 -0500, "Norman Elton"
>>> <[EMAIL PROTECTED]> said:

[ ... ]

normelton> The box presents 48 drives, split across 6 SATA
normelton> controllers. So disks sda-sdh are on one controller,
normelton> etc. In our configuration, I run a RAID5 MD array for
normelton> each...
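[Editorial note: one arrangement sometimes suggested for such boxes,
purely illustrative and not necessarily what either poster ran: build
each array from one disk per controller, so a controller failure
degrades every array by a single member instead of destroying one
array outright:

    # with sda-sdh on controller 1, sdi-sdp on controller 2, etc.,
    # take the first disk of each of the 6 controllers
    mdadm --create /dev/md0 --level=5 --raid-devices=6 \
          /dev/sda1 /dev/sdi1 /dev/sdq1 /dev/sdy1 /dev/sdag1 /dev/sdao1
]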

Re: Linux Software RAID 5 + XFS Multi-Benchmarks / 10 Raptors Again

2008-02-19 Thread Peter Grandi
>> What sort of tools are you using to get these benchmarks, and
>> can I use them for ext3?

The only simple tools I have found that give semi-reasonable
numbers, avoiding most of the many pitfalls of storage speed
testing (almost all storage benchmarks I see are largely
meaningless), are recent v...
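[Editorial note: the specific tools named in the post are cut off
above; as one illustration (not necessarily the poster's choice) of
the kind of careful test meant here, a fio run using large sequential
transfers with O_DIRECT so the page cache does not inflate the
numbers:

    fio --name=seqwrite --directory=/mnt/test --rw=write \
        --bs=1m --size=4g --direct=1 --ioengine=sync
]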

Re: RAID5 to RAID6 reshape?

2008-02-18 Thread Peter Grandi
>>> On Sun, 17 Feb 2008 07:45:26 -0700, "Conway S. Smith"
>>> <[EMAIL PROTECTED]> said:

[ ... ]

beolach> Which part isn't wise? Starting w/ a few drives w/ the
beolach> intention of growing; or ending w/ a large array (IOW,
beolach> are 14 drives more than I should put in 1 array & expect
beolach> ...

Re: RAID5 to RAID6 reshape?

2008-02-17 Thread Peter Grandi
>>> On Sat, 16 Feb 2008 20:58:07 -0700, Beolach
>>> <[EMAIL PROTECTED]> said:

beolach> [ ... ] start w/ 3 drives in RAID5, and add drives as I
beolach> run low on free space, eventually to a total of 14
beolach> drives (the max the case can fit).

Like for so many other posts to this list, all...
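[Editorial note: for the operation in the subject line, a sketch with
hypothetical device names; note that changing levels this way needs a
newer mdadm and kernel than were current when this thread ran:

    # convert a 4-disk RAID5 into a 5-disk RAID6 in one reshape
    mdadm /dev/md0 --add /dev/sde1
    mdadm --grow /dev/md0 --level=6 --raid-devices=5 \
          --backup-file=/root/md0-reshape.bak
]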

Re: striping of a 4 drive raid10

2008-01-27 Thread Peter Grandi
>>> On Sun, 27 Jan 2008 20:33:45 +0100, Keld Jørn Simonsen
>>> <[EMAIL PROTECTED]> said:

keld> Hi I have tried to make a striping raid out of my new 4 x
keld> 1 TB SATA-2 disks. I tried raid10,f2 in several ways:
keld> 1: md0 = raid10,f2 of sda1+sdb1, md1 = raid10,f2 of
keld>    sdc1+sdd1, md2 = raid0
keld> ...
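[Editorial note: for comparison, md can do the whole thing as one
single-layer array instead of raid0 over two raid10 pairs; a sketch
using the device names from the post:

    mdadm --create /dev/md0 --level=10 --layout=f2 \
          --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
]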

Re: New XFS benchmarks using David Chinner's recommendations for XFS-based optimizations.

2007-12-31 Thread Peter Grandi
>> Why does mdadm still use 64k for the default chunk size?

> Probably because this is the best balance for average file
> sizes, which are smaller than you seem to be testing with?

Well, "average file sizes" relate less to chunk sizes than access
patterns do. Single threaded sequential reads with...
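[Editorial note: the chunk size is a create-time knob, so the
trade-off is cheap to test; a hypothetical array with a larger chunk
size, device names illustrative only:

    # 256KiB chunks instead of the 64KiB default
    mdadm --create /dev/md0 --level=5 --chunk=256 \
          --raid-devices=4 /dev/sd[abcd]1
]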

Re: raid10 performance question

2007-12-25 Thread Peter Grandi
>>> On Tue, 25 Dec 2007 19:08:15 +0000,
>>> [EMAIL PROTECTED] (Peter Grandi) said:

[ ... ]

>> It's the raid10,f2 *read* performance in degraded mode that is
>> strange - I get almost exactly 50% of the non-degraded mode
>> read performance. Why is that...

Re: raid10 performance question

2007-12-25 Thread Peter Grandi
>>> On Sun, 23 Dec 2007 08:26:55 -0600, "Jon Nelson"
>>> <[EMAIL PROTECTED]> said:

> I've found in some tests that raid10,f2 gives me the best I/O
> of any raid5 or raid10 format.

Mostly, depending on type of workload. Anyhow in general most
forms of RAID10 are cool, and handle disk losses better...
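[Editorial note: the md raid10 layout families being compared, shown
as alternative create commands for the same four hypothetical disks;
'man md' documents the near/far/offset semantics:

    # near layout (the default)
    mdadm --create /dev/md0 -l raid10 -p n2 -n 4 /dev/sd[abcd]1
    # far layout, the 'f2' discussed here
    mdadm --create /dev/md0 -l raid10 -p f2 -n 4 /dev/sd[abcd]1
    # offset layout
    mdadm --create /dev/md0 -l raid10 -p o2 -n 4 /dev/sd[abcd]1
]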

Re: raid6 check/repair

2007-12-04 Thread Peter Grandi
[ ... on RAID1, ... RAID6 error recovery ... ]

tn> The use case for the proposed 'repair' would be occasional,
tn> low-frequency corruption, for which many sources can be
tn> imagined:

tn> Any piece of hardware has a certain failure rate, which may
tn> depend on things like age, temperature, stab...
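[Editorial note: for context, the existing md scrub interface that a
RAID6 'repair' would extend (sysfs paths as in mainline kernels, with
md0 assumed):

    # count mismatches between data and redundancy, then rewrite
    echo check  > /sys/block/md0/md/sync_action
    cat /sys/block/md0/md/mismatch_cnt
    echo repair > /sys/block/md0/md/sync_action
]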

Re: md RAID 10 on Linux 2.6.20?

2007-11-24 Thread Peter Grandi
>>> On Thu, 22 Nov 2007 22:09:27 -0500, [EMAIL PROTECTED]
>>> said:

> [ ... ] a RAID 10 "personality" defined in md that can be
> implemented using mdadm. If so, is it available in 2.6.20.11,
> [ ... ]

Very good choice about 'raid10' in general. For a single layer
just use '-l raid10'. Run 'man m...
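[Editorial note: spelled out, with hypothetical device names, the
single-layer form is:

    # the md raid10 personality directly, no nested raid1+raid0
    mdadm --create /dev/md0 -l raid10 -n 4 /dev/sd[abcd]1
]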

Re: slow raid5 performance

2007-10-22 Thread Peter Grandi
>>> On Mon, 22 Oct 2007 15:33:09 -0400 (EDT), Justin Piszcz
>>> <[EMAIL PROTECTED]> said:

[ ... speed difference between PCI and PCIe RAID HAs ... ]

>> I recently built a 3 drive RAID5 using the onboard SATA
>> controllers on an MCP55 based board and get around 115MB/s
>> write and 141MB/s read.
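[Editorial note: the bus arithmetic behind the PCI/PCIe comparison,
using standard figures rather than anything from the post:

    32-bit/33MHz PCI : 4 B x 33 MHz ~= 133 MB/s, shared by the whole bus
    PCIe 1.0 x1      : ~250 MB/s per direction, per slot
    reads above      : 141 MB/s, already more than a PCI HA could carry
]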

Re: slow raid5 performance

2007-10-22 Thread Peter Grandi
>>> On Mon, 22 Oct 2007 10:18:59 -0700 (PDT), Peter
>>> <[EMAIL PROTECTED]> said:

[ ... ]

thenephilim13> I can understand that if a RMW happens it will
thenephilim13> effectively lower the write throughput
thenephilim13> substantially but I'm not entirely sure why
thenephilim13> this would h...
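[Editorial note: the arithmetic behind the RMW penalty, as a generic
derivation not quoted from the reply. Updating one chunk of an N+1
RAID5 stripe costs:

    read old data + read old parity              (2 transfers)
    new parity = old parity XOR old data XOR new data
    write new data + write new parity            (2 transfers)

That is four disk transfers, plus roughly a full rotation of latency,
for a single logical write; a full-stripe write needs no reads at
all.]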

Re: slow raid5 performance

2007-10-20 Thread Peter Grandi
>>> On Thu, 18 Oct 2007 16:45:20 -0700 (PDT), nefilim
>>> <[EMAIL PROTECTED]> said:

[ ... ]

> 3 x 500GB WD RE2 hard drives
> AMD Athlon XP 2400 (2.0GHz), 1GB RAM

[ ... ]

> avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
>            1.01   0.00    55.56    40.40    0.00   3.03

[ ... ]

> w...
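[Editorial note: given the ~56% system CPU on a 2GHz Athlon XP, one
quick sanity check (a suggestion, not something from the thread) is
the XOR speed the kernel measured at boot, which bounds RAID5 parity
throughput:

    # the md driver benchmarks its xor routines at boot
    dmesg | grep -A4 'xor:'
]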