iostat messed up with md on 2.6.16.x

2006-05-23 Thread Pallai Roland
Hi, I upgraded my kernel from 2.6.15.6 to 2.6.16.16 and now 'iostat -x 1' permanently shows 100% utilisation on each disk that is a member of an md array. I asked a friend who is using three boxes with 2.6.16.2, 2.6.16.9 and 2.6.16.11 and raid1, and he reported the same. Does it work for anyone? I don't think

Re: raid5 disaster

2006-05-23 Thread Mike Hardy
Bruno Seoane wrote: > mdadm -C -l5 -n5 > -c=128 /dev/md0 /dev/sdb1 /dev/sdd1 /dev/sde1 /dev/sdc1 /dev/sda1 > > I took the device order from the mdadm output on a working device. Is this > the way the command is supposed to be assembled? > > Is there anything else I should consider or any o
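For reference, the same intent spelled with mdadm's long options (assuming the device order above really is the original one; re-creating with the wrong order or chunk size is destructive) would be:

  mdadm --create /dev/md0 --level=5 --raid-devices=5 --chunk=128 \
      /dev/sdb1 /dev/sdd1 /dev/sde1 /dev/sdc1 /dev/sda1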

[RFC][PATCH] md: Move stripe operations outside the spinlock (v2)

2006-05-23 Thread Dan Williams
The following is a revision of the patch with the suggested changes:
 - Eliminate the wait_for_block_ops queue
 - Simplify the code by tracking the operations at the stripe level, not the block level
 - Integrate the work struct into stripe_head (removing the need for memory allocation)
 - Make the work queue
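As a rough illustration of the "integrate the work struct into stripe_head" point, a minimal sketch using the 2.6.16-era workqueue API (three-argument INIT_WORK, handler taking a void * argument) might look like the following; the field and function names are illustrative, not taken from the actual patch:

  #include <linux/workqueue.h>

  struct stripe_head_sketch {
          /* ... existing stripe_head fields ... */
          unsigned long ops_pending;      /* which operations were requested (illustrative) */
          struct work_struct ops_work;    /* embedded, so no per-request allocation is needed */
  };

  /* illustrative queue, created with create_singlethread_workqueue() at init (not shown) */
  static struct workqueue_struct *stripe_ops_wq;

  /* runs in process context, outside the stripe spinlock */
  static void stripe_ops_handler(void *data)
  {
          struct stripe_head_sketch *sh = data;

          /* perform the requested stripe operations here, then
           * re-take the lock only to update the stripe's state */
          sh->ops_pending = 0;    /* placeholder: mark the requested operations done */
  }

  static void stripe_ops_init(struct stripe_head_sketch *sh)
  {
          INIT_WORK(&sh->ops_work, stripe_ops_handler, sh);
  }

  static void stripe_ops_request(struct stripe_head_sketch *sh)
  {
          queue_work(stripe_ops_wq, &sh->ops_work);
  }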

raid5 disaster

2006-05-23 Thread Bruno Seoane
Hi, I had a working raid5 setup with 5 SATA disks, 3 attached to a Promise TX4 and 2 more attached to the mainboard controller. It has been working flawlessly for a long time, but I had to add a SATA card to the machine, so I also upgraded to 2.6.16.16. I don't know if there was some problem with

Re: Does software RAID take advantage of SMP, or 64 bit CPU(s)?

2006-05-23 Thread Adam Talbot
-Neil I was not looking for any direct advantage. It is more a money vs. performance thing. I have an old dual-proc Opteron motherboard. I am going with 64-bit, but it is much cheaper if I just buy a nice single-proc board instead of buying two Opterons for my dual-proc board. If I could get a

Re: improving raid 5 performance

2006-05-23 Thread Neil Brown
On Tuesday May 23, [EMAIL PROTECTED] wrote: > Hello Neil. > > 1. > I have applied the common path according to > http://www.spinics.net/lists/raid/msg11838.html as much as I can. Great. I look forward to seeing the results. > > It looks OK in terms of throughput. > Before I continue to a non c

improving raid 5 performance

2006-05-23 Thread Raz Ben-Jehuda(caro)
Hello Neil. 1. I have applied the common path according to http://www.spinics.net/lists/raid/msg11838.html as much as I can. It looks OK in terms of throughput. Before I continue to a non-common path (step 3), I do not understand raid0_mergeable_bvec entirely. As I understand it, the code checks a
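For what it's worth, as I read raid0_mergeable_bvec it tells bio_add_page() how many more bytes may be merged into a bio before the request would cross a chunk boundary (raid0 cannot split one request across two member disks). A simplified sketch of that arithmetic, with illustrative names and not the kernel source verbatim:

  /*
   * 'sector' is the bio's starting sector on the array, 'bio_sectors' is
   * what the bio already contains, 'chunk_sectors' is the chunk size in
   * sectors and is a power of two.
   */
  static int bytes_until_chunk_boundary(unsigned long long sector,
                                        unsigned int bio_sectors,
                                        unsigned int chunk_sectors)
  {
          /* where the bio currently ends, relative to the start of its chunk */
          int end_offset = (int)(sector & (chunk_sectors - 1)) + (int)bio_sectors;
          int remaining = (int)chunk_sectors - end_offset;

          if (remaining < 0)
                  remaining = 0;  /* already at or past the boundary: accept nothing more */
          return remaining << 9;  /* sectors -> bytes */
  }

If I recall the 2.6.16 code correctly, the real function also returns the full segment length when the bio is still empty, so that at least one page can always be added.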