Re: [BUG] Raid1/5 over iSCSI trouble

2007-10-24 Thread David Miller
From: Dan Williams [EMAIL PROTECTED]
Date: Wed, 24 Oct 2007 16:49:28 -0700

 Hopefully it is as painless to run on sparc as it is on IA:
 
 opcontrol --start --vmlinux=/path/to/vmlinux
 wait
 opcontrol --stop
 opreport --image-path=/lib/modules/`uname -r` -l

It is painless; I use it all the time.

The only caveat is to make sure that /path/to/vmlinux points at the
unstripped kernel image.  The images installed under /boot/ are
usually stripped and thus not suitable for profiling.
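
A quick way to check is file(1), which reports whether an ELF image
still carries its symbols (illustrative output; the details vary by
arch and toolchain):

$ file /path/to/vmlinux
vmlinux: ELF 64-bit MSB executable, SPARC V9, ..., not stripped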


Re: RAID6 mdadm --grow bug?

2007-09-13 Thread David Miller

Neil,

On RHEL5 the kernel is 2.6.18-8.1.8.  On Ubuntu 7.04 the kernel is
2.6.20-16.  Someone on the Ars Technica forums wrote that they see the
same thing on Debian etch running kernel 2.6.18.  Below is a messages
log from the RHEL5 system; I have only included the section covering
creating the RAID6, adding a spare, and trying to grow it.  There is a
one-line error when I run the mdadm --grow command: "md: couldn't
update array info. -22" (-22 is -EINVAL).


md: bind<loop1>
md: bind<loop2>
md: bind<loop3>
md: bind<loop4>
md: md0: raid array is not clean -- starting background reconstruction
raid5: device loop4 operational as raid disk 3
raid5: device loop3 operational as raid disk 2
raid5: device loop2 operational as raid disk 1
raid5: device loop1 operational as raid disk 0
raid5: allocated 4204kB for md0
raid5: raid level 6 set md0 active with 4 out of 4 devices, algorithm 2
RAID5 conf printout:
 --- rd:4 wd:4 fd:0
 disk 0, o:1, dev:loop1
 disk 1, o:1, dev:loop2
 disk 2, o:1, dev:loop3
 disk 3, o:1, dev:loop4
md: syncing RAID array md0
md: minimum _guaranteed_ reconstruction speed: 1000 KB/sec/disc.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for reconstruction.
md: using 128k window, over a total of 102336 blocks.
md: md0: sync done.
RAID5 conf printout:
 --- rd:4 wd:4 fd:0
 disk 0, o:1, dev:loop1
 disk 1, o:1, dev:loop2
 disk 2, o:1, dev:loop3
 disk 3, o:1, dev:loop4
md: bind<loop5>
md: couldn't update array info. -22

David.



On Sep 13, 2007, at 3:52 AM, Neil Brown wrote:


On Wednesday September 12, [EMAIL PROTECTED] wrote:



Problem:

The mdadm --grow command fails when trying to add a disk to a RAID6.


..


So far I have replicated this problem on RHEL5 and Ubuntu 7.04
running the latest official updates and patches.  I have even tried
it with the latest version of mdadm, 2.6.3, under RHEL5.  RHEL5 uses
version 2.5.4.


You don't say what kernel version you are using (as I don't use RHEL5
or Ubuntu, I don't know what 'latest' means).

If it is 2.6.23-rcX, then it is a known problem that should be fixed
in the next -rc.  If it is something else... I need details.

Also, any kernel message (run 'dmesg') might be helpful.

NeilBrown




RAID6 mdadm --grow bug?

2007-09-12 Thread David Miller



Problem:

The mdadm --grow command fails when trying to add a disk to a RAID6.

The man page says it can do this.

GROW MODE
   The GROW mode is used for changing the size or shape of an active
   array.  For this to work, the kernel must support the necessary
   change.  Various types of growth are being added during 2.6
   development, including restructuring a raid5 array to have more
   active devices.

   Currently the only support available is to

   ·   change the size attribute for RAID1, RAID5 and RAID6.

   ·   increase the raid-disks attribute of RAID1, RAID5, and RAID6.

   ·   add a write-intent bitmap to any array which supports these
       bitmaps, or remove a write-intent bitmap from such an array.


So far I have replicated this problem on RHEL5 and Ubuntu 7.04
running the latest official updates and patches.  I have even tried
it with the latest version of mdadm, 2.6.3, under RHEL5.  RHEL5 uses
version 2.5.4.


How to replicate the problem:

You can either use real physical disks or use the loopback device to  
create fake disks.


Here are the steps using the loopback method as root.

cd /tmp
dd if=/dev/zero of=rd1 bs=10240 count=10240
cp rd1 rd2; cp rd1 rd3; cp rd1 rd4; cp rd1 rd5
losetup /dev/loop1 rd1; losetup /dev/loop2 rd2; losetup /dev/loop3 rd3
losetup /dev/loop4 rd4; losetup /dev/loop5 rd5
mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 \
    /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4


At this point wait a minute while the raid is being built.
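
Rather than guessing at the timing, you can wait for the initial
resync to finish explicitly (a sketch; --wait only exists in newer
mdadm releases, so fall back to polling /proc/mdstat if needed):

mdadm --wait /dev/md0
# or: while grep -q resync /proc/mdstat; do sleep 1; done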

mdadm --add /dev/md0 /dev/loop5
mdadm --grow /dev/md0 --raid-devices=5

You should get the following error:

mdadm: Need to backup 384K of critical section..
mdadm: Cannot set device size/shape for /dev/md0: Invalid argument
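
To see which system call the kernel is rejecting, you can run the
grow step under strace (a sketch; assumes strace is installed):

strace -e ioctl mdadm --grow /dev/md0 --raid-devices=5

The last ioctl on /dev/md0 should fail with EINVAL, which is the
"Invalid argument" that mdadm reports.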

How to clean up

mdadm --stop /dev/md0
mdadm --remove /dev/md0
losetup -d /dev/loop1; losetup -d /dev/loop2; losetup -d /dev/loop3
losetup -d /dev/loop4; losetup -d /dev/loop5

rm rd1 rd2 rd3 rd4 rd5

David.


Re: raid10 kernel panic on sparc64

2007-04-12 Thread David Miller
From: Jan Engelhardt [EMAIL PROTECTED]
Date: Mon, 2 Apr 2007 02:15:57 +0200 (MEST)

 Kernel is kernel-smp-2.6.16-1.2128sp4.sparc64.rpm from Aurora Corona.
 Perhaps it helps, otherwise hold your breath until I reproduce it.

Jan, if you can reproduce this with the current 2.6.20 vanilla
kernel I'd be very interested in a full trace so that I can
try to fix this.

With the combination of an old kernel and only part of the
crash trace, there isn't much I can do with this report.


Re: raid10 kernel panic on sparc64

2007-04-01 Thread David Miller
From: Jan Engelhardt [EMAIL PROTECTED]
Date: Mon, 2 Apr 2007 02:15:57 +0200 (MEST)

 just when I did
 # mdadm -C /dev/md2 -b internal -e 1.0 -l 10 -n 4 /dev/sd[cdef]4
 (created)
 # mdadm -D /dev/md2
 Killed
 
 dmesg filled up with a kernel oops. A few seconds later, the box
 locked solid. Since I was only in by ssh and there is not (yet) any
 possibility to reset it remotely, this is all I can give right now,
 the last 80x25 screen:

Unfortunately the beginning of the OOPS is the most important part:
it says where exactly the kernel died.  The rest of the log you
showed only gives half the registers and the tail of the call trace.

Please try to capture the whole thing.

Please also provide hardware type information, which you should
include in any bug report like this.
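
If the box locks solid before you can save the log, one way to
capture the whole oops is to log the console over the network with
netconsole before reproducing the crash (a sketch; the interface,
addresses and port are placeholders for your setup):

# on the crashing box: send console output to 192.168.0.2:6666
modprobe netconsole netconsole=@/eth0,6666@192.168.0.2/

# on another box: collect the UDP stream
nc -u -l -p 6666 | tee oops.log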


Re: BLK_DEV_MD with CONFIG_NET

2007-03-20 Thread David Miller
From: Randy Dunlap [EMAIL PROTECTED]
Date: Tue, 20 Mar 2007 20:05:38 -0700

 Build a kernel with CONFIG_NET=n and CONFIG_BLK_DEV_MD=m.
 Unless csum_partial() is built and kept by some arch Makefile,
 the result is:
 ERROR: csum_partial [drivers/md/md-mod.ko] undefined!
 make[1]: *** [__modpost] Error 1
 make: *** [modules] Error 2
 
 
 Any suggested solutions?

Anything which is ever exported to modules, which ought to be the
situation in this case, should be obj-y not lib-y, right?
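
In kbuild terms that would be a change along these lines in whichever
arch lib Makefile builds the checksum helpers (a hypothetical
fragment; the file and object names vary by arch):

# arch/<arch>/lib/Makefile
#
# lib-y objects go into lib.a and are dropped by the linker when no
# built-in code references them, so an EXPORT_SYMBOL used only by
# modules (csum_partial here) silently disappears:
#
#   lib-y += checksum.o
#
# Building it obj-y links it unconditionally and keeps the export:
obj-y += checksum.o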