raidreconf aborted after being almost done

2005-03-10 Thread yejj8xgg
I tried to add a 6th disk to a RAID-5 with raidreconf 0.1.2

When it was almost done, raidreconf aborted with this error message:

raid5_map_global_to_local: disk 0 block out of range: 2442004 (2442004)
gblock = 7326012
aborted

After searching the web, I believe this is due to different disk sizes. Because I
use disks from different vendors and of different types, with different geometries,
it is not possible to make the partitions exactly the same size. They match as
closely as possible, but some always have a different number of blocks.
It would be great if raidreconf complained about the differing disk sizes and
aborted before messing up the disks.
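A quick sanity check before growing (a sketch only; the device name below is just an example) is to compare the sizes the kernel reports and make sure the new member is at least as large as every existing one:

  cat /proc/partitions          # compare the "#blocks" column across the RAID members
  blockdev --getsize /dev/sdf1  # size of one member in 512-byte sectors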

Is there any way I can recover my RAID device? I tried raidstart on it, but that
only started the old 5-disk setup without including the new disk. How do I start
the array with the 6th disk included?

Regards
Klaus


Re: Problem with auto-assembly on Itanium

2005-03-10 Thread Jimmy Hedman
On Wed, 2005-03-09 at 17:43 +0100, Luca Berra wrote:
 On Wed, Mar 09, 2005 at 11:28:48AM +0100, Jimmy Hedman wrote:
 Is there any way i can make this work? Could it be doable with mdadm in
 an initrd?
 
 mdassemble was devised for this purpose.
 
 create an /etc/mdadm.conf with
 echo DEVICE partitions > /etc/mdadm.conf
 /sbin/mdadm -D -b /dev/md0 | grep '^ARRAY' >> /etc/mdadm.conf
 
 copy the mdadm.conf and mdassemble to initrd
 make linuxrc run mdassemble.
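A minimal linuxrc along those lines might look like this (a sketch only; the shell and paths are assumptions, not from the original mail):

  #!/bin/sh
  # mount /proc so mdassemble can read /proc/partitions ("DEVICE partitions")
  mount -t proc proc /proc
  # assemble every array listed in the initrd's /etc/mdadm.conf
  /sbin/mdassemble
  umount /proc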

So there is no way of doing it the same way i386 does it, i.e. scanning
the partitions and assembling the RAID by itself? Is this a bug in the
Itanium (GPT partition scheme) support, or is it intentional?

// Jimmy








Re: raidreconf aborted after being almost done

2005-03-10 Thread Frank Wittig
[EMAIL PROTECTED] wrote:
After searching the web, I believe this is due to different disk sizes. Because I
use disks from different vendors and of different types, with different geometries,
it is not possible to make the partitions exactly the same size. They match as
closely as possible, but some always have a different number of blocks.
AFAIK the cylinder size differs between disks with an unequal number
of heads, so partitions cannot end on the same boundaries. I had this problem
with some identical drives (some with 16 heads and some with 255).
It is possible to set these parameters with fdisk so that drives have an
equal geometry or (in the case of different models) a comparable geometry,
which allows you to create partitions of exactly the same size.

fdisk --help doesn't mention this possibility, but its manpage does.
Run fdisk with

fdisk -b sectorsize -C cyls -H heads -S sects device

and alter the partition table, then exit and write the changes.
After that your disk has a new geometry.
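For example, to force the common 255-head/63-sector translation onto a drive that reports 16 heads (a sketch; the device name and cylinder count are assumptions, pick values that match your other disks):

  fdisk -C 9729 -H 255 -S 63 /dev/hdc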
Greetings,
Frank




Re: md Grow for Raid 5

2005-03-10 Thread David Greaves
Neil Brown wrote:
Growing a raid5 or raid6 by adding another drive is conceptually
possible to do while the array is online, but I have no definite
plans to do this (I would like to).  Growing a raid5 into a raid6
would also be useful.
These require moving lots of data around, and need to be able to cope
with drive failure and system crash: a fun project...
 

EVMS has this already.
It works and is supported (whereas I didn't think raidreconf was).
It would be nice to move the EVMS raid5 extension code into the core md.
FYI, I used EVMS briefly and found it to be an excellent toolset. It's a 
teeny bit rough and a bit OTT for a personal server though so I'm 
sticking with md/lvm2 for now :)

David


shuffled disks by mistake

2005-03-10 Thread Max Waterman
Hi,
I have 6 WD800Jb disk drives. I used 4 of them in a RAID5 (using the 
whole disk - no partitions) array.

I have mixed them all up, and now want to get some data off the array.
How best to find out which drives were in the array?
Here are the partition tables (obtained using fdisk on OS X):
WCAHL6712963.txt
Disk: /dev/rdisk2   geometry: 9729/255/63 [156301488 sectors]
Signature: 0x0
 Starting   Ending
 #: id  cyl  hd sec -  cyl  hd sec [ start -   size]

 1: 000   0   0 -0   0   0 [ 0 -  0] unused
 2: 000   0   0 -0   0   0 [ 0 -  0] unused
 3: 000   0   0 -0   0   0 [ 0 -  0] unused
 4: 000   0   0 -0   0   0 [ 0 -  0] unused
WCAHL6713265.txt
Disk: /dev/rdisk2   geometry: 9729/255/63 [156301488 sectors]
Signature: 0x6972
 Starting   Ending
 #: id  cyl  hd sec -  cyl  hd sec [ start -   size]

 1: 000   0   0 -0   0   0 [ 0 -  0] unused
 2: 69 1023  53  45 - -108478 -118  -1 [1210083443 - 1342177348] Novell 

 3: 000   0   0 -0   0   0 [ 0 -  0] unused
 4: 000   0   0 -0   0   0 [ 0 -  0] unused
WCAHL6727415.txt
Disk: /dev/rdisk2   geometry: 9729/255/63 [156301488 sectors]
Signature: 0x0
 Starting   Ending
 #: id  cyl  hd sec -  cyl  hd sec [ start -   size]

 1: 000   0   0 -0   0   0 [ 0 -  0] unused
 2: 000   0   0 -0   0   0 [ 0 -  0] unused
 3: 000   0   0 -0   0   0 [ 0 -  0] unused
 4: 000   0   0 -0   0   0 [ 0 -  0] unused
WCAHL6731043.txt
Disk: /dev/rdisk2   geometry: 9729/255/63 [156301488 sectors]
Signature: 0x6972
 Starting   Ending
 #: id  cyl  hd sec -  cyl  hd sec [ start -   size]

 1: 000   0   0 -0   0   0 [ 0 -  0] unused
 2: 69 1023  53  45 - -108478 -118  -1 [1210083443 - 1342177348] Novell 

 3: 000   0   0 -0   0   0 [ 0 -  0] unused
 4: 000   0   0 -0   0   0 [ 0 -  0] unused
WCAJ93156707.txt
Disk: /dev/rdisk2   geometry: 9729/255/63 [156301488 sectors]
Signature: 0xAA55
 Starting   Ending
 #: id  cyl  hd sec -  cyl  hd sec [ start -   size]

*1: 83    0   1   1 -   12 254  63 [63 -     208782] Linux files*
 2: 8E   13   0   1 - 1023 254  63 [208845 -  156087540] Unknown ID
 3: 000   0   0 -0   0   0 [ 0 -  0] unused
 4: 000   0   0 -0   0   0 [ 0 -  0] unused
WMA8E2951092.txt
Disk: /dev/rdisk2   geometry: 9729/255/63 [156301488 sectors]
Signature: 0x0
 Starting   Ending
 #: id  cyl  hd sec -  cyl  hd sec [ start -   size]

 1: 000   0   0 -0   0   0 [ 0 -  0] unused
 2: 000   0   0 -0   0   0 [ 0 -  0] unused
 3: 000   0   0 -0   0   0 [ 0 -  0] unused
 4: 000   0   0 -0   0   0 [ 0 -  0] unused
Max.


Re: shuffled disks by mistake

2005-03-10 Thread seth vidal
On Thu, 2005-03-10 at 22:17 +0800, Max Waterman wrote:
 Hi,
 
 I have 6 WD800Jb disk drives. I used 4 of them in a RAID5 (using the 
 whole disk - no partitions) array.
 
 I have mixed them all up, and now want to get some data off the array.
 
 How best to find out which drives were in the array?
 
 Here are the partition tables (obtained using fdisk on OS X):

put the drives in a Linux machine and run:

mdadm -E /dev/drive#

it should tell you if there is an md superblock on that drive.
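A sketch for checking all six drives in one pass (the device names are assumptions; adjust to how the drives show up on your system):

  for d in /dev/sd[a-f]; do
      echo "=== $d ==="
      mdadm -E "$d" | grep -E 'UUID|Raid Devices|this'
  done

The four former members will report the same array UUID, and the "this" line shows which slot (RaidDevice number) each disk held.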

-sv




Problems with Linux RAID in kernel 2.6

2005-03-10 Thread Jesús Rojo Martínez
Hi,

  I have many problems with RAID in kernel 2.6.10.

  First of all, I have md, raid1, ... built into the kernel, persistent superblocks
on the RAIDs, and Linux RAID autodetect as the partition type. Moreover,
I use an initrd. However, when the kernel boots, it doesn't recognize
the RAID disks:

md: raid1 personality registered as nr 3
md: md driver 0.90.1 MAX_MD_DEVS=256, MD_SB_DISKS=27
[...]
md: Autodetecting RAID arrays.
md: autorun ...
md: ... autorun DONE.

  And I have four RAID devices (/dev/md[1-4]) built on /dev/sdb[1-4] and
/dev/sdc[1-4].


  Secondly, I try to bring up the RAID devices manually, and most of the time
it fails:

centralmad:~# cat /proc/mdstat
Personalities : [raid1]
unused devices: <none>
centralmad:~#
centralmad:~# raidstart /dev/md2
/dev/md2: Invalid argument            <-- Why does it say that?
centralmad:~# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdc2[0] sdb2[1]    <-- although it seems to run OK
      14651200 blocks [2/2] [UU]

unused devices: <none>


  And dmesg says:

md: raidstart(pid 2944) used deprecated START_ARRAY ioctl. This will not   <-- !!!
be supported beyond 2.6                                                    <-- !!!
md: could not bd_claim sda2.             <-- I have a «failed» disk
md: autostart failed!                    <-- !!! Is it because of the failed disk?
md: raidstart(pid 2944) used deprecated START_ARRAY ioctl. This will not
be supported beyond 2.6
md: autorun ...
md: considering sdc2 ...
md:  adding sdc2 ...
md:  adding sdb2 ...
md: created md2
md: bind<sdb2>
md: bind<sdc2>
md: running: <sdc2><sdb2>
raid1: raid set md2 active with 2 out of 2 mirrors
md: ... autorun DONE.

  Maybe raidstart needs to be changed to use a different ioctl. I have raidtools2
version 1.00.3-17 (the Debian package in Sarge). An strace of that command
shows:

[...]
open("/dev/md2", O_RDWR)                = 3
fstat64(3, {st_mode=S_IFBLK|0660, st_rdev=makedev(9, 2), ...}) = 0
ioctl(3, 0x800c0910, 0xbfffefb0)        = 0
fstat64(3, {st_mode=S_IFBLK|0660, st_rdev=makedev(9, 2), ...}) = 0
ioctl(3, 0x800c0910, 0xb070)            = 0
ioctl(3, 0x400c0930, 0xb100)            = -1 EINVAL (Invalid argument)
write(2, "mdadm: failed to run array /dev/"..., 54mdadm: failed to run
array /dev/md2: Invalid argument
) = 54
close(3)                                = 0
exit_group(1)                           = ?


  raidstop seems to have no problems:

md: md2 stopped.
md: unbind<sdc2>
md: export_rdev(sdc2)
md: unbind<sdb2>
md: export_rdev(sdb2)

  And it also fails with mdadm:

centralmad:~# mdadm -R /dev/md2 
mdadm: failed to run array /dev/md2: Invalid argument
centralmad:~# cat /proc/mdstat 
Personalities : [raid1] 
unused devices: <none>

  Moreover, dmesg says the md driver fails:

md: bug in file drivers/md/md.c, line 1514

md: **********************************
md: * <COMPLETE RAID STATE PRINTOUT> *
md: **********************************
md2:
md0:
md: **********************************


  That line is in the function «static int do_md_run(mddev_t * mddev)»,
and the code that produces the bug is:

if (list_empty(&mddev->disks)) {
        MD_BUG();
        return -EINVAL;
}


  Again, with «mdadm -S /dev/md2» there are no problems stopping the
RAID.


  Here is some more information about my system. It is Debian Sarge
with kernel 2.6, RAID 1 and SATA disks.

centralmad:~# mdadm -E /dev/sdb2 
/dev/sdb2:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 8c51d044:cb84e69a:64968ecd:2e36133c
  Creation Time : Sun Mar  6 16:11:00 2005
     Raid Level : raid1
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 2

    Update Time : Mon Mar  7 17:59:30 2005
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 1906699c - correct
         Events : 0.1491


      Number   Major   Minor   RaidDevice State
this     1       8       18        1      active sync   /dev/sdb2

   0     0       8       34        0      active sync   /dev/sdc2
   1     1       8       18        1      active sync   /dev/sdb2



  Why do these problems occur, and how can they be solved?

  Thanks a lot. I will await your response.


  Regards,

-- 

--- Jesús Rojo Martínez. ---




Re: now on to tuning....

2005-03-10 Thread Derek Piper
Hmm..  for me:

 smartctl -A -d ata /dev/sda

On my work machine with Debian Sarge:

smartctl version 5.32 Copyright (C) 2002-4 Bruce Allen
Home page is http://smartmontools.sourceforge.net/

Smartctl: Device Read Identity Failed (not an ATA/ATAPI device)

A mandatory SMART command failed: exiting. To continue, add one or
more '-T permissive' options.


Did you apply the libata patch? I saw that here:

http://smartmontools.sourceforge.net/#testinghelp

I'm on kernel 2.6.10 and haven't applied any patches... maybe it's
included in 2.6.11 now, or it's a difference between smartctl 5.32 and 5.33?

The drives I have are on an Intel ICH5 SATA controller. I am doing a
few RAIDed partitions between a couple of 120GB drives since I
reinstalled my work machine a few weeks ago. I think I'm using libata
(the option marked as 'conflicting' with it isn't enabled in my
kernel). Any thoughts?

Derek

On Wed, 9 Mar 2005 11:53:11 +0100, [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:
 Good point about maxing out the pci bus... - I already use the nForce for 
 mirrored boot drives, so that's not an option. The IDE controllers are empty 
 at the moment (save for a DVD drive); I will give this a thought.
 
 Thanks for the feedback,
 
 -P
 
 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] Behalf Of Nicola Fankhauser
 Sent: Wednesday, March 09, 2005 10:48 AM
 To: linux-raid@vger.kernel.org
 Subject: Re: now on to tuning
 
 hi peter
 
 [EMAIL PROTECTED] wrote:
  I have been lurking for a while. I recently put together a raid 5
  system (Asus K8NE, SiI 3114, 2.6.8.1 kernel) with 4 300GB SATA Seagate
  drives (a lot smaller than the bulk of what seems to be on this
  list!). Currently this is used for video and mp3 storage, as
  Reiser on LVM2.
 
 beware that LVM2 _can_ affect your performance. I too believed that the
 concept of dynamic drives is good, but I experienced a performance hit
 of about 50% (especially in sequential reads).
 
 see my blog entry describing how I built my 2TB file-server at
 http://variant.ch/phpwiki/WikiBlog/2005-02-27 for some numbers and more
 explanation.
 
 the K8NE has the same SiI 3114 controller as the board I used; it is
 connected via a 33 MHz, 32-bit PCI bus and maxes out at 133 MiB/s, so for
 maximum performance you might want to connect only two drives to this
 controller and the other two to the nForce3 chipset SATA ports.
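 [For reference, the arithmetic behind that ceiling: 33 MHz × 4 bytes per
 transfer ≈ 133 MB/s of theoretical peak bandwidth, shared by every device
 on that PCI bus.]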
 
  Bonnie++ to test, but with which parameters ?
 
 normally it's enough to specify a test file larger than (e.g. twice) the
 memory capacity of the machine you are testing. for a machine with 1GiB RAM:
 
 # bonnie++ -s 2gb {other options}
 
 you might also want to specify the fast option, which skips per-character
 operations (which are quite useless to test IMHO):
 
 # bonnie++ -f {other options}
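 Combining the two for a 1 GiB machine (a sketch; the target directory and
 user are assumptions):
 
 # bonnie++ -d /mnt/raid -s 2gb -f -u nobody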
 
 HTH
 nicola
 


-- 
Derek Piper - [EMAIL PROTECTED]
http://doofer.org/


Re: Problem with auto-assembly on Itanium

2005-03-10 Thread Luca Berra
On Thu, Mar 10, 2005 at 11:03:44AM +0100, Jimmy Hedman wrote:
On Wed, 2005-03-09 at 17:43 +0100, Luca Berra wrote:
On Wed, Mar 09, 2005 at 11:28:48AM +0100, Jimmy Hedman wrote:
Is there any way i can make this work? Could it be doable with mdadm in
an initrd?

mdassemble was devised for this purpose.
create an /etc/mdadm.conf with
echo DEVICE partitions > /etc/mdadm.conf
/sbin/mdadm -D -b /dev/md0 | grep '^ARRAY' >> /etc/mdadm.conf
copy the mdadm.conf and mdassemble to initrd
make linuxrc run mdassemble.
So there is no way of doing it the same way i386 does it, i.e. scanning
the partitions and assembling the RAID by itself? Is this a bug in the
Itanium (GPT partition scheme) support, or is it intentional?
if you mean the in-kernel autodetect junk, you should just be happy it
does not work on your system, so you are not tempted to use it.
even on i386 it is badly broken, and I won't return to the subject;
it has been discussed on this list to the point of boredom.
L.
btw. you don't need to CC me, I read the list.
L.
--
Luca Berra -- [EMAIL PROTECTED]
        Communication Media & Services S.r.l.
 /\
 \ /   ASCII RIBBON CAMPAIGN
  X    AGAINST HTML MAIL
 / \


Re: Convert raid5 to raid1?

2005-03-10 Thread Frank Wittig
John McMonagle wrote:
Just wondering what happens to the md sequence when I remove the original
raid arrays?
When I'm done will I have md0, md1 and md2, or md2, md3 and md4?
They will have the name you entered when you created the array.
After removing one array from the system, all remaining arrays will still
have their original device number after a reboot.

greetings,
frank


Re: Convert raid5 to raid1?

2005-03-10 Thread Brad Campbell
John McMonagle wrote:
Was planning on adding a hot spare to my 3-disk raid5 array and was
thinking that if I go to 4 drives I would be better off with 2 raid1 arrays,
considering the current state of raid5.
I just wonder about the comment "considering the current state of raid5".
What might be wrong with raid5 currently? I certainly know a number of
people (me included) who run several large raid-5 arrays and don't have
any problems.

Brad
--
Human beings, who are almost unique in having the ability
to learn from the experience of others, are also remarkable
for their apparent disinclination to do so. -- Douglas Adams


RE: Convert raid5 to raid1?

2005-03-10 Thread Guy
The only problem I have is related to bad blocks.  This problem is common to
all RAID types.  RAID5 is more likely to have problems.

Guy

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Brad Campbell
Sent: Thursday, March 10, 2005 6:04 PM
To: John McMonagle
Cc: linux-raid@vger.kernel.org
Subject: Re: Convert raid5 to raid1?

John McMonagle wrote:
 Was planning on adding a hot spare to my 3-disk raid5 array and was
 thinking that if I go to 4 drives I would be better off with 2 raid1 arrays,
 considering the current state of raid5.

I just wonder about the comment "considering the current state of raid5".
What might be wrong with raid5 currently? I certainly know a number of
people (me included) who run several large raid-5 arrays and don't have
any problems.

Brad
-- 
Human beings, who are almost unique in having the ability
to learn from the experience of others, are also remarkable
for their apparent disinclination to do so. -- Douglas Adams


Re: Problems with Linux RAID in kernel 2.6

2005-03-10 Thread Neil Brown
On Thursday March 10, [EMAIL PROTECTED] wrote:
 Hi,
 
   I have many problems with RAID in kernel 2.6.10.
..
   And dmesg says:
 
 md: raidstart(pid 2944) used deprecated START_ARRAY ioctl. This will not   <-- !!!
 be supported beyond 2.6                                                    <-- !!!

Take the hint.  Don't use 'raidstart'.  It seems to work, but it will
fail you when it really counts.  In fact, I think it is failing for
you now.

Use mdadm to assemble your arrays.

 
   And it also fails with mdadm:
 
 centralmad:~# mdadm -R /dev/md2 
 mdadm: failed to run array /dev/md2: Invalid argument

You are using mdadm wrongly.  
You want something like:
  mdadm --assemble /dev/md2  /dev/sdb2 /dev/sdc2
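If you want the arrays to come up from a config file instead (a sketch; the paths follow the usual convention and are not from Neil's mail):

  echo 'DEVICE partitions' > /etc/mdadm.conf
  mdadm --detail --brief /dev/md2 >> /etc/mdadm.conf   # record the array while it is assembled
  mdadm --assemble --scan                              # later assembles everything listed in the file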




 centralmad:~# cat /proc/mdstat 
 Personalities : [raid1] 
 unused devices: <none>
 
   Moreover, dmesg says the md driver fails:
 
 md: bug in file drivers/md/md.c, line 1514

This is a (rather unhelpful) way of saying that you tried to start
an array which contained no devices.

NeilBrown