Re: Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault - Howto with screen shots

2010-06-20 Thread Tom H
On Fri, Jun 18, 2010 at 9:11 AM, Huang, Tao deb...@huangtao.me wrote:
 On Fri, Jun 18, 2010 at 6:02 PM, Tom H tomh0...@gmail.com wrote:
 [snip]
 mdadm assembles an array according to data in the superblock so it
 shouldn't matter whether the kernel recognizes sda and sdb as sdb and
 sda respectively should you plug them in differently.

 so they're recognized by the data in the superblock,
 meaning that even the UUID doesn't matter?

UUIDs are held in superblocks.
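
For anyone who wants to see this for themselves, a quick check (just a
sketch; /dev/sda1 and /dev/md0 are example names, substitute your own):

# Print the md superblock stored on one member partition, including the
# Array UUID that mdadm matches members by.
mdadm --examine /dev/sda1

# Print the assembled array's view of the same UUID, state and members.
mdadm --detail /dev/md0

# Emit "ARRAY ... UUID=..." lines keyed by UUID, the form used in
# /etc/mdadm/mdadm.conf.
mdadm --detail --scan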





Re: Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault - Howto with screen shots

2010-06-18 Thread Michal
On 17/06/2010 14:08, Huang, Tao wrote:
 On Thu, Jun 17, 2010 at 4:17 PM, Michal mic...@ionic.co.uk wrote:
 This is a better way than disconnecting the drive and checking which
 drive was disconnected like I did, but I would still put a very easy to
 read label on the drive saying /dev/sdX. It would be far easier than
 checking a long serial number, especially if it's hard to read and you'd
 need to take each HDD out to check :)
 
 I think the allocation of /dev/sdX depends on the order in which the
 drives are plugged into the machine,
 so it can change when the hardware is reconfigured, which would make
 your labels useless.
 
 Can someone confirm this?
 
 
 Tao
 
 

But how can this be correct when each RAID array is linked to specific
HDD partitions,


# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md3 : active raid1 sda2[0] sdb2[1]
  716796096 blocks [2/2] [UU]

md2 : active raid1 sda5[0] sdb5[1]
  51199040 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
  513984 blocks [2/2] [UU]

md1 : active raid1 sda3[0] sdb3[1]
  102398208 blocks [2/2] [UU]

unused devices: <none>


for example?





Re: Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault - Howto with screen shots

2010-06-18 Thread Tom H
On Fri, Jun 18, 2010 at 4:27 AM, Michal mic...@ionic.co.uk wrote:
 On 17/06/2010 14:08, Huang, Tao wrote:
 On Thu, Jun 17, 2010 at 4:17 PM, Michal mic...@ionic.co.uk wrote:
 This is a better way than disconnecting the drive and checking which
 drive was disconnected like I did, but I would still put a very easy to
 read label on the drive saying /dev/sdX. It would be far easier than
 checking a long serial number, especially if it's hard to read and you'd
 need to take each HDD out to check :)

 I think the allocation of /dev/sdX depends on the order in which the
 drives are plugged into the machine,
 so it can change when the hardware is reconfigured, which would make
 your labels useless.

 Can someone confirm this?


 Tao



 But how can this be correct when each RAID array is linked to specific
 HDD partitions,


 # cat /proc/mdstat
 Personalities : [raid1] [raid6] [raid5] [raid4]
 md3 : active raid1 sda2[0] sdb2[1]
      716796096 blocks [2/2] [UU]

 md2 : active raid1 sda5[0] sdb5[1]
      51199040 blocks [2/2] [UU]

 md0 : active raid1 sda1[0] sdb1[1]
      513984 blocks [2/2] [UU]

 md1 : active raid1 sda3[0] sdb3[1]
      102398208 blocks [2/2] [UU]

mdadm assembles an array according to data in the superblock so it
shouldn't matter whether the kernel recognizes sda and sdb as sdb and
sda respectively should you plug them in differently.
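
One quick way to convince yourself of this (a sketch only; it assumes the
array is not in use, e.g. not holding the root filesystem):

# Stop the array, then let mdadm reassemble it purely from the
# superblocks it finds while scanning. Members are matched by Array
# UUID, not by whichever sdX names the kernel handed out this boot.
mdadm --stop /dev/md0
mdadm --assemble --scan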





Re: Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault - Howto with screen shots

2010-06-18 Thread Michal
 But how can this be correct when each RAID array is linked to specific
 HDD partitions,


 # cat /proc/mdstat
 Personalities : [raid1] [raid6] [raid5] [raid4]
 md3 : active raid1 sda2[0] sdb2[1]
  716796096 blocks [2/2] [UU]

 md2 : active raid1 sda5[0] sdb5[1]
  51199040 blocks [2/2] [UU]

 md0 : active raid1 sda1[0] sdb1[1]
  513984 blocks [2/2] [UU]

 md1 : active raid1 sda3[0] sdb3[1]
  102398208 blocks [2/2] [UU]
 
 mdadm assembles an array according to data in the superblock so it
 shouldn't matter whether the kernel recognizes sda and sdb as sdb and
 sda respectively should you plug them in differently.
 
 

A good point. Noted.





Re: Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault - Howto with screen shots

2010-06-18 Thread Huang, Tao
On Fri, Jun 18, 2010 at 6:02 PM, Tom H tomh0...@gmail.com wrote:
[snip]
 mdadm assembles an array according to data in the superblock so it
 shouldn't matter whether the kernel recognizes sda and sdb as sdb and
 sda respectively should you plug them in differently.

so they're recognized by the data in the superblock,
meaning that even the UUID doesn't matter?


Tao
--
http://huangtao.me/
http://www.google.com/profiles/UniIsland





Re: Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault - Howto with screen shots

2010-06-18 Thread Rob Owens
On Thu, Jun 17, 2010 at 09:08:41PM +0800, Huang, Tao wrote:
 On Thu, Jun 17, 2010 at 4:17 PM, Michal mic...@ionic.co.uk wrote:
  This is a better way than disconnecting the drive and checking which
  drive was disconnected like I did, but I would still put a very easy to
  read label on the drive saying /dev/sdX. It would be far easier than
  checking a long serial number, especially if it's hard to read and you'd
  need to take each HDD out to check :)
 
 I think the allocation of /dev/sdX depends on the order in which the
 drives are plugged into the machine,
 so it can change when the hardware is reconfigured, which would make
 your labels useless.
 
 Can someone confirm this?
 
I had something like this happen on a Lenny amd64 system.  The drive
identifications (/dev/sdX) switched after I performed a kernel upgrade.
If I booted the old kernel, they were back to normal.  That's when I
learned about UUIDs...
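
For reference, the usual way to make a setup immune to that renaming is to
refer to UUIDs instead of /dev/sdXN wherever possible (a sketch; the fstab
line below uses a made-up UUID):

# Show the filesystem UUIDs that can be used in /etc/fstab.
blkid

# Example /etc/fstab entry keyed by UUID rather than device name:
# UUID=0a1b2c3d-1111-2222-3333-444455556666  /home  ext3  defaults  0  2

# For md arrays, record the array UUIDs in the mdadm config so assembly
# never depends on sdX names.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf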

-Rob





Re: Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault - Howto with screen shots

2010-06-17 Thread Steven

On Wed, June 16, 2010 17:30, Michal wrote:

 Sorry, I really didn't explain myself properly;

 Yes, I mean /dev/sde, and by label I mean get a label machine (or
 something similar) to put a physical label on the drive, like a sticker
 with text saying /dev/sde

 I did this in one machine and simply built my RAID1 array across two
 drives, disconnected a drive, booted back up and checked mdstat to see
 which one was now disconnected and labelled that one, then labelled the
 second one. It's not a brilliant way, I will admit, but it works
 perfectly well. I tested it 3 times (connecting the drive back,
 rebuilding the array, disconnecting the other drive etc.) to really make
 sure I had labelled them correctly.

Ah, now I get it, I had no idea how to know which drive to put the right
label on.

Thanks.


-- 
Rarely do people communicate; they just take turns talking.





Re: Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault - Howto with screen shots

2010-06-17 Thread Michal
On 16/06/2010 19:00, Håkon Alstadheim wrote:
 Steven wrote:
 How to identify which drive has failed in an array?

 I have 6 disks, 4 are used in raid (mdadm), the other 2 contain /boot, /
 and /home.
 /dev/sdc
 /dev/sdd
 /dev/sde
 /dev/sdf
 Each has 1 partition.
 /dev/md0 (raid 1) consists of /dev/sdc1 and /dev/sdd1
 /dev/md1 (raid 1) consists of /dev/sde1 and /dev/sdf1

 If a drive fails, how do I know which drive? This is a desktop system,
 not
 a server.

   
 
 Just do ls -l /dev/disk/by-id/. The disks will have factory labels
 with serial numbers to match.
 

This is a better way than disconnecting the drive and checking which
drive was disconnected like I did, but I would still put a very easy to
read label on the drive saying /dev/sdX. It would be far easier than
checking a long serial number, especially if it's hard to read and you'd
need to take each HDD out to check :)





Re: Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault - Howto with screen shots

2010-06-17 Thread Steven

On Thu, June 17, 2010 10:17, Michal wrote:
 On 16/06/2010 19:00, Håkon Alstadheim wrote:
 Just do ls -l /dev/disk/by-id/. The disks will have factory labels
 with serial numbers to match.


 This is a better way than disconnecting the drive and checking which
 drive was disconnected like I did, but I would still put a very easy to
 read label on the drive saying /dev/sdX. It would be far easier than
 checking a long serial number, especially if it's hard to read and you'd
 need to take each HDD out to check :)

Excellent, thank you both, this seems like the fastest/best way.
Backups and RAID are one thing, but they're both useless if you can't
recover :)

Kind regards,
Steven

PS. I hope I fixed the duplicate mail issue now.


-- 
Rarely do people communicate; they just take turns talking.





Re: Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault - Howto with screen shots

2010-06-17 Thread martin f krafft
also sprach Michal mic...@ionic.co.uk [2010.06.17.1017 +0200]:
 This is a better way than disconnecting the drive and checking which
 drive was disconnected like I did, but I would still put a very easy to
 read label on the drive saying /dev/sdX. It would be far easier than
 checking a long serial number, especially if it's hard to read and you'd
 need to take each HDD out to check :)

Instead, I suggest you stop using /dev/sdX everywhere and only use
/dev/disk/by-id/*. And/or file a bug against the kernel to request
that /proc/mdstat should list the ID.
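
In practice that looks something like this (a sketch; the ata-* name below
is a made-up example, use whatever ls shows for your disks):

# Stable names built from the drive model and serial number.
ls -l /dev/disk/by-id/

# They can be used anywhere a /dev/sdX name would go, e.g. when adding a
# replacement disk's first partition back into an array.
mdadm -a /dev/md0 /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL1234-part1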

-- 
 .''`.   martin f. krafft madd...@d.o  Related projects:
: :'  :  proud Debian developer   http://debiansystem.info
`. `'`   http://people.debian.org/~madduckhttp://vcs-pkg.org
  `-  Debian - when you have better things to do than fixing systems
 
logic is anal sadism: thoughts are forcibly pressed
through a narrow passage.
-- loosely after lacan





Re: Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault - Howto with screen shots

2010-06-17 Thread Huang, Tao
On Thu, Jun 17, 2010 at 4:17 PM, Michal mic...@ionic.co.uk wrote:
 This is a better way than disconnecting the drive and checking which
 drive was disconnected like I did, but I would still put a very easy to
 read label on the drive saying /dev/sdX. It would be far easier than
 checking a long serial number, especially if it's hard to read and you'd
 need to take each HDD out to check :)

I think the allocation of /dev/sdX depends on the order in which the
drives are plugged into the machine,
so it can change when the hardware is reconfigured, which would make
your labels useless.

Can someone confirm this?


Tao





Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault - Howto with screen shots

2010-06-16 Thread Siju George
Hope someone finds this helpful :-)

--Siju

Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault.
=

** Actual terminal output of the steps taken during the rebuild on
10-June-2010 on Debian Lenny (Linux) **


1) Check the partition layout on the current hard disk



srv1:~# fdisk /dev/sda

The number of cylinders for this disk is set to 60801.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xdd6e

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1         122      979933+  fd  Linux raid autodetect
/dev/sda2             123        1338     9767520   fd  Linux raid autodetect
/dev/sda3            1339        2554     9767520   fd  Linux raid autodetect
/dev/sda4            2555       60801   467869027+  fd  Linux raid autodetect

Command (m for help):  quit

srv1:~#



2) Create identical partitions on the new disk using 'fdisk'.



Partition Id should be 'fd' for all RAID partitions. The resulting
layout should look like this:

srv1:~# fdisk /dev/sdb

The number of cylinders for this disk is set to 60801.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xe3a3a447

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         122      979933+  fd  Linux raid autodetect
/dev/sdb2             123        1338     9767520   fd  Linux raid autodetect
/dev/sdb3            1339        2554     9767520   fd  Linux raid autodetect
/dev/sdb4            2555       60801   467869027+  fd  Linux raid autodetect

Command (m for help): q

srv1:~#



3) Check the current RAID status

srv1:~# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sda4[1]
  467868928 blocks [2/1] [_U]

md2 : active raid1 sda3[1]
  9767424 blocks [2/1] [_U]

md1 : active raid1 sda2[1]
  9767424 blocks [2/1] [_U]

md0 : active raid1 sda1[1]
  979840 blocks [2/1] [_U]

unused devices: <none>
srv1:~#
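
Note: in this transcript the failed disk has already been physically
replaced, so each array shows only the surviving member. If a dead member
were still listed at this point, it would first have to be marked faulty
and removed before adding the new partitions; a sketch, not part of the
original session:

srv1:~# mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1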

4) Rebuild the arrays and check the status

srv1:~# mdadm -a /dev/md0 /dev/sdb1
mdadm: added /dev/sdb1
srv1:~# mdadm -a /dev/md1 /dev/sdb2
mdadm: added /dev/sdb2
srv1:~# mdadm -a /dev/md2 /dev/sdb3
mdadm: added /dev/sdb3

srv1:~# mdadm -a /dev/md3 /dev/sdb4
mdadm: added /dev/sdb4

srv1:~# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sdb4[2] sda4[1]
  467868928 blocks [2/1] [_U]
      [>....................]  recovery =  0.0% (285440/467868928)
finish=54.5min speed=142720K/sec

md2 : active raid1 sdb3[0] sda3[1]
  9767424 blocks [2/2] [UU]

md1 : active raid1 sdb2[0] sda2[1]
  9767424 blocks [2/2] [UU]

md0 : active raid1 sdb1[0] sda1[1]
  979840 blocks [2/2] [UU]

unused devices: <none>
srv1:~#
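
The resync can be watched until it completes (not part of the original
transcript, just the usual way to keep an eye on it; both members should
end up "active sync" and the array state "clean"):

srv1:~# watch -n 5 cat /proc/mdstat
srv1:~# mdadm --detail /dev/md3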

5) Install grub on the MBR of new hard disk

srv1:~# grub-install /dev/sdb
Searching for GRUB installation directory ... found: /boot/grub
Installation finished. No error reported.
This is the contents of the device map /boot/grub/device.map.
Check if this is correct or not. If any of the lines is incorrect,
fix it and re-run the script `grub-install'.

(hd0)   /dev/sda
(hd1)   /dev/sdb
srv1:~#
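
As a final check once the resync has finished (again a sketch, not from
the original session): all four arrays should be back to [2/2] [UU], and
the new disk's partitions should carry the same Array UUIDs as their
counterparts on the surviving disk.

srv1:~# cat /proc/mdstat
srv1:~# mdadm --examine /dev/sda1 | grep -i uuid
srv1:~# mdadm --examine /dev/sdb1 | grep -i uuid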





Re: Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault - Howto with screen shots

2010-06-16 Thread martin f krafft
also sprach Siju George sgeorge...@gmail.com [2010.06.16.1313 +0200]:
 2) Create identical partitions on the new disk using 'fdisk'.

sfdisk -d /dev/sda | sfdisk /dev/sdb
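
(For the record: sfdisk -d dumps sda's partition table in a script format
that sfdisk can replay onto sdb, so the new disk gets an identical layout
in one step instead of re-entering it in fdisk. The sfdisk of that era only
understood DOS/MBR partition tables, so this applies to layouts like the
one in the howto, not to GPT disks.)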

-- 
 .''`.   martin f. krafft madd...@d.o  Related projects:
: :'  :  proud Debian developer   http://debiansystem.info
`. `'`   http://people.debian.org/~madduckhttp://vcs-pkg.org
  `-  Debian - when you have better things to do than fixing systems
 
i always choose my friends for their good looks and my enemies for
 their good intellects. man cannot be too careful in his choice of
 enemies.
  -- oscar wilde





Re: Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault - Howto with screen shots

2010-06-16 Thread Siju George
On Wed, Jun 16, 2010 at 4:48 PM, martin f krafft madd...@debian.org wrote:
 also sprach Siju George sgeorge...@gmail.com [2010.06.16.1313 +0200]:
 2) Create identical partitions on the new disk using 'fdisk'.

 sfdisk -d /dev/sda | sfdisk /dev/sdb


oh thanks :-)

I did it manually using fdisk

--Siju





Re: Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault - Howto with screen shots

2010-06-16 Thread martin f krafft
also sprach Siju George sgeorge...@gmail.com [2010.06.16.1322 +0200]:
  sfdisk -d /dev/sda | sfdisk /dev/sdb
 
 oh thanks :-)
 
 I did it manually using fdisk

Manually is for Mac users. ;)

-- 
 .''`.   martin f. krafft madd...@d.o  Related projects:
: :'  :  proud Debian developer   http://debiansystem.info
`. `'`   http://people.debian.org/~madduckhttp://vcs-pkg.org
  `-  Debian - when you have better things to do than fixing systems
 
work like you don't need the money
love like you have never been hurt
dance like there's nobody watching




Re: Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault - Howto with screen shots

2010-06-16 Thread Siju George
On Wed, Jun 16, 2010 at 5:06 PM, martin f krafft madd...@debian.org wrote:
 also sprach Siju George sgeorge...@gmail.com [2010.06.16.1322 +0200]:
  sfdisk -d /dev/sda | sfdisk /dev/sdb

 oh thanks :-)

 I did it manually using fdisk

 Manually is for Mac users. ;)


these days everyone has left Windows and is picking on Mac? :-)





Re: Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault - Howto with screen shots

2010-06-16 Thread martin f krafft
also sprach Siju George sgeorge...@gmail.com [2010.06.16.1402 +0200]:
  Manually is for Mac users. ;)
 
 these days every one has left windows and are picking on Mac ? :-)

Reinstalling is for Windows users.

-- 
 .''`.   martin f. krafft madd...@d.o  Related projects:
: :'  :  proud Debian developer   http://debiansystem.info
`. `'`   http://people.debian.org/~madduckhttp://vcs-pkg.org
  `-  Debian - when you have better things to do than fixing systems
 
the reason the mainstream is thought of as a stream
is because it is so shallow.




Re: Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault - Howto with screen shots

2010-06-16 Thread Steven

On Wed, June 16, 2010 13:13, Siju George wrote:
 Hope someone finds this helpful :-)

 --Siju

 Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault.
 =


Thanks, this might prove useful.
However I do have a question... which might be just as important.

How to identify which drive has failed in an array?

I have 6 disks, 4 are used in raid (mdadm), the other 2 contain /boot, /
and /home.
/dev/sdc
/dev/sdd
/dev/sde
/dev/sdf
Each has 1 partition.
/dev/md0 (raid 1) consists of /dev/sdc1 and /dev/sdd1
/dev/md1 (raid 1) consists of /dev/sde1 and /dev/sdf1

If a drive fails, how do I know which drive? This is a desktop system, not
a server.

-- 
Rarely do people communicate; they just take turns talking.





Re: Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault - Howto with screen shots

2010-06-16 Thread Michal

 
 Thanks, this might prove useful.
 However I do have a question... which might be just as important.
 
 How to identify which drive has failed in an array?
 
 I have 6 disks, 4 are used in raid (mdadm), the other 2 contain /boot, /
 and /home.
 /dev/sdc
 /dev/sdd
 /dev/sde
 /dev/sdf
 Each has 1 partition.
 /dev/md0 (raid 1) consists of /dev/sdc1 and /dev/sdd1
 /dev/md1 (raid 1) consists of /dev/sde1 and /dev/sdf1
 
 If a drive fails, how do I know which drive? This is a desktop system, not
 a server.
 

One way is to label the disks themselves, so you can simply do:

cat /proc/mdstat, which might say /dev/sd3 is down. Open the case, look
for the disk labelled /dev/sde and replace it. If you have LEDs like
servers have (probably not), they can be a fiddle to get working, but
it's possible.





Re: Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault - Howto with screen shots

2010-06-16 Thread Steven

On Wed, June 16, 2010 15:47, Michal wrote:

 One way is to label the disks themselves, so you can simply do:

 cat /proc/mdstat, which might say /dev/sd3 is down. Open the case, look
 for the disk labelled /dev/sde and replace it. If you have LEDs like
 servers have (probably not), they can be a fiddle to get working, but
 it's possible.

No LEDs for drives; it already has them for every PCI slot and
looks like a Christmas tree :)

I think you meant /dev/sde instead of sd3, right? If not, please correct me.
If I'm not mistaken, mdadm will report the broken drive, and then I have
to look for the drive that corresponds to the 4th SATA slot on the
motherboard.
That's part of my issue: can I be sure that the drive connected to port 4
is /dev/sde?
It's not a problem for the other 2 drives, as they differ in capacity,
but these 4 are exactly the same size.

Also how accurate is mdadm in identifying the failed drive?
As there are only 2 in an array, there is only 1 copy of the data to
compare to.

It also seems my last message was sent twice, sorry about that.

-- 
Rarely do people communicate; they just take turns talking.





Re: Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault - Howto with screen shots

2010-06-16 Thread Michal
On 16/06/2010 15:50, Steven wrote:
 
 On Wed, June 16, 2010 15:47, Michal wrote:

 One way is to label the disks themselves, so you can simply do:

 cat /proc/mdstat, which might say /dev/sd3 is down. Open the case, look
 for the disk labelled /dev/sde and replace it. If you have LEDs like
 servers have (probably not), they can be a fiddle to get working, but
 it's possible.

 No LEDs for drives; it already has them for every PCI slot and
 looks like a Christmas tree :)
 
 I think you meant /dev/sde instead of sd3, right? If not, please correct me.
 If I'm not mistaken, mdadm will report the broken drive, and then I have
 to look for the drive that corresponds to the 4th SATA slot on the
 motherboard.
 That's part of my issue: can I be sure that the drive connected to port 4
 is /dev/sde?
 It's not a problem for the other 2 drives, as they differ in capacity,
 but these 4 are exactly the same size.
 
 Also how accurate is mdadm in identifying the failed drive?
 As there are only 2 in an array, there is only 1 copy of the data to
 compare to.
 
 It also seems my last message was sent twice, sorry about that.
 

Sorry, I really didn't explain myself properly;

Yes, I mean /dev/sde, and by label I mean get a label machine (or
something similar) to put a physical label on the drive, like a sticker
with text saying /dev/sde

I did this in one machine and simply built my RAID1 array across two
drives, disconnected a drive, booted back up and checked mdstat to see
which one was now disconnected and labelled that one, then labelled the
second one. It's not a brilliant way, I will admit, but it works
perfectly well. I tested it 3 times (connecting the drive back,
rebuilding the array, disconnecting the other drive etc.) to really make
sure I had labelled them correctly.





Re: Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault - Howto with screen shots

2010-06-16 Thread Bob Weber
Use smartctl from the smartmontools package.  If mdadm (or cat
/proc/mdstat) says that /dev/sdc is at fault, then use smartctl -a /dev/sdc
and it will print out all kinds of info on the drive, including its serial
number, which should be on a sticker on the case of the drive.


The programs included with smartmontools might have warned you of an impending
failure.  I have a SMART long self-test run on my drives twice a week.
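
Something along these lines (a sketch; sdc is just the example device from
above):

# Identity info for the drive, including the serial number printed on
# its label.
smartctl -i /dev/sdc

# Full report: health status, attributes, error and self-test logs.
smartctl -a /dev/sdc

# Start a long self-test in the background; the result shows up in
# "smartctl -l selftest /dev/sdc" once it finishes.
smartctl -t long /dev/sdc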


...Bob

On 06/16/2010 09:32 AM, Steven wrote:

On Wed, June 16, 2010 13:13, Siju George wrote:

Hope someone finds this helpful :-)

--Siju

Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault.
=


Thanks, this might prove useful.
However I do have a question... which might be just as important.

How to identify which drive has failed in an array?

I have 6 disks, 4 are used in raid (mdadm), the other 2 contain /boot, /
and /home.
/dev/sdc
/dev/sdd
/dev/sde
/dev/sdf
Each has 1 partition.
/dev/md0 (raid 1) consists of /dev/sdc1 and /dev/sdd1
/dev/md1 (raid 1) consists of /dev/sde1 and /dev/sdf1

If a drive fails, how do I know which drive? This is a desktop system, not
a server.



Re: Rebuilding RAID 1 Array in Linux with a new hard disk after a disk fault - Howto with screen shots

2010-06-16 Thread Håkon Alstadheim

Steven wrote:

How to identify which drive has failed in an array?

I have 6 disks, 4 are used in raid (mdadm), the other 2 contain /boot, /
and /home.
/dev/sdc
/dev/sdd
/dev/sde
/dev/sdf
Each has 1 partition.
/dev/md0 (raid 1) consists of /dev/sdc1 and /dev/sdd1
/dev/md1 (raid 1) consists of /dev/sde1 and /dev/sdf1

If a drive fails, how do I know which drive? This is a desktop system, not
a server.

  


Just do ls -l /dev/disk/by-id/. The disks will have factory labels 
with serial numbers to match.
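
For example (a sketch; the ata-* names will of course differ per drive):

# The by-id symlinks embed the model and serial number and point at the
# kernel's current sdX node, so this maps serial numbers to device names.
ls -l /dev/disk/by-id/

# Resolve one link back to its /dev/sdX node (placeholder id shown).
readlink -f /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL1234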


--
Håkon Alstadheim / N-7510 Skatval / email:ha...@alstadheim.priv.no
tlf: 74 82 60 27 mob: 47 35 39 38
http://alstadheim.priv.no/hakon/ 
spamtrap: finnesi...@alstadheim.priv.no -- 1 hit  you are out



