Re: Failed Hard Disk... help!

2006-06-10 Thread Ric Wheeler



David M. Strang wrote:


Patrick wrote:


pretty sure smartctl -d ata -a /dev/sdwhatever will tell you the
serial number. (Hopefully the kernel is new enough that it supports
SATA/smart, otherwise you need a kernel patch which won't be any 
better...)
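
For example, to pull just the serial line out (assuming the disk shows
up as /dev/sda; adjust for your device):

   # smartctl -d ata -a /dev/sda | grep -i 'Serial Number'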



Yep... 2.6.15 or better... I need the magical patch =\.

Any other options?


If you have an updated copy of hdparm, you can use it against libata 
SCSI drives to get the serial number:


   # hdparm -V
   hdparm v5.7

   # hdparm -I /dev/sda


   /dev/sda:

   ATA device, with non-removable media
   Model Number:   Maxtor 7L320S0
   Serial Number:  L616D6YH
   Firmware Revision:  BACE1G70
   (and so on)
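
If you only want that one line for a script, something like this should
work as well (untested sketch, same device as above):

   # hdparm -I /dev/sda | grep 'Serial Number'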






RAID5 troubles

2006-06-10 Thread osk
Hi

I have three SATA 250 GB disks with two partitions on each one.
The first partitions of the three disks form a RAID1 array, which is my root
partition; the second partitions form a RAID5 array.
Everything was working fine until the machine hung and I had to power it
off and on. At the next boot the RAID1 array was degraded but running;
mdadm /dev/md0 --add /dev/sdc1 fixed it.

The RAID5 array is still a problem, dmesg says:

md: created md1
md: bind<sda2>
md: bind<sdb2>
md: bind<sdc2>
md: running: <sdc2><sdb2><sda2>
md: kicking non-fresh sdc2 from array!
md: unbind<sdc2>
md: export_rdev(sdc2)
md: md1: raid array is not clean -- starting background reconstruction
raid5: device sdb2 operational as raid disk 0
raid5: device sda2 operational as raid disk 1
raid5: cannot start dirty degraded array for md1
RAID5 conf printout:
 --- rd:3 wd:2 fd:1
 disk 0, o:1, dev:sdb2
 disk 1, o:1, dev:sda2
raid5: failed to run raid set md1
md: pers->run() failed ...
md: do_md_run() returned -5
md: md1 stopped.
md: unbind<sdb2>
md: export_rdev(sdb2)
md: unbind<sda2>
md: export_rdev(sda2)

mdadm --examine for those three partitions:

/dev/sda2:
          Magic : a92b4efc
        Version : 00.90.03
           UUID : 1e721d7b:1ed2152e:1f8e55e3:a881163a
  Creation Time : Fri May 12 21:36:04 2006
     Raid Level : raid5
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 1

    Update Time : Fri Jun  2 23:41:26 2006
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 2
  Spare Devices : 0
       Checksum : 45e5f094 - correct
         Events : 0.67315

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1       8        2        1      active sync   /dev/sda2

   0     0       8       18        0      active sync   /dev/sdb2
   1     1       8        2        1      active sync   /dev/sda2
   2     2       0        0        2      faulty removed

/dev/sdb2:
          Magic : a92b4efc
        Version : 00.90.03
           UUID : 1e721d7b:1ed2152e:1f8e55e3:a881163a
  Creation Time : Fri May 12 21:36:04 2006
     Raid Level : raid5
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 1

    Update Time : Fri Jun  2 23:41:26 2006
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 2
  Spare Devices : 0
       Checksum : 45e5f0a2 - correct
         Events : 0.67315

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8       18        0      active sync   /dev/sdb2

   0     0       8       18        0      active sync   /dev/sdb2
   1     1       8        2        1      active sync   /dev/sda2
   2     2       0        0        2      faulty removed

/dev/sdc2:
          Magic : a92b4efc
        Version : 00.90.03
           UUID : 1e721d7b:1ed2152e:1f8e55e3:a881163a
  Creation Time : Fri May 12 21:36:04 2006
     Raid Level : raid5
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 1

    Update Time : Fri Jun  2 20:36:31 2006
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 45e6cc45 - correct
         Events : 0.67313

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     2       8       34        2      active sync   /dev/sdc2

   0     0       8       18        0      active sync   /dev/sdb2
   1     1       8        2        1      active sync   /dev/sda2
   2     2       8       34        2      active sync   /dev/sdc2


The event count and Update Time on sdc2 (0.67313, Jun  2 20:36) are behind
those on sda2 and sdb2 (0.67315, Jun  2 23:41), which I guess is why it was
kicked as non-fresh. Is there any way to recover?
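
From reading around, I suspect the usual way out is to force-assemble the
array from the two fresh members and then re-add sdc2, something like the
following, but I haven't dared to try it yet:

   # mdadm --stop /dev/md1
   # mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdb2
   # mdadm /dev/md1 --add /dev/sdc2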

I'm using
Kernel: 2.6.16.16
mdadm: v1.12.0

Thanks for your time.

Regards,
Chris