Re: Deleting mdadm RAID arrays

2008-02-08 Thread Marcin Krol
Thursday 07 February 2008 22:35:45 Bill Davidsen wrote:
  As you may remember, I have configured udev to associate /dev/d_* devices
  with serial numbers (to keep them from changing depending on boot module
  loading sequence).

 Why do you care? 

Because /dev/sd* devices get swapped randomly depending on boot module insertion
sequence, as I explained earlier.
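
For what it's worth, udev should already keep stable, serial-number based symlinks
under /dev/disk/by-id/ (assuming the stock persistent-storage rules are installed),
which at least makes it easy to check which physical disk ended up on which sd*
name after a given boot:

% ls -l /dev/disk/by-id/
# each serial-number entry is a symlink to whatever sd* node that disk got this boot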

 If you are using UUID for all the arrays and mounts  
 does this buy you anything? 

This is exactly what is not clear to me: what is it that identifies a
drive/partition as part of the array? The /dev/sd name? The UUID in the
superblock? /dev/d_n?

If it's the UUID, I should be safe regardless of the /dev/sd* designation? Yes or no?

 And more to the point, the first time a  
 drive fails and you replace it, will it cause you a problem? Require 
 maintaining the serial to name data manually?

That's not the problem. I just want my array to be intact.

 I miss the benefit of forcing this instead of just building the 
 information at boot time and dropping it in a file.

I would prefer that, too - if it worked. I was getting both arrays messed up
randomly on boot; messed up in the sense of the arrays being composed of
different /dev/sd devices.


  And I made *damn* sure I zeroed all the superblocks before reassembling 
  the arrays. Yet it still shows the old partitions on those arrays!

 As I noted before, you said you had these on whole devices before, did 
 you zero the superblocks on the whole devices or the partitions? From 
 what I read, it was the partitions.

I tried it both ways, actually (I rebuilt the arrays a few times; it's just that udev
didn't want to associate WD-serialnumber-part1 with /dev/d_1p1 as it was told, and
still claimed it was /dev/d_1).
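
If the old arrays really were built on the whole disks, there may also still be an
old whole-device superblock near the end of each disk (with 0.90 metadata the
superblock sits at the end of the device, so a partition spanning almost the whole
disk can end up seeing it as its own). A minimal sketch of clearing both, assuming
nothing is holding the devices at the time:

% mdadm --zero-superblock /dev/sdb    # old whole-disk superblock, if any
% mdadm --zero-superblock /dev/sdb1   # per-partition superblock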

Regards,
Marcin Krol


Re: Deleting mdadm RAID arrays

2008-02-08 Thread Marcin Krol
Friday 08 February 2008 13:44:18 Bill Davidsen wrote:

  This is exactly what is not clear to me: what is it that identifies a
  drive/partition as part of the array? The /dev/sd name? The UUID in the
  superblock? /dev/d_n?

  If it's the UUID, I should be safe regardless of the /dev/sd* designation? Yes or no?

 Yes, absolutely.

OK, that's what I needed to know. 
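
For the record, then, a minimal sketch of pinning the arrays by UUID instead of by
device name, assuming /etc/mdadm/mdadm.conf is the file being read:

% mdadm --detail --scan
# prints one ARRAY line per array, roughly:
#   ARRAY /dev/md0 level=raid5 num-devices=3 UUID=f83e3541:b5b63f10:a6d4720f:52a5051f
# appending those lines to /etc/mdadm/mdadm.conf makes assembly match on UUID,
# not on whatever /dev/sd* names the disks happen to get at boot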


Regards,
Marcin Krol


Re: Deleting mdadm RAID arrays

2008-02-06 Thread Marcin Krol
Tuesday 05 February 2008 21:12:32 Neil Brown wrote:

  % mdadm --zero-superblock /dev/sdb1
  mdadm: Couldn't open /dev/sdb1 for write - not zeroing
 
 That's weird.
 Why can't it open it?

Hell if I know. It's the first time I've seen such a thing.

 Maybe you aren't running as root (The '%' prompt is suspicious).

I am running as root; the % prompt is part of the obfuscation (I have
configured bash to display the IP as part of the prompt).

 Maybe the kernel has  been told to forget about the partitions of
 /dev/sdb.

But fdisk/cfdisk have no problem whatsoever finding the partitions.

 mdadm will sometimes tell it to do that, but only if you try to
 assemble arrays out of whole components.

 If that is the problem, then
     blockdev --rereadpt /dev/sdb

I deleted the LVM devices that were sitting on top of the RAID and reinstalled mdadm.

% blockdev --rereadpt /dev/sdf
BLKRRPART: Device or resource busy

% mdadm /dev/md2 --fail /dev/sdf1
mdadm: set /dev/sdf1 faulty in /dev/md2

% blockdev --rereadpt /dev/sdf
BLKRRPART: Device or resource busy

% mdadm /dev/md2 --remove /dev/sdf1
mdadm: hot remove failed for /dev/sdf1: Device or resource busy

lsof /dev/sdf1 gives ZERO results.

arrrRRRGH
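
The "busy" doesn't have to come from a process, which is why lsof shows nothing:
md itself (or something stacked on top of it, like LVM) can be holding the device.
A minimal sketch of checking and releasing it, assuming nothing on md2 is mounted:

% cat /proc/mdstat              # is sdf1 still listed under an md array?
% mdadm --stop /dev/md2         # stopping the array releases its member devices
% blockdev --rereadpt /dev/sdf
% mdadm --zero-superblock /dev/sdf1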

Regards,
Marcin Krol


Re: Deleting mdadm RAID arrays

2008-02-06 Thread Marcin Krol
Tuesday 05 February 2008 12:43:31 Moshe Yudkowsky wrote:

  1. Where does this info on the array reside?! I have deleted /etc/mdadm/mdadm.conf
  and the /dev/md devices, and yet it comes seemingly out of nowhere.

 /boot has a copy of mdadm.conf so that / and other drives can be started 
 and then mounted. update-initramfs will update /boot's copy of mdadm.conf.

Yeah, I found that out while deleting the mdadm package...
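
In other words, after changing /etc/mdadm/mdadm.conf the copy inside the initramfs
has to be regenerated as well; on stock Debian that should just be:

% update-initramfs -u    # rebuilds the initramfs in /boot with the current mdadm.conf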

Thanks for the answers anyway, everyone.

Regards,
Marcin Krol




Re: Deleting mdadm RAID arrays

2008-02-06 Thread Marcin Krol
Wednesday 06 February 2008 11:11:51 Peter Rabbitson wrote:
  lsof /dev/sdf1 gives ZERO results.
  
 
 What does this say:
 
   dmsetup table


% dmsetup table
vg-home: 0 61440 linear 9:2 384
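
9:2 is major 9, minor 2, i.e. /dev/md2, so the vg-home logical volume is still
sitting on top of md2 and holding it busy. A sketch of releasing it, assuming the
volume group really is called vg (inferred from the dm name) and nothing on it is
mounted:

% vgchange -an vg          # deactivate the VG so its LVs stop holding md2
% dmsetup remove vg-home   # or remove just this one dm mapping directly
% mdadm --stop /dev/md2    # after that the array can be stopped and its members freed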

Regards,
Marcin Krol


Re: Deleting mdadm RAID arrays

2008-02-06 Thread Marcin Krol
Wednesday 06 February 2008 12:22:00:
  I have had a problem with a RAID array (udev messed up the disk names; I had
  RAID on whole disks only, without RAID partitions)
 
 Do you mean that you originally used /dev/sdb for the RAID array? And now you
 are using /dev/sdb1?

That's reconfigured now, so it doesn't matter (I started the host in single-user
mode and created partitions, as opposed to previously running RAID on whole disks).
 
 Given the system seems confused I wonder if this may be relevant?

I don't think so. I tried most mdadm operations (fail, remove, etc.) on the disks
(like sdb) and the partitions (like sdb1) and got identical messages for both.


-- 
Marcin Krol



Re: Deleting mdadm RAID arrays

2008-02-06 Thread Marcin Krol
{ID_SERIAL_SHORT}=="WD-WMAMY1707974-part1", NAME="d_5p1"

KERNEL=="sd*", SUBSYSTEM=="block", ENV{ID_SERIAL_SHORT}=="WD-WMAMY1696130", NAME="d_6"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{ID_SERIAL_SHORT}=="WD-WMAMY1696130-part1", NAME="d_6p1"


/etc/udev/rules.d % cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active(auto-read-only) raid5 sdc1[0] sde1[3](S) sdd1[1]
  781417472 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]

md1 : active(auto-read-only) raid5 sdf1[0] sdb1[3](S) sda1[1]
  781417472 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]

md0 consists of sdc1, sde1 and sdd1, even though when creating it I asked it to
use d_1, d_2 and d_3 (this is probably written on the particular disk/partition
itself, but I have no idea how to clean this up - mdadm --zero-superblock /dev/d_1
again produces mdadm: Couldn't open /dev/d_1 for write - not zeroing).
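
If it helps: the membership really is recorded on each disk, in the md superblock,
and can be read back with mdadm --examine (assuming the devices can at least be
opened read-only):

% mdadm --examine /dev/sdc1    # shows the array UUID and this device's role/slot
% mdadm --examine --scan       # one ARRAY line per array found by scanning all superblocks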


/etc/mdadm % mdadm -Q --detail /dev/md0
/dev/md0:
Version : 00.90.03
  Creation Time : Wed Feb  6 12:24:49 2008
 Raid Level : raid5
 Array Size : 781417472 (745.22 GiB 800.17 GB)
  Used Dev Size : 390708736 (372.61 GiB 400.09 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Wed Feb  6 12:34:00 2008
  State : clean, degraded
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

 Layout : left-symmetric
 Chunk Size : 64K

   UUID : f83e3541:b5b63f10:a6d4720f:52a5051f
 Events : 0.14

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/d_1
       1       8       49        1      active sync   /dev/d_2
       2       0        0        2      removed

       3       8       65        -      spare   /dev/d_3




-- 
Marcin Krol



Deleting mdadm RAID arrays

2008-02-05 Thread Marcin Krol
Hello everyone,

I have had a problem with a RAID array (udev messed up the disk names; I had RAID
on whole disks only, without RAID partitions) on a Debian Etch server with 6 disks,
so I decided to rearrange this.

I deleted the disks from the (two RAID-5) arrays, deleted the md* devices from /dev,
created /dev/sd[a-f]1 Linux raid autodetect partitions and rebooted the host.

Now the mdadm startup script is writing, in a loop, a message like: mdadm: warning:
/dev/sda1 and /dev/sdb1 have similar superblocks. If they are not identical, --zero
the superblock ...

The host can't boot up now because of this.

If I boot the server with some disks, I can't even zero that superblock:

% mdadm --zero-superblock /dev/sdb1
mdadm: Couldn't open /dev/sdb1 for write - not zeroing

It's the same even after:

% mdadm --manage /dev/md2 --fail /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md2


Now, I have NEVER created a /dev/md2 array, yet it shows up automatically!

% cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md2 : active(auto-read-only) raid1 sdb1[1]
  390708736 blocks [3/1] [_U_]

md1 : inactive sda1[2]
  390708736 blocks

unused devices: <none>


Questions:

1. Where does this info on the array reside?! I have deleted /etc/mdadm/mdadm.conf
and the /dev/md devices, and yet it comes seemingly out of nowhere.

2. How can I delete that damn array so it doesn't hang my server up in a loop?
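
(A minimal sketch of the cleanup that turns out to be needed here, assuming nothing
is mounted on the arrays: the auto-assembled arrays have to be stopped first, and
only then can the superblocks be zeroed.)

% mdadm --stop /dev/md1
% mdadm --stop /dev/md2
% mdadm --zero-superblock /dev/sda1
% mdadm --zero-superblock /dev/sdb1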


-- 
Marcin Krol
