Re: libata hotplug and md raid?

2007-01-10 Thread Mike Accetta

I am currently looking at using md RAID1 and libata hotplug under 2.6.19.
This relevant thread from Oct 2006

http://thread.gmane.org/gmane.linux.raid/13321/focus=13321

tailed off after this proposal from Neil Brown:

 On Monday October 16, [EMAIL PROTECTED] wrote:
   So the question remains: How will hotplug and md work together?
   
   How does md and hotplug work together for current hotplug devices?
  
  I have the same questions.
  
  How does this work in a pure SCSI environment? (has it been tested?)
  If something should change, should those changes be in the MD layer?
  Or can this *really* all be done nicely from userspace?  How?
 
 I would imagine that device removal would work like this:
  1/  you unplug the device
  2/ kernel notices and generates an unplug event to udev.
  3/ Udev does all the work to try to disconnect the device:
  force unmount (though that doesn't work for most filesystems)
  remove from dm
  remove from md (mdadm /dev/mdwhatever --fail /dev/dead --remove 
 /dev/dead)
  4/ Udev removes the node from /dev.
 
 udev can find out what needs to be done by looking at
 /sys/block/whatever/holders. 
 
 I don't know exactly how to get udev to do this, or whether there
 would be 'issues' in getting it to work reliably.  However if anyone
 wants to try I'm happy to help out where I can.
 
 NeilBrown

Not seeing any subsequent reports on the list, I decided to try
implementing the proposed approach.  The immdiate problem I ran into
was that /sys appears to have been cleaned up before udev sees the
remove event and the /sys/block/whatever/holders file is no longer
even around to consult at that point.  As a secondary problem, the
/dev/dead file is also apparently removed by udev before any programs
mentioned in removal rules get a chance to run so there is no longer any
device to provide to mdadm to remove at the time the program does run,
even if it had been possible to find out what md files were holders of
the removed block device to begin with.  Do I have the details right?
Any new thoughts in the last few months about how it would be best to
solve this problem?
--
Mike Accetta

ECI Telecom Ltd.
Data Networking Division (previously Laurel Networks)
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: libata hotplug and md raid?

2006-10-17 Thread Gabor Gombas
On Tue, Oct 17, 2006 at 11:58:03AM +1000, Neil Brown wrote:

 udev can find out what needs to be done by looking at
 /sys/block/whatever/holders. 

Are you sure?

$ cat /proc/mdstat
[...]
md0 : active raid1 sdd1[1] sdc1[0] sdb1[2] sda1[3]
  393472 blocks [4/4] []
[...]
$ ls -l /sys/block/sda/holders
total 0

Vanilla 2.6.18 kernel. In fact, all the /sys/block/*/holders directories
are empty here.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: libata hotplug and md raid?

2006-10-17 Thread Gabor Gombas
On Tue, Oct 17, 2006 at 10:07:07AM +0200, Gabor Gombas wrote:

 Vanilla 2.6.18 kernel. In fact, all the /sys/block/*/holders directories
 are empty here.

Never mind, I just found the per-partition holders directories. Argh.

Gabor

-- 
 -
 MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
 -
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: libata hotplug and md raid?

2006-10-16 Thread Mark Lord

Leon Woestenberg wrote:

Hello all,

On 9/13/06, Tejun Heo [EMAIL PROTECTED] wrote:

Ric Wheeler wrote:
 Leon Woestenberg wrote:
 In short, I use ext3 over /dev/md0 over 4 SATA drives /dev/sd[a-d]
 each driven by libata ahci. I unplug then replug the drive that is
 rebuilding in RAID-5.

 When I unplug a drive, /dev/sda is removed, hotplug seems to work to
 the point where proc/mdstat shows the drive failed, but not removed.

Yeap, that sounds about right.

 Every other notion of the drive (in kernel and udev /dev namespace)
 seems to be gone after unplugging. I cannot manually removed the drive
 using mdadm, because it tells me the drive does not exist.

I see.  That's a problem.  Can you use /dev/.static/dev/sda instead?  If
you can't find those static nodes, just create one w/ 'mknod
my-static-sda b 8 0' and use it.



I did further testing of the ideas set out in this thread.

Although I can use (1) static device nodes, or (2) persistent naming
with the proper udev rules, each has its own kind of problems with md.

As long as the kernel announces drives as disappeared but md still
holds a lock, replugging drives will map to other major:minor number
no matter what I try in userspace.

Static device nodes will therefore not help me select the drive that
was unplugged/plugged per se.

Persistent naming using udev works OK (I used /dev/bay0 through
/dev/bay3 to pinpoint the drive bays) but these disappear upon
unplugging, while md keeps a lock to the major:minor, so replugging
will move it to different major:minor numbers.

So the question remains: How will hotplug and md work together?

How does md and hotplug work together for current hotplug devices?


I have the same questions.

How does this work in a pure SCSI environment? (has it been tested?)
If something should change, should those changes be in the MD layer?
Or can this *really* all be done nicely from userspace?  How?

I've got to fix some problems related to this, for a couple of clients,
and would like to Do It Right, or as close to Right as reality permits.

Cheers
--
Mark Lord
Real-Time Remedies Inc.
[EMAIL PROTECTED]

-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: libata hotplug and md raid?

2006-10-16 Thread Neil Brown
On Monday October 16, [EMAIL PROTECTED] wrote:
  So the question remains: How will hotplug and md work together?
  
  How does md and hotplug work together for current hotplug devices?
 
 I have the same questions.
 
 How does this work in a pure SCSI environment? (has it been tested?)
 If something should change, should those changes be in the MD layer?
 Or can this *really* all be done nicely from userspace?  How?

I would imagine that device removal would work like this:
 1/  you unplug the device
 2/ kernel notices and generates an unplug event to udev.
 3/ Udev does all the work to try to disconnect the device:
 force unmount (though that doesn't work for most filesystems)
 remove from dm
 remove from md (mdadm /dev/mdwhatever --fail /dev/dead --remove /dev/dead)
 4/ Udev removes the node from /dev.

udev can find out what needs to be done by looking at
/sys/block/whatever/holders. 

I don't know exactly how to get udev to do this, or whether there
would be 'issues' in getting it to work reliably.  However if anyone
wants to try I'm happy to help out where I can.

NeilBrown
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: libata hotplug and md raid?

2006-09-15 Thread Leon Woestenberg

Hello all,

On 9/15/06, Greg KH [EMAIL PROTECTED] wrote:

On Thu, Sep 14, 2006 at 02:24:45PM +0200, Leon Woestenberg wrote:
 On 9/13/06, Tejun Heo [EMAIL PROTECTED] wrote:
 Ric Wheeler wrote:
  Leon Woestenberg wrote:
  In short, I use ext3 over /dev/md0 over 4 SATA drives /dev/sd[a-d]
  each driven by libata ahci. I unplug then replug the drive that is
  rebuilding in RAID-5.
 ...
 So the question remains: How will hotplug and md work together?
 ...
 How does md and hotplug work together for current hotplug devices?

The answer to both of these questions is, not very well.  Me and Kay
have been talking with Neil Brown about this and he agrees that it needs
to be fixed up.  That md device needs to have proper lifetime rules and
go away proper.  Hopefully it gets fixed soon.



I will try to catch any kernel work on this so that I can pick it up
for testing.

For the moment, I'll try to make this work as best as possible using
udev rules and userspace (mdadm). I suppose I can act on both unplugs
and plugs, both before and after the event, is that true?

Regards,

Leon Woestenberg.

--
Leon
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: libata hotplug and md raid?

2006-09-14 Thread Turbo Fredriksson
 Tejun == Tejun Heo [EMAIL PROTECTED] writes:

Tejun Would it be better for md to listen to
Tejun hotplug events and auto-remove dead devices or is it
Tejun something which belongs to userland?

From my perspective (User+Admin), I'd _very much_ like to have
(physically) removed disks be removed by md.

This would greatly help me when a disk fails on any of my systems.
They are all SPARC's (with a few x86). None of which have any
monitor attached. The x86's have, but that monitor is a couple of
hundred meters away...

So when I change drive, I first have to telnet into the terminal
switch port for that machine, do the mdadm commands. Then physically
change the drive. Then back to a machine and telnet back in to the
machine and hot-add the disk

Granted, it don't take that much time, but it's a couple of extra
steps (literally :) that I'd prefer not to do/take...
-- 
supercomputer toluene Mossad Semtex assassination Noriega Rule Psix
cryptographic critical NORAD terrorist killed fissionable Marxist
genetic
[See http://www.aclu.org/echelonwatch/index.html for more about this]
[Or http://www.europarl.eu.int/tempcom/echelon/pdf/rapport_echelon_en.pdf]
If neither of these works, try http://www.aclu.org and search for echelon.
Note. This is a real, not fiction.
http://www.theregister.co.uk/2001/09/06/eu_releases_echelon_spying_report/
http://www.aclu.org/safefree/nsaspying/23989res20060131.html#echelon
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: libata hotplug and md raid?

2006-09-14 Thread Leon Woestenberg

Hello all,

On 9/13/06, Tejun Heo [EMAIL PROTECTED] wrote:

Ric Wheeler wrote:
 Leon Woestenberg wrote:
 In short, I use ext3 over /dev/md0 over 4 SATA drives /dev/sd[a-d]
 each driven by libata ahci. I unplug then replug the drive that is
 rebuilding in RAID-5.

 When I unplug a drive, /dev/sda is removed, hotplug seems to work to
 the point where proc/mdstat shows the drive failed, but not removed.

Yeap, that sounds about right.

 Every other notion of the drive (in kernel and udev /dev namespace)
 seems to be gone after unplugging. I cannot manually removed the drive
 using mdadm, because it tells me the drive does not exist.

I see.  That's a problem.  Can you use /dev/.static/dev/sda instead?  If
you can't find those static nodes, just create one w/ 'mknod
my-static-sda b 8 0' and use it.



I did further testing of the ideas set out in this thread.

Although I can use (1) static device nodes, or (2) persistent naming
with the proper udev rules, each has its own kind of problems with md.

As long as the kernel announces drives as disappeared but md still
holds a lock, replugging drives will map to other major:minor number
no matter what I try in userspace.

Static device nodes will therefore not help me select the drive that
was unplugged/plugged per se.

Persistent naming using udev works OK (I used /dev/bay0 through
/dev/bay3 to pinpoint the drive bays) but these disappear upon
unplugging, while md keeps a lock to the major:minor, so replugging
will move it to different major:minor numbers.

So the question remains: How will hotplug and md work together?

How does md and hotplug work together for current hotplug devices?

Regards,

Leon.
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: libata hotplug and md raid?

2006-09-14 Thread Greg KH
On Thu, Sep 14, 2006 at 02:24:45PM +0200, Leon Woestenberg wrote:
 Hello all,
 
 On 9/13/06, Tejun Heo [EMAIL PROTECTED] wrote:
 Ric Wheeler wrote:
  Leon Woestenberg wrote:
  In short, I use ext3 over /dev/md0 over 4 SATA drives /dev/sd[a-d]
  each driven by libata ahci. I unplug then replug the drive that is
  rebuilding in RAID-5.
 
  When I unplug a drive, /dev/sda is removed, hotplug seems to work to
  the point where proc/mdstat shows the drive failed, but not removed.
 
 Yeap, that sounds about right.
 
  Every other notion of the drive (in kernel and udev /dev namespace)
  seems to be gone after unplugging. I cannot manually removed the drive
  using mdadm, because it tells me the drive does not exist.
 
 I see.  That's a problem.  Can you use /dev/.static/dev/sda instead?  If
 you can't find those static nodes, just create one w/ 'mknod
 my-static-sda b 8 0' and use it.
 
 
 I did further testing of the ideas set out in this thread.
 
 Although I can use (1) static device nodes, or (2) persistent naming
 with the proper udev rules, each has its own kind of problems with md.
 
 As long as the kernel announces drives as disappeared but md still
 holds a lock, replugging drives will map to other major:minor number
 no matter what I try in userspace.
 
 Static device nodes will therefore not help me select the drive that
 was unplugged/plugged per se.
 
 Persistent naming using udev works OK (I used /dev/bay0 through
 /dev/bay3 to pinpoint the drive bays) but these disappear upon
 unplugging, while md keeps a lock to the major:minor, so replugging
 will move it to different major:minor numbers.
 
 So the question remains: How will hotplug and md work together?
 
 How does md and hotplug work together for current hotplug devices?

The answer to both of these questions is, not very well.  Me and Kay
have been talking with Neil Brown about this and he agrees that it needs
to be fixed up.  That md device needs to have proper lifetime rules and
go away proper.  Hopefully it gets fixed soon.

thanks,

greg k-h
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: libata hotplug and md raid?

2006-09-13 Thread Ric Wheeler

(Adding Tejun  Greg KH to this thread)

Leon Woestenberg wrote:


Hello all,

I am testing the (work-in-progress / upcoming) libata SATA hotplug.
Hotplugging alone seems to work, but not well in combination with md
RAID.

Here is my report and a question about intended behaviour. Mainstream
2.6.17.11 kernel patched with libata-tj-2.6.17.4-20060710.tar.bz2 from
http://home-tj.org/files/libata-tj-stable/.

Supermicro P8SCT motherboard with Intel ICH6R, using AHCI libata driver.

In short, I use ext3 over /dev/md0 over 4 SATA drives /dev/sd[a-d]
each driven by libata ahci. I unplug then replug the drive that is
rebuilding in RAID-5.

When I unplug a drive, /dev/sda is removed, hotplug seems to work to
the point where proc/mdstat shows the drive failed, but not removed.

Every other notion of the drive (in kernel and udev /dev namespace)
seems to be gone after unplugging. I cannot manually removed the drive
using mdadm, because it tells me the drive does not exist.

Replugging the drive brings it back as /dev/sde, md0 will not pick it up.


I have a similar setup, AHCI + 4 drives but using a RAID-1 group.  The 
thing that you are looking for is persistent device naming and should 
work properly if you can tweak udev/hotplug correctly.


I have verified that a drive pull/drive reinsert on a mainline kernel 
with a SLES10 base does provide this (first insertion gives me sdb, pull 
followed by reinsert still is sdb), but have not tested interaction with 
RAID since I am focused on the bad block handling at the moment.  I will 
add this to my list ;-)




The expected behaviour (from me) is that the drive re-appears as 
/dev/sda.


What is the intended behaviour of md in this case?

Should some user-space application fail-remove a drive as a pre-action
of the unplug event from udev, or should md fully remove the drive
within kernel space??

See kernel/udev/userspace messages in chronological order,
with my actions marked between  , at this web
page:

http://pastebin.ca/168798

Thanks,
--
Leon



-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: libata hotplug and md raid?

2006-09-13 Thread Tejun Heo

Ric Wheeler wrote:

(Adding Tejun  Greg KH to this thread)

Adding linux-ide to this thread.


Leon Woestenberg wrote:

[--snip--]

In short, I use ext3 over /dev/md0 over 4 SATA drives /dev/sd[a-d]
each driven by libata ahci. I unplug then replug the drive that is
rebuilding in RAID-5.

When I unplug a drive, /dev/sda is removed, hotplug seems to work to
the point where proc/mdstat shows the drive failed, but not removed.


Yeap, that sounds about right.


Every other notion of the drive (in kernel and udev /dev namespace)
seems to be gone after unplugging. I cannot manually removed the drive
using mdadm, because it tells me the drive does not exist.


I see.  That's a problem.  Can you use /dev/.static/dev/sda instead?  If 
you can't find those static nodes, just create one w/ 'mknod 
my-static-sda b 8 0' and use it.



Replugging the drive brings it back as /dev/sde, md0 will not pick it up.


No, it won't.

I have a similar setup, AHCI + 4 drives but using a RAID-1 group.  The 
thing that you are looking for is persistent device naming and should 
work properly if you can tweak udev/hotplug correctly.


I have verified that a drive pull/drive reinsert on a mainline kernel 
with a SLES10 base does provide this (first insertion gives me sdb, pull 
followed by reinsert still is sdb), but have not tested interaction with 
RAID since I am focused on the bad block handling at the moment.  I will 
add this to my list ;-)




The expected behaviour (from me) is that the drive re-appears as 
/dev/sda.


Apart from persistent naming Ric mentioned above, the reason why you 
don't get sda back is md is holding the internal device.  It's removed 
from all visible name spaces but md still holds a reference, so the 
device cannot be destroyed.  So, when a new device comes along, sda is 
occupied by the dead device, and the new one gets the next available 
slot, which happens to be sde in your case.



What is the intended behaviour of md in this case?

Should some user-space application fail-remove a drive as a pre-action
of the unplug event from udev, or should md fully remove the drive
within kernel space??


I'm curious too.  Would it be better for md to listen to hotplug events 
and auto-remove dead devices or is it something which belongs to userland?


Thanks.

--
tejun
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: libata hotplug and md raid?

2006-09-13 Thread Leon Woestenberg

Hello Tejun et al,

On 9/13/06, Tejun Heo [EMAIL PROTECTED] wrote:

Ric Wheeler wrote:
 (Adding Tejun  Greg KH to this thread)
Adding linux-ide to this thread.

 Leon Woestenberg wrote:
[--snip--]
 In short, I use ext3 over /dev/md0 over 4 SATA drives /dev/sd[a-d]
 each driven by libata ahci. I unplug then replug the drive that is
 rebuilding in RAID-5.

 When I unplug a drive, /dev/sda is removed, hotplug seems to work to
 the point where proc/mdstat shows the drive failed, but not removed.

Yeap, that sounds about right.


I suppose this is 'right', but only if we think of a hot unplugged
device as a failing device.

As in most cases we cannot tell if the hot unplug was intentional or
not (because we see a device disappearing from the phy and we have no
other sensory data available), assuming the drive 'fails' seems
reasonable.


 Every other notion of the drive (in kernel and udev /dev namespace)
 seems to be gone after unplugging. I cannot manually removed the drive
 using mdadm, because it tells me the drive does not exist.

I see.  That's a problem.  Can you use /dev/.static/dev/sda instead?  If
you can't find those static nodes, just create one w/ 'mknod
my-static-sda b 8 0' and use it.


Yes, that works.

Also, replugging brings back the device as /dev/sda, indicating md is
no longer holding the internal lock.


Apart from persistent naming Ric mentioned above, the reason why you
don't get sda back is md is holding the internal device.  It's removed
from all visible name spaces but md still holds a reference, so the
device cannot be destroyed.


To me, this seems a bug, as the kernel already told everyone else
(userland) that it thinks the device is no longer there.

This contradicts the fact that the kernel itself has dangling references to it.


So, when a new device comes along, sda is
occupied by the dead device, and the new one gets the next available
slot, which happens to be sde in your case.

 What is the intended behaviour of md in this case?

 Should some user-space application fail-remove a drive as a pre-action
 of the unplug event from udev, or should md fully remove the drive
 within kernel space??

I'm curious too.  Would it be better for md to listen to hotplug events
and auto-remove dead devices or is it something which belongs to userland?


...also considering race conditions between userland and kernel in that case...

My first thoughts would be that a unplugged device should be handled
differently than a device that failed in other senses, or at least
this should be considered by the kernel developers.

Thanks for the response so far, regards,
--
Leon
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html