Re: raid issues after power failure

2006-07-02 Thread Francois Barre

2006/7/1, Ákos Maróy [EMAIL PROTECTED]:
> Neil Brown wrote:
> > Try adding '--force' to the -A line.
> > That tells mdadm to try really hard to assemble the array.
>
> thanks, this seems to have solved the issue...
>
>
> Akos




Well, Neil, I'm wondering,
It seemed to me from Akos' description of the problem that re-adding
the drive (with mdadm not complaining about anything) would trigger a
resync that would not even start.
But since your '--force' does the trick, it implies that the resync was
never really triggered without it... Or did I miss a bit of the log
Akos provided that said so?
Could there be a place here for an error message?
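
(For readers digging this thread out of the archive later: the
incantation in question is, schematically,

  mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1

with placeholder device names, i.e. '--force' added to the same
-A/--assemble line that was used before.)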

More generally, could it be useful to build up a recovery howto, based
on the experiences on this list (I guess 90% of the posts are related
to recoveries)?
Not in terms of a standard disk loss, but in terms of a power failure
or a major disk problem. You know, re-creating the array, rolling the
dice, and *tada!* your data is back again... I could not find any doc
about this.

Regards,


Re: raid issues after power failure

2006-07-02 Thread David Greaves
Francois Barre wrote:
> 2006/7/1, Ákos Maróy [EMAIL PROTECTED]:
> > Neil Brown wrote:
> > > Try adding '--force' to the -A line.
> > > That tells mdadm to try really hard to assemble the array.
> >
> > thanks, this seems to have solved the issue...
> >
> > Akos
>
> Well, Neil, I'm wondering,
> It seemed to me from Akos' description of the problem that re-adding
> the drive (with mdadm not complaining about anything) would trigger a
> resync that would not even start.
> But since your '--force' does the trick, it implies that the resync was
> never really triggered without it... Or did I miss a bit of the log
> Akos provided that said so?
> Could there be a place here for an error message?
>
> More generally, could it be useful to build up a recovery howto, based
> on the experiences on this list (I guess 90% of the posts are related
> to recoveries)?
> Not in terms of a standard disk loss, but in terms of a power failure
> or a major disk problem. You know, re-creating the array, rolling the
> dice, and *tada!* your data is back again... I could not find any doc
> about this.
>

Francois,
I have started to put a wiki in place here:
  http://linux-raid.osdl.org/

My reasoning was *exactly* that - there is reference information for md
but sometimes the incantations need a little explanation and often the
diagnostics are not obvious...

I've been subscribed to linux-raid since the middle of last year and
I've been going through old messages looking for nuggets to base some
docs around.

I haven't had a huge amount of time recently so I've just scribbled on
it for now - I wanted to present something a little more polished to the
community - but since you're asking...

So don't consider this an official announcement of a usable work yet -
more a 'Please contact me if you would like to contribute' (just so I
can keep track of interested parties) and we can build something up...

David


raid5 write performance

2006-07-02 Thread Raz Ben-Jehuda(caro)

Neil hello.

I have been looking at the raid5 code trying to understand why write
performance is so poor.
If I am not mistaken, it seems that you issue writes in units of one
page and no more, no matter what buffer size I am using.

1. Is this page directed only to the parity disk?
2. How can I increase the write throughput?


Thank you
--
Raz


Re: raid5 write performance

2006-07-02 Thread Neil Brown
On Sunday July 2, [EMAIL PROTECTED] wrote:
> Neil hello.
>
> I have been looking at the raid5 code trying to understand why write
> performance is so poor.

raid5 write performance is expected to be poor, as you often need to
pre-read data or parity before the write can be issued.
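
(To make the pre-read point concrete, here is the textbook
read-modify-write parity rule for a small RAID-5 write - an
illustration only, not the actual raid5.c code path; all values are
made up:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint8_t old_data   = 0x5a;	/* must first be read back from the data disk   */
	uint8_t old_parity = 0x3c;	/* must first be read back from the parity disk */
	uint8_t new_data   = 0xf0;	/* what the caller actually wants written       */

	/* P_new = P_old xor D_old xor D_new: one small write costs two
	 * reads plus two writes. */
	uint8_t new_parity = old_parity ^ old_data ^ new_data;

	printf("new parity byte = 0x%02x\n", new_parity);
	return 0;
}

Full-stripe writes avoid the pre-reads entirely, because parity can be
computed from the new data alone.)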

> If I am not mistaken, it seems that you issue writes in units of one
> page and no more, no matter what buffer size I am using.

I doubt the small write size would contribute more than a couple of
percent to the speed issue.  Scheduling (when to write, when to
pre-read, when to wait a moment) is probably much more important.

 
> 1. Is this page directed only to the parity disk?

No.  All drives are written in one-page units.  Each request is
divided into one-page chunks; these one-page chunks are gathered -
where possible - into strips, and the strips are handled as units
(where a strip is like a stripe, only one page wide rather than one
chunk wide - if that makes sense).
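
(A rough way to picture that, ignoring parity rotation and using
made-up geometry - 3 data disks, 64KiB chunks, 4KiB pages - is the
sketch below; it only illustrates the strip idea, it is not the md
code:

#include <stdio.h>

#define PAGE_SIZE  4096UL
#define CHUNK_SIZE (64 * 1024UL)	/* assumed chunk size            */
#define DATA_DISKS 3UL			/* assumed number of data disks  */

int main(void)
{
	unsigned long stripe_bytes = CHUNK_SIZE * DATA_DISKS;
	unsigned long pages_per_chunk = CHUNK_SIZE / PAGE_SIZE;
	unsigned long off;

	/* Walk a 256KiB request one page at a time; pages that share a
	 * (stripe, page-within-chunk) pair fall in the same strip and
	 * can be handled together. */
	for (off = 0; off < 256 * 1024UL; off += PAGE_SIZE) {
		unsigned long stripe = off / stripe_bytes;
		unsigned long in_stripe = off % stripe_bytes;
		unsigned long disk = in_stripe / CHUNK_SIZE;
		unsigned long page_in_chunk = (in_stripe % CHUNK_SIZE) / PAGE_SIZE;
		unsigned long strip = stripe * pages_per_chunk + page_in_chunk;

		printf("offset %7lu: stripe %lu, strip %2lu, data disk %lu\n",
		       off, stripe, strip, disk);
	}
	return 0;
}
)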

> 2. How can I increase the write throughput?

Look at scheduling patterns - what order are the blocks getting
written, do we pre-read when we don't need to, things like that.

The current code tries to do the right thing, and it certainly has
been worse in the past, but I wouldn't be surprised if it could still
be improved.

NeilBrown


[PATCH] enable auto=yes by default when using udev

2006-07-02 Thread Luca Berra

Hello,
The following patch aims at solving an issue that is confusing a lot of
users.
When using udev, device files are created only when devices are
registered with the kernel, and md devices are registered only when
started.
mdadm needs the device file _before_ starting the array, so when using
udev you must add --auto=yes to the mdadm command line or to the ARRAY
line in mdadm.conf.
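
(Schematically, with placeholder device names and UUID, the current
workaround is either

  mdadm --assemble --auto=yes /dev/md0 /dev/sda1 /dev/sdb1

on the command line, or an mdadm.conf entry such as

  ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx auto=yes

both of which the patch below makes unnecessary when udev is detected.)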

The following patch makes auto=yes the default when using udev.

L.


--
Luca Berra -- [EMAIL PROTECTED]
   Communication Media & Services S.r.l.
 /"\
 \ /   ASCII RIBBON CAMPAIGN
  X    AGAINST HTML MAIL
 / \
* Sat Jun 24 2006 Luca Berra [EMAIL PROTECTED]
- automatically create devices if using udev

--- mdadm-2.5.1/mdadm.c.autof   2006-06-02 01:51:01.0 -0400
+++ mdadm-2.5.1/mdadm.c 2006-06-24 05:17:45.0 -0400
@@ -857,6 +857,13 @@
 		fputs(Usage, stderr);
 		exit(2);
 	}
+
+	/* if we are using udev and auto is not set, mdadm will almost
+	 * certainly fail, so we force it here.
+	 */
+	if (autof == 0 && access("/dev/.udevdb",F_OK) == 0)
+		autof=2;
+
 	/* Ok, got the option parsing out of the way
 	 * hopefully it's mostly right but there might be some stuff
 	 * missing
@@ -873,7 +880,7 @@
 			fprintf(stderr, Name ": an md device must be given in this mode\n");
 			exit(2);
 		}
-		if ((int)ident.super_minor == -2 && autof) {
+		if ((int)ident.super_minor == -2 && autof > 2 ) {
 			fprintf(stderr, Name ": --super-minor=dev is incompatible with --auto\n");
 			exit(2);
 		}


Re: [PATCH] enable auto=yes by default when using udev

2006-07-02 Thread Neil Brown
On Monday July 3, [EMAIL PROTECTED] wrote:
> Hello,
> The following patch aims at solving an issue that is confusing a lot of
> users.
> When using udev, device files are created only when devices are
> registered with the kernel, and md devices are registered only when
> started.
> mdadm needs the device file _before_ starting the array, so when using
> udev you must add --auto=yes to the mdadm command line or to the ARRAY
> line in mdadm.conf.
>
> The following patch makes auto=yes the default when using udev.
>

The principle I'm reasonably happy with, though you can now make this
the default with a line like

  CREATE auto=yes
in mdadm.conf.
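
(In a config file, with placeholder values, that looks like:

  DEVICE partitions
  CREATE auto=yes
  ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
)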

However

> +
> +	/* if we are using udev and auto is not set, mdadm will almost
> +	 * certainly fail, so we force it here.
> +	 */
> +	if (autof == 0 && access("/dev/.udevdb",F_OK) == 0)
> +		autof=2;
> +

I'm worried that this test is not very robust.
On my Debian/unstable system running udev, there is no
 /dev/.udevdb
though there is a
 /dev/.udev/db

I guess I could test for both, but then udev might change again.
I'd really like a more robust check.
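
(Purely to illustrate the test-for-both option - a sketch only; both
paths come from this thread, and future udev releases may move the
database yet again:

#include <unistd.h>

/* Heuristic only: treat udev as active if either runtime-db location
 * mentioned in this thread exists. */
int udev_seems_active(void)
{
	return access("/dev/.udevdb", F_OK) == 0 ||
	       access("/dev/.udev/db", F_OK) == 0;
}

It obviously inherits the same fragility, just for two paths instead
of one.)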

Maybe I could test if /dev was a mount point?

Any other ideas?

NeilBrown


Re: [PATCH] enable auto=yes by default when using udev

2006-07-02 Thread Jason Lunz
[EMAIL PROTECTED] said:
> Maybe I could test if /dev was a mount point?
> Any other ideas?

There's a udevd you can check for. I don't know whether that's a better
test or not.

Jason
