Re: Raid 10 question/problem [ot]

2007-01-29 Thread Bill Davidsen

Michael Tokarev wrote:
> Bill Davidsen wrote:
> []
>> RAID-10 is not the same as RAID 0+1.
>
> It is.  Yes, there's a separate module for raid10, but what it basically
> does is the same as what the raid0 module over two raid1 arrays will do.
> It's just a bit more efficient (fewer levels, more room for
> optimisations), easier to use (you'll have a single array instead of at
> least three), and a bit more flexible; at the same time it's less widely
> tested...
>
> But the end result is basically the same either way.

For values of "same" that exclude consideration of disk layout, 
throughput, overhead, system administration, and use of spares - those 
are all different. But both methods do write multiple copies of ones and 
zeros to the storage media.


Neil Brown, 08/23/2005:
 - A raid10 can consist of an odd number of drives (if you have a
   cabinet with, say, 8 slots, you can have 1 hot spare, and 7 drives
   in a raid10.  You cannot do that with LVM (or raid0) over raid1).
 - raid10 has a layout ('far') which theoretically can provide
   sequential read throughput that scales by number of drives, rather
   than number of raid1 pairs.  I say 'theoretically' because I think
   there are still issues with the read-balancing code that make this
   hard to get in practice (though increasing the read-ahead seems to
   help).
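
As a sketch of both points (untested, and the device names, sizes and
spare count here are invented), a seven-drive raid10 with one hot spare
and two 'far' copies of each block would be created along these lines:

  # mdadm -C /dev/md0 -l 10 -n 7 -x 1 -p f2 /dev/sd[b-i]1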


After testing about 40 configurations, I can say that write performance 
is better as well, for any given stripe cache size up to 4x the stripe 
size. I was looking at something else, but the numbers happen to be 
available.


--
Bill Davidsen <[EMAIL PROTECTED]>
  "We have more to fear from the bungling of the incompetent than from
the machinations of the wicked."  - from Slashdot


Re: Raid 10 question/problem [ot]

2007-01-28 Thread Jan Engelhardt

On Jan 28 2007 22:44, Michael Tokarev wrote:
>>> Mdadm creates those nodes automa[tg]ically - man mdadm, search for --auto.
>> 
>> Note that `mdadm -As` _is_ run on FC6 boot.
>
>See above -- man mdadm, search for --auto.  -A = --assemble, -s = --scan.

Oops, thank you. So

  mdadm -A -s --auto=yes

did the right thing, and probably is what was intended. And now for
tonight's $1000 question: why doesn't Fedora do that, making me edit
the mdadm -A -s line in /etc/rc.d/rc.sysinit?


-`J'
-- 


Re: Raid 10 question/problem [ot]

2007-01-28 Thread Jan Engelhardt

On Jan 28 2007 22:49, Michael Tokarev wrote:
>Bill Davidsen wrote:
>[]
>> RAID-10 is not the same as RAID 0+1.
>
>It is.  Yes, there's a separate module for raid10, but what it basically
>does is the same as what the raid0 module over two raid1 arrays will do.
>It's just a bit more efficient (fewer levels, more room for
>optimisations), easier to use (you'll have a single array instead of at
>least three), and a bit more flexible; at the same time it's less widely
>tested...

And most importantly, raid10 allows you to spread the array data over an odd
number of devices while still having [at least] 2 copies of each block.
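
E.g., a sketch with made-up device names - three devices, still two
copies of each block:

  # mdadm -C /dev/md0 -l 10 -n 3 /dev/sd[b-d]1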

Hm, I really wish resizing were implemented for raid0 and raid10 too... ;)


-`J'
-- 


Re: Raid 10 question/problem [ot]

2007-01-28 Thread Michael Tokarev
Bill Davidsen wrote:
[]
> RAID-10 is not the same as RAID 0+1.

It is.  Yes, there's a separate module for raid10, but what it basically
does is the same as what the raid0 module over two raid1 arrays will do.
It's just a bit more efficient (fewer levels, more room for
optimisations), easier to use (you'll have a single array instead of at
least three), and a bit more flexible; at the same time it's less widely
tested...

But the end result is basically the same either way.
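
I.e. (a sketch only, with made-up device names), these two setups end
up with essentially the same data placement:

  # mdadm -C /dev/md0 -l 1 -n 2 /dev/sda1 /dev/sdb1
  # mdadm -C /dev/md1 -l 1 -n 2 /dev/sdc1 /dev/sdd1
  # mdadm -C /dev/md2 -l 0 -n 2 /dev/md0 /dev/md1

versus the single array:

  # mdadm -C /dev/md0 -l 10 -n 4 /dev/sd[a-d]1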

/mjt


Re: Raid 10 question/problem [ot]

2007-01-28 Thread Michael Tokarev
Jan Engelhardt wrote:
> On Jan 28 2007 12:05, Michael Tokarev wrote:
[]
>> Mdadm creates those nodes automa[tg]ically - man mdadm, search for --auto.
> 
> Note that `mdadm -As` _is_ run on FC6 boot.

See above -- man mdadm, search for --auto.  -A = --assemble, -s = --scan.

>> In order for an md array to be started up on boot, it has to be specified
>> in /etc/mdadm.conf.  With proper DEVICE line in there.  That's all.
> 
> That's how it is, and it does not work.

Sure.  Because of this missing --auto flag.

> openSUSE 10.2:
> no mdadm.conf _at all_, /etc/init.d/boot.d/boot.md is chkconfig'ed _out_,
> _no_ md kernel module is loaded, and I still have all the /dev/md nodes.

And no udev.  Or, alternatively, *all* md devices are created by some other
script or somesuch.  There's No Magic (tm).

[]
> # mdadm -C /dev/md1 -e 1.0 -l 1 -n 2 /dev/sdb2 /dev/sdc2
> mdadm: error opening /dev/md1: No such file or directory
> 
> Showstopper.

Nonsense.  See above again, man mdadm, search for --auto.

[]
> You see, I have all the reason to be confused.

Yeah, this is quite... confusing.

It's all due to the way mdadm interacts with the kernel and how
udev works - all together.  The thing is, in order to assemble an
array, the proper device node has to be in place.  But udev won't
create it until the array is assembled.  A chicken-and-egg problem.

Exactly due to this, the node(s) can be created by mdadm (with the
--auto option, which can be specified in mdadm.conf too), AND/OR by
some startup script before invoking mdadm, AND/OR when the system
isn't broken by udevd (with the good ol' static /dev).
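
For illustration, a minimal mdadm.conf carrying the auto option might
look like this (a sketch only; the UUID is a placeholder):

  DEVICE partitions
  ARRAY /dev/md1 auto=yes UUID=...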

>> But in any case, this has exactly nothing to do with kernel.
>> It's 100% userspace issues, I'd say distribution-specific issues.

...And each distribution uses its own kludge/workaround/solution for
this stuff - which you've demonstrated... ;)

/mjt


Re: Raid 10 question/problem [ot]

2007-01-28 Thread Bill Davidsen

Marc Perkel wrote:
> I'm a little stumped trying to set up raid 10. I set
> it up and it worked but after a reboot it forgets my
> raid setup.
>
> Created 2 raid 1 arrays on md0 and md1, and they work
> and survive a reboot.
>
> However, I created a raid 0 on /dev/md2 made up of
> /dev/md0 and /dev/md1, and it worked, but it forgets it
> after I reboot. The device /dev/md2 fails to survive a
> reboot.
>
> Created the /etc/mdadm.conf file but that doesn't seem
> to have made a difference.
>
> What am I missing? Thanks in advance.


RAID-10 is not the same as RAID 0+1.

There's a linux-raid mailing list; the archives may help in 
understanding this. Either use RAID-10, or add md2 to mdadm.conf to 
get it started at boot. I suggest using RAID-10.
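
E.g. (untested sketches, with UUIDs left as placeholders): to keep the
raid0-over-raid1 stacking, mdadm.conf has to know about md2 and its
component arrays:

  DEVICE partitions /dev/md0 /dev/md1
  ARRAY /dev/md2 UUID=...

or, with the raid10 module, the whole thing is a single array (device
names assumed):

  # mdadm -C /dev/md2 -l 10 -n 4 /dev/sd[b-e]1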


--
Bill Davidsen <[EMAIL PROTECTED]>
  "We have more to fear from the bungling of the incompetent than from
the machinations of the wicked."  - from Slashdot


Re: Raid 10 question/problem [ot]

2007-01-28 Thread Jan Engelhardt

On Jan 28 2007 12:05, Michael Tokarev wrote:
>Jan Engelhardt wrote:
>> 
>> That's interesting. I am using Aurora Corona, and all but md0 vanishes.
>> (Reason for that is that udev does not create the nodes md1-md31 on
>> boot, so mdadm cannot assemble the arrays.)
>
>This is nonsense.
>
>Mdadm creates those nodes automa[tg]ically - man mdadm, search for --auto.
>Udev has exactly nothing to do with mdX nodes.

Note that `mdadm -As` _is_ run on FC6 boot.

>In order for an md array to be started up on boot, it has to be specified
>in /etc/mdadm.conf.  With proper DEVICE line in there.  That's all.

That's how it is, and it does not work.

openSUSE 10.2:
no mdadm.conf _at all_, /etc/init.d/boot.d/boot.md is chkconfig'ed _out_,
_no_ md kernel module is loaded, and I still have all the /dev/md nodes.

FC6 standard install:
no mdadm.conf, otherwise regular boot. /dev/md0 exists. Uhuh.

FC6 with two raids:

# fdisk -l
Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot   Start     End    Blocks   Id  System
/dev/sdb1             1     123    987966   fd  Linux raid autodetect
/dev/sdb2           124     246    987997+  fd  Linux raid autodetect
/dev/sdb3           247     369    987997+  fd  Linux raid autodetect
/dev/sdb4           370    1044   5421937+  fd  Linux raid autodetect

Disk /dev/sdc: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot   Start     End    Blocks   Id  System
/dev/sdc1             1     123    987966   fd  Linux raid autodetect
/dev/sdc2           124     246    987997+  fd  Linux raid autodetect
/dev/sdc3           247     369    987997+  fd  Linux raid autodetect
/dev/sdc4           370    1044   5421937+  fd  Linux raid autodetect

# mdadm -C /dev/md0 -e 1.0 -l 1 -n 2 /dev/sdb1 /dev/sdc1
mdadm: array /dev/md0 started.
# mdadm -C /dev/md1 -e 1.0 -l 1 -n 2 /dev/sdb2 /dev/sdc2
mdadm: error opening /dev/md1: No such file or directory

Showstopper.

# mknod /dev/md1 b 9 1
# mdadm -C /dev/md1 -e 1.0 -l 1 -n 2 /dev/sdb2 /dev/sdc2
mdadm: array /dev/md1 started.
# cat /etc/mdadm.conf
cat: /etc/mdadm.conf: No such file or directory
# echo "DEVICE /dev/sd[a-z][0-9]" >/etc/mdadm.conf
# mdadm --detail --scan >>/etc/mdadm.conf
# cat /etc/mdadm.conf
DEVICE /dev/sd[a-z][0-9]
ARRAY /dev/md0 level=raid1 num-devices=2 name=0
  UUID=5ded6a11:3b9072f6:ae46efc7:d1628ea7
ARRAY /dev/md1 level=raid1 num-devices=2 name=1
  UUID=2fda5608:d63d8287:761a7a09:68fe743f
# reboot

...
Starting udev: [ OK ]
Loading default keymap (us): [ OK ]
Setting hostname fc6.site: [ OK ]
mdadm: /dev/md0 has been started with 2 drives.
mdadm: error opening /dev/md1: No such file or directory
No devices found
Setting up Logical Volume Management: No volume groups found [ OK ]
...

Now with "DEVICE partitions" in mdadm.conf:

mdadm: /dev/md0 has been started with 2 drives.
mdadm: error opening /dev/md1: No such file or directory

You see, I have every reason to be confused.

>But in any case, this has exactly nothing to do with kernel.
>It's 100% userspace issues, I'd say distribution-specific issues.

On that, at least, I can agree.


-`J'
-- 


Re: Raid 10 question/problem [ot]

2007-01-28 Thread Michael Tokarev
Jan Engelhardt wrote:
> On Jan 27 2007 10:31, Marc Perkel wrote:
[]
>> Sorry about that. I'm using Fedora Core 6. /dev/md0
>> and /dev/md1, buth of which are raid 1 arrays survive
>> the reboot. But when I make a raid 0 out of those two
>> raid arrays that's what is vanishing.
> 
> That's interesting. I am using Aurora Corona, and all but md0 vanishes.
> (Reason for that is that udev does not create the nodes md1-md31 on
> boot, so mdadm cannot assemble the arrays.)

This is nonsense.

Mdadm creates those nodes automa[tg]ically - man mdadm, search for --auto.
Udev has exactly nothing to do with mdX nodes.

In order for an md array to be started up on boot, it has to be specified
in /etc/mdadm.conf.  With proper DEVICE line in there.  That's all.

If you're using raid0 on top of two raid1s (as opposed to using raid10
directly - which is more efficient and flexible), the DEVICE line in
mdadm.conf should be either `partitions' (the usually preferred way) or,
if a direct device list is specified, it should contain both the real
disks AND the raid1 arrays.
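
For instance (a sketch only; UUIDs elided):

  DEVICE /dev/sd[bc][1-4] /dev/md0 /dev/md1
  ARRAY /dev/md0 UUID=...
  ARRAY /dev/md1 UUID=...
  ARRAY /dev/md2 UUID=...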

But in any case, this has exactly nothing to do with kernel.
It's 100% userspace issues, I'd say distribution-specific issues.

/mjt


Re: Raid 10 question/problem [ot]

2007-01-27 Thread Jan Engelhardt

On Jan 27 2007 10:42, Marc Perkel wrote:
>> >
>> >I'm using Fedora Core 6. /dev/md0 and /dev/md1, both of which are raid
>> >1 arrays, survive the reboot. But when I make a raid 0 out of those two
>> >raid arrays that's what is vanishing.
>> 
>> That's interesting. I am using Aurora Corona [FC6+RHide], and all but
>> md0 vanishes. (Reason for that is that udev does not create the nodes
>> md1-md31 on boot, so mdadm cannot assemble the arrays.)
>
>What do you have to do to get UDEV to create /dev/md2? Is there a config
>file for that?

That's the big question. On openSUSE 10.2, all the md devices get created
automatically. I suppose that happens as part of udev processing all the
queued kernel events at bootup.

On a default FC6 install (i.e. without any raid), only /dev/md0 is
present (like in Aurora). That alone, plus the fact that you get an md1
there and I don't, is strange.

I think I found it. udev does not do md at all, for some reason.
This line in /etc/rc.d/rc.sysinit is quite "offending":

[ -x /sbin/nash ] && echo "raidautorun /dev/md0" | nash --quiet

Starting an init=/bin/bash prompt and doing "/usr/sbin/udevmonitor &" 
there reveals:

bash-3.1# echo raidautorun /dev/md0 | /sbin/nash --quiet
UEVENT[1169934663.372139] add@/block/md0
bash-3.1# echo raidautorun /dev/md1 | /sbin/nash --quiet
UEVENT[1169934667.601027] add@/block/md1

So only md0 ever gets raidautorun at boot - no sign of md1. (Wtf here!) 
I can see why it's broken, but nash-ing every md device sounds like the 
worst solution around. I'd say the Fedora boot process is severely 
broken wrt. md. Well, what does your rc.sysinit look like, since you 
seem to have an md1 floating around?



-`J'
-- 


Re: Raid 10 question/problem [ot]

2007-01-27 Thread Marc Perkel
Also - when running software raid 10 - what's a good
chunk size these days? Running raid 10 with 4 x 500 GB
SATA2 drives with 16 MB buffers?
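
E.g., would something like this (untested, device names made up) with
256 KiB chunks be reasonable?

  # mdadm -C /dev/md0 -l 10 -n 4 -c 256 /dev/sd[b-e]1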


Re: Raid 10 question/problem [ot]

2007-01-27 Thread Marc Perkel

--- Jan Engelhardt <[EMAIL PROTECTED]> wrote:

> On Jan 27 2007 10:31, Marc Perkel wrote:
> >--- Jan Engelhardt <[EMAIL PROTECTED]> wrote:
> >>
> >> >I'm a little stumped trying to set up raid 10. I set
> >> >it up and it worked but after a reboot it forgets my
> >> >raid setup.
> >>
> >> Now, let's hear the name of the distribution you use.
> >>
> >> BTW, is md1 also disappearing?
> >
> >Sorry about that. I'm using Fedora Core 6. /dev/md0
> >and /dev/md1, both of which are raid 1 arrays, survive
> >the reboot. But when I make a raid 0 out of those two
> >raid arrays that's what is vanishing.
>
> That's interesting. I am using Aurora Corona, and all but md0 vanishes.
> (Reason for that is that udev does not create the nodes md1-md31 on
> boot, so mdadm cannot assemble the arrays.)


What do you have to do to get UDEV to create /dev/md2?
Is there a config file for that?



Re: Raid 10 question/problem [ot]

2007-01-27 Thread Jan Engelhardt

On Jan 27 2007 10:31, Marc Perkel wrote:
>--- Jan Engelhardt <[EMAIL PROTECTED]> wrote:
>>
>> >I'm a little stumped trying to set up raid 10. I set
>> >it up and it worked but after a reboot it forgets my
>> >raid setup.
>>
>> Now, let's hear the name of the distribution you use.
>>
>> BTW, is md1 also disappearing?
>
>Sorry about that. I'm using Fedora Core 6. /dev/md0
>and /dev/md1, both of which are raid 1 arrays, survive
>the reboot. But when I make a raid 0 out of those two
>raid arrays that's what is vanishing.

That's interesting. I am using Aurora Corona, and all but md0 vanishes.
(Reason for that is that udev does not create the nodes md1-md31 on
boot, so mdadm cannot assemble the arrays.)


-`J'
-- 


Re: Raid 10 question/problem [ot]

2007-01-27 Thread Marc Perkel

--- Jan Engelhardt <[EMAIL PROTECTED]> wrote:

> >I'm a little stumped trying to set up raid 10. I set
> >it up and it worked but after a reboot it forgets my
> >raid setup.
>
> Now, let's hear the name of the distribution you use.
>
> BTW, is md1 also disappearing?

Sorry about that. I'm using Fedora Core 6. /dev/md0
and /dev/md1, both of which are raid 1 arrays, survive
the reboot. But when I make a raid 0 out of those two
raid arrays that's what is vanishing.

Thanks for your help.



Re: Raid 10 question/problem [ot]

2007-01-27 Thread Jan Engelhardt

>I'm a little stumped trying to set up raid 10. I set
>it up and it worked but after a reboot it forgets my
>raid setup.

Now, let's hear the name of the distribution you use.

BTW, is md1 also disappearing?


-`J'
-- 


Raid 10 question/problem [ot]

2007-01-27 Thread Marc Perkel
I'm a little stumped trying to set up raid 10. I set
it up and it worked but after a reboot it forgets my
raid setup.

Created 2 raid 1 arrays on md0 and md1, and they work
and survive a reboot.

However, I created a raid 0 on /dev/md2 made up of
/dev/md0 and /dev/md1, and it worked, but it forgets it
after I reboot. The device /dev/md2 fails to survive a
reboot.
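
Roughly what I did (from memory, so treat the device names as a
sketch):

  # mdadm -C /dev/md0 -l 1 -n 2 /dev/sdb1 /dev/sdc1
  # mdadm -C /dev/md1 -l 1 -n 2 /dev/sdb2 /dev/sdc2
  # mdadm -C /dev/md2 -l 0 -n 2 /dev/md0 /dev/md1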

Created the /etc/mdadm.conf file but that doesn't seem
to have made a difference.

What am I missing? Thanks in advance.


