Re: RAID0 performance question

2005-11-26 Thread Raz Ben-Jehuda(caro)
Look at the CPU consumption.

On 11/26/05, JaniD++ [EMAIL PROTECTED] wrote:
 Hello list,

 I have been searching for the bottleneck of my system, and found something I
 can't clearly understand.

 I use NBD with 4 disk nodes. (The raidtab is at the bottom of this mail.)

 cat /dev/nb# > /dev/null makes ~350 Mbit/s on each node.
 cat on /dev/nb0 + nb1 + nb2 + nb3 in parallel at the same time makes ~780-800
 Mbit/s - I think this is my network bottleneck.

 But cat /dev/md31 > /dev/null (RAID0, the sum of the 4 nodes) only makes
 ~450-490 Mbit/s, and I don't know why.

 Does somebody have an idea? :-)

 (nb31, nb30, nb29, nb28 are only the possible mirrors)

 Thanks
 Janos

 raiddev /dev/md1
 raid-level  1
 nr-raid-disks   2
 chunk-size  32
 persistent-superblock 1
 device  /dev/nb0
 raid-disk   0
 device  /dev/nb31
 raid-disk   1
 failed-disk /dev/nb31

 raiddev /dev/md2
 raid-level  1
 nr-raid-disks   2
 chunk-size  32
 persistent-superblock 1
 device  /dev/nb1
 raid-disk   0
 device  /dev/nb30
 raid-disk   1
 failed-disk /dev/nb30

 raiddev /dev/md3
 raid-level  1
 nr-raid-disks   2
 chunk-size  32
 persistent-superblock 1
 device  /dev/nb2
 raid-disk   0
 device  /dev/nb29
 raid-disk   1
 failed-disk /dev/nb29

 raiddev /dev/md4
 raid-level  1
 nr-raid-disks   2
 chunk-size  32
 persistent-superblock 1
 device  /dev/nb3
 raid-disk   0
 device  /dev/nb28
 raid-disk   1
 failed-disk /dev/nb28

 raiddev /dev/md31
 raid-level  0
 nr-raid-disks   4
 chunk-size  32
 persistent-superblock 1
 device  /dev/md1
 raid-disk   0
 device  /dev/md2
 raid-disk   1
 device  /dev/md3
 raid-disk   2
 device  /dev/md4
 raid-disk   3
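
 For reference, the same comparison can be reproduced with dd, and one knob
 worth checking on the striped device is its readahead (the device names
 follow the setup above; the readahead value is only an example):

     # per-node sequential read, one node at a time
     dd if=/dev/nb0 of=/dev/null bs=1M count=1024

     # all four nodes in parallel
     for d in nb0 nb1 nb2 nb3; do
         dd if=/dev/$d of=/dev/null bs=1M count=1024 &
     done
     wait

     # the RAID0 array itself
     dd if=/dev/md31 of=/dev/null bs=1M count=1024

     # check / raise the readahead on the RAID0 device (units: 512-byte sectors)
     blockdev --getra /dev/md31
     blockdev --setra 4096 /dev/md31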





--
Raz


Re: RAID0 performance question

2005-11-26 Thread JaniD++
Hello, Raz,

I think this is not a CPU usage problem. :-)
The system is divided into 4 cpusets, and each cpuset uses only one disk node
(CPU0-nb0, CPU1-nb1, ...).
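
For reference, that kind of per-CPU binding can also be done with taskset
(the PIDs below are taken from the top output further down; which nbd-client
serves which nb device is only assumed):

    taskset -pc 0 2404    # nbd-client for nb0 -> CPU0
    taskset -pc 1 2406    # nbd-client for nb1 -> CPU1
    taskset -pc 2 2408    # nbd-client for nb2 -> CPU2
    taskset -pc 3 2410    # nbd-client for nb3 -> CPU3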

This top output was taken while cat /dev/md31 (RAID0) was running:

Thanks,
Janos

 17:16:01  up 14:19,  4 users,  load average: 7.74, 5.03, 4.20
305 processes: 301 sleeping, 4 running, 0 zombie, 0 stopped
CPU0 states:  33.1% user  47.0% system   0.0% nice   0.0% iowait  18.0% idle
CPU1 states:  21.0% user  52.0% system   0.0% nice   6.0% iowait  19.0% idle
CPU2 states:   2.0% user  74.0% system   0.0% nice   3.0% iowait  18.0% idle
CPU3 states:  10.0% user  57.0% system   0.0% nice   5.0% iowait  26.0% idle
Mem:  4149412k av, 3961084k used,  188328k free,       0k shrd,  557032k buff
                    911068k active,           2881680k inactive
Swap:       0k av,       0k used,       0k free            2779388k cached

  PID USER PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
 2410 root   0 -19  1584  10836 S   48.3  0.0  21:57   3 nbd-client
16191 root  25   0  4832  820   664 R48.3  0.0   3:04   0 grep
 2408 root   0 -19  1588  11236 S   47.3  0.0  24:05   2 nbd-client
 2406 root   0 -19  1584  10836 S   40.8  0.0  22:56   1 nbd-client
18126 root  18   0  5780 1604   508 D38.0  0.0   0:12   1 dd
 2404 root   0 -19  1588  11236 S   36.2  0.0  22:56   0 nbd-client
  294 root  15   0 00 0 SW7.4  0.0   3:22   1 kswapd0
 2284 root  16   0 13500 5376  3040 S 7.4  0.1   8:53   2 httpd
18307 root  16   0  6320 2232  1432 S 4.6  0.0   0:00   2 sendmail
16789 root  16   0  5472 1552   952 R 3.7  0.0   0:03   3 top
 2431 root  10  -5 00 0 SW   2.7  0.0   7:32   2 md2_raid1
29076 root  17   0  4776  772   680 S 2.7  0.0   1:09   3 xfs_fsr
 6955 root  15   0  1588  10836 S 2.7  0.0   0:56   2 nbd-client

- Original Message - 
From: Raz Ben-Jehuda(caro) [EMAIL PROTECTED]
To: JaniD++ [EMAIL PROTECTED]
Cc: linux-raid@vger.kernel.org
Sent: Saturday, November 26, 2005 4:56 PM
Subject: Re: RAID0 performance question


 Look at the CPU consumption.

 On 11/26/05, JaniD++ [EMAIL PROTECTED] wrote:
  [original message and raidtab quoted in full - snipped; see above]


 --
 Raz



Booting from raid1 -- md: invalid raid superblock magic on sdb1

2005-11-26 Thread David M. Strang

Hello all --

I've read, and read, and read -- and I'm still not having ANY luck booting 
completely from a raid1 device.


This is my setup...

sda1 is booting, working great. I'm attempting to transition to a bootable 
raid1.


sdb1 is a 400GB partition -- it is type FD.

Disk /dev/sdb: 400.0 GB, 400088457216 bytes
2 heads, 4 sectors/track, 97677846 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

  Device Boot  Start End  Blocks   Id  System
/dev/sdb1   197677846   390711382   fd  Linux raid autodetect


I have created my raid1 mirror with the following command:

mdadm --create /dev/md_d0 -e1 -ap --level=1 --raid-devices=2 missing,/dev/sdb1


The RAID was created correctly; I then partitioned md_d0 to match sda's layout.
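
(For reference, one way to copy a partition layout from one block device to
another - assuming sfdisk is available and the target is at least as large as
the source - is:

    sfdisk -d /dev/sda | sfdisk /dev/md_d0

The tables below show the result.)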



Disk /dev/sda: 400.0 GB, 400088457216 bytes
2 heads, 4 sectors/track, 97677846 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

  Device Boot  Start End  Blocks   Id  System
/dev/sda1   *   125009998   83  Linux
/dev/sda2250195677734   282710936   83  Linux
/dev/sda39567773596177734 200   83  Linux
/dev/sda49617773597677824 6000360   82  Linux swap / Solaris


Disk /dev/md_d0: 400.0 GB, 400088444928 bytes
2 heads, 4 sectors/track, 97677843 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

 Device Boot  Start End  Blocks   Id  System
/dev/md_d0p1   *   125009998   83  Linux
/dev/md_d0p2250195677734   282710936   83  Linux
/dev/md_d0p39567773596177734 200   83  Linux
/dev/md_d0p49617773597677843 6000436   82  Linux swap / Solaris



Here are my lilo.conf settings; I'm attempting to get the kernel to start
md_d0 at boot. I realize that autodetection (partition type FD) no longer
works with a version-1 superblock (an mdadm-based alternative is sketched
after the config below).


#
# /etc/lilo.conf: lilo(8) configuration, see lilo.conf(5)
#

lba32
install=text
boot=/dev/sda
map=/boot/System.map
image=/vmlinuz
   label=CRUX
   root=/dev/sda1
   read-only
   append="quiet md=d0,/dev/sdb1"

# End of file
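
As an alternative to the md= kernel parameter, mdadm itself could assemble the
array early in boot from /etc/mdadm.conf, for example (the UUID is the one
reported by mdadm -E below; auto=part asks mdadm to create the partitionable
device nodes):

    DEVICE /dev/sdb1
    ARRAY /dev/md_d0 auto=part uuid=0d7a60d0:af2843fe:cde2a4dc:207bbd63

An early init script (or an initramfs) running "mdadm -As" would then bring
up /dev/md_d0 before anything tries to mount it.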


Without fail, every time the system boots -- I get the following message:

md: invalid raid superblock magic on sdb1

The odd thing is; if I login -- I can execute the following:

-([EMAIL PROTECTED])-(~)- # mdadm -A /dev/md_d0 /dev/sdb1
mdadm: /dev/md_d0 has been started with 1 drive (out of 2).
-([EMAIL PROTECTED])-(~)- # mdadm -E /dev/sdb1
/dev/sdb1:
 Magic : a92b4efc
   Version : 01.00
Array UUID : 0d7a60d0:af2843fe:cde2a4dc:207bbd63
  Name : CRUX x64
 Creation Time : Sat Nov 26 03:24:23 2005
Raid Level : raid1
  Raid Devices : 2

   Device Size : 781422744 (372.61 GiB 400.09 GB)
  Super Offset : 781422744 sectors
 State : clean
   Device UUID : 86be5e9e:b7c740ab:46bfb508:090a0e42
   Update Time : Sat Nov 26 04:03:43 2005
  Checksum : ebed5cb5 - correct
Events : 168


  Array State : _U 1 failed
-([EMAIL PROTECTED])-(~)- # mdadm -Q /dev/sdb1
/dev/sdb1: is not an md array
/dev/sdb1: device 1 in 2 device unknown raid1 array.  Use mdadm --examine for more detail.



I'm really confused; mdadm seems to recognize it and load it fine... why 
can't I get the kernel to load it?


-- David M. Strang 




Re: RAID0 performance question

2005-11-26 Thread Lajber Zoltan
On Sat, 26 Nov 2005, JaniD++ wrote:

 Hello, Raz,

 I think this is not a CPU usage problem. :-)
 The system is divided into 4 cpusets, and each cpuset uses only one disk node
 (CPU0-nb0, CPU1-nb1, ...).

Seems to be a CPU problem. Which kind of NIC do you have?

 CPU2 states:   2.0% user  74.0% system   0.0% nice   3.0% iowait  18.0% idle
 CPU3 states:  10.0% user  57.0% system   0.0% nice   5.0% iowait  26.0% idle
Do you have 4 CPUs, or 2 HT CPUs?

Bye,
-=Lajbi=
 LAJBER Zoltan   Szent Istvan Egyetem,  Informatika Hivatal
 Most of the time, if you think you are in trouble, crank that throttle!


Re: Booting from raid1 -- md: invalid raid superblock magic on sdb1

2005-11-26 Thread David M. Strang

On Saturday 26 November 2005 12:16:14, Guillaume Filion wrote:

On 05-11-26, at 11:18, David M. Strang wrote:
 md: invalid raid superblock magic on sdb1

Did you include the md and raid1 modules in mkinitrd.conf?

I'll admit that while I have my root on a raid1 device, I'm a bit confused 
by the whole process... :-/




Well, actually... I don't use an initrd. All my drivers are compiled straight
into the kernel.
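
(A quick way to double-check that, assuming the kernel's .config is still in
the usual place, is:

    grep -E 'CONFIG_BLK_DEV_MD|CONFIG_MD_RAID1' /usr/src/linux/.config

which should show CONFIG_BLK_DEV_MD=y and CONFIG_MD_RAID1=y rather than =m.)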



I found this document particularly helpful:
http://xtronics.com/reference/SATA-RAID-Debian.htm



Good document, I've looked at that one a bit too. It seems that the md.txt
from the kernel Documentation is perhaps what is leading me astray.


-- David M. Strang 




Re: Booting from raid1 -- md: invalid raid superblock magic on sdb1

2005-11-26 Thread David M. Strang

On Saturday 26 November 2005 11:14:46, David M. Strang wrote:

sdb1 is a 400GB partition -- it is type FD.

Disk /dev/sdb: 400.0 GB, 400088457216 bytes
2 heads, 4 sectors/track, 97677846 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

   Device Boot  Start End  Blocks   Id  System
/dev/sdb1   197677846   390711382   fd  Linux raid autodetect




OK, I've dumped the autodetect raid type. It seems it's pointless on v1.0 
superblocks -- it was just generating an 'extra' error.


Disk /dev/sdb: 400.0 GB, 400088457216 bytes
2 heads, 4 sectors/track, 97677846 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

  Device Boot  Start End  Blocks   Id  System
/dev/sdb1   197677846   390711382   83  Linux


Here are my lilo.conf settings; I'm attempting to get the kernel to start
md_d0 at boot. I realize that autodetection (partition type FD) no longer
works with a version-1 superblock.


#
# /etc/lilo.conf: lilo(8) configuration, see lilo.conf(5)
#

lba32
install=text
boot=/dev/sda
map=/boot/System.map
image=/vmlinuz
label=CRUX
root=/dev/sda1
read-only
append="quiet md=d0,/dev/sdb1"

# End of file


Without fail, every time the system boots -- I get the following message:

md: invalid raid superblock magic on sdb1



This still persists. Are there some patches to the kernel or something that 
I need for it to recognize v1.0 superblocks? I'm running Linux v2.6.14.3


As I mentioned before, mdadm can start the array w/o any issues at all...

-- David M. Strang 




Re: RAID0 performance question

2005-11-26 Thread Lajber Zoltan
Hi,

If you don't speak Hungarian, skip this sentence:

Do you speak Hungarian? If so, we can continue that way too.


On Sat, 26 Nov 2005, JaniD++ wrote:

 Intel Xeon motherboard, 2x Intel e1000 (64-bit).
 But as I already wrote, if I take the RAID out of the picture and start the
 4 cats at the same time, the traffic rises to 780-800 Mbit! :-)

 This is not a hardware-related problem.
 Only a tuning or misconfiguration problem - I think...

What is in /proc/interrupts? Are the interrupts distributed over the CPUs, or
do all IRQs go to one CPU? What about switching off HT?
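
For example, the distribution can be checked and a NIC's interrupt pinned to
one CPU like this (the IRQ numbers here are only placeholders - take the real
ones from /proc/interrupts; the smp_affinity value is a hex CPU bitmask):

    grep eth /proc/interrupts

    # pin eth0 (say IRQ 48) to CPU0 and eth1 (say IRQ 49) to CPU1
    echo 1 > /proc/irq/48/smp_affinity
    echo 2 > /proc/irq/49/smp_affinity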

Bye,
-=Lajbi=
 LAJBER Zoltan   Szent Istvan Egyetem,  Informatika Hivatal
 Most of the time, if you think you are in trouble, crank that throttle!


Re: Booting from raid1 -- md: invalid raid superblock magic on sdb1

2005-11-26 Thread David M. Strang

On Saturday 26 November 2005 14:03:35, David M. Strang wrote:

[previous messages quoted in full - snipped; see above]


Please forgive me for pinging you directly on this, Neil; but I fear you are
the only one who can answer it.


I am a bit concerned by this in the startup log:

Nov 26 21:47:35 xenogenesis kernel: md: raid1 personality registered as nr 3
Nov 26 21:47:35 xenogenesis kernel: md: md driver 0.90.2 MAX_MD_DEVS=256, 
MD_SB_DISKS=27

Nov 26 21:47:35 xenogenesis kernel: md: bitmap version 3.39

While mdadm lets me start a v1.0-superblock array, I fear that I am missing
some kernel patch.


Nov 26 21:47:35 xenogenesis kernel: md: Loading md_d0: /dev/sdb1
Nov 26 21:47:35 xenogenesis kernel: md: invalid raid superblock magic on 
sdb1

Nov 26 21:47:35 xenogenesis kernel: md: sdb1 has invalid sb, not importing!
Nov 26 21:47:35 xenogenesis kernel: md: md_import_device returned -22

-([EMAIL PROTECTED])-(/usr/src/linux-2.6.14.3/drivers/md)- # mdadm -A /dev/md_d0 /dev/sdb1

mdadm: /dev/md_d0 has been started with 1 drive (out of 2).
-([EMAIL PROTECTED])-(/usr/src/linux-2.6.14.3/drivers/md)- # mdadm --detail /dev/md_d0

/dev/md_d0:
   Version : 01.00.02
 Creation Time : Sat Nov 26 10:20:11 2005
Raid Level : raid1
Array Size : 390711372 (372.61 GiB 400.09 GB)

I've searched the linux-raid archive, but all the patches I see that might be
relevant have long been applied to the tree.
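
If the in-kernel md= autostart really does only understand 0.90 superblocks,
one workaround would be to assemble the array from a small initramfs with
mdadm before the real root is mounted. A minimal sketch, under that
assumption (UUID and device names are taken from the messages above; the
mount point and the switch_root handover are assumptions about the initramfs
tooling):

    #!/bin/sh
    # /init inside a minimal initramfs
    mount -t proc none /proc
    mount -t sysfs none /sys

    # assemble the version-1 array by UUID instead of relying on autodetect
    mdadm --assemble /dev/md_d0 \
          --uuid=0d7a60d0:af2843fe:cde2a4dc:207bbd63 /dev/sdb1

    # mount the first partition of the array as the new root and hand over
    mount -o ro /dev/md_d0p1 /newroot
    exec switch_root /newroot /sbin/init    # busybox-style switch_root assumed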


-- David M. Strang 




mdadm HOWTO

2005-11-26 Thread Mike Smith
Hi,

I've been using mdadm for over a year now and really like the utility.

Can someone point me to a HOWTO-type document that shows how to do various
tasks, like recovering a metadevice? I suspect my situation is not unique and
would probably be covered.

I have a motherboard with 3 SATA RAID controllers. I have two drives on the
first that I use for the OS, plus a mirror of the OS using mirrordir.

I have two 250GB drives on the second SATA controller and two on the third.
I have put these four drives in a RAID5 metadevice (/dev/md0) using mdadm.
I then use LVM to slice/dice md0 into the various filesystems I need.

I've run this way for over a year and now want to upgrade my OS. I have been
running Mandrake 10.1, and want to move to Mandriva 2006.

My first problem is that my devices are renumbered with the new OS install.

                  Mandrake    Mandriva
1st Controller    sda         sda
                  sdc         sdb
2nd Controller    sdb         sdc
                  sdd         sdd
3rd Controller    sde         sde
                  sdf         sdf

Originally, I created the RAID5 device using sdb, sdd, sde, and sdf.
Under Mandriva, I was able to use mdadm to assemble the RAID5 device, but
when trying to mount the filesystems (e.g. /dev/vg1/video) it said they did
not exist.

I can reboot under my original Mandrake 10.1 install and everything is still
there. So, how do I recover these filesystems under the new OS?
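
The usual recovery path would be something like the following sketch (the
array and volume-group names are taken from this message; the mount point is
just an example) - the md superblock UUIDs identify the member disks, so the
new device names don't matter:

    # find the RAID members by their superblocks and record the array
    echo 'DEVICE partitions'  > /etc/mdadm.conf
    mdadm --examine --scan   >> /etc/mdadm.conf

    # assemble the array, then let LVM rediscover and activate the VG on it
    mdadm --assemble --scan
    vgscan
    vgchange -ay vg1

    mount /dev/vg1/video /mnt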

Please reply to [EMAIL PROTECTED] as I'm NOT on this alias.

Thanks,

Mike




