Re: 4 disks in raid 5: 33MB/s read performance?

2006-05-25 Thread Dexter Filmore
 On Monday May 22, [EMAIL PROTECTED] wrote:
  I just dd'ed a 700MB iso to /dev/null, dd returned 33MB/s.
  Isn't that a little slow?
  System is a sil3114 4 port sata 1 controller with 4 samsung spinpoint
  250GB, 8MB cache in raid 5 on a Athlon XP 2000+/512MB.

 Yes, read on raid5 isn't as fast as we might like at the moment.

 It looks like you are getting about 11MB/s off each disk, which is
 probably quite a bit slower than they can manage (what is the
 single-drive read speed you get dd'ing from /dev/sda or whatever?).

 You could try playing with the readahead number (blockdev --setra/--getra).
 I'm beginning to think that the default setting is a little low.

Changed from 384 to 1024, no improvement.
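
(For reference, the readahead knob works like this; a runnable sketch, with
/dev/md0 standing in for the array from this thread. The value is in
512-byte sectors, which is easy to misread as KiB.)

```shell
# Readahead is specified in 512-byte sectors, so 1024 sectors = 512 KiB.
ra_sectors=1024
echo "readahead = $(( ra_sectors * 512 / 1024 )) KiB"
# blockdev --getra /dev/md0              # query the current value
# blockdev --setra $ra_sectors /dev/md0  # set it; takes effect immediately
```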


 You could also try increasing the stripe-cache size by writing numbers
 to
/sys/block/mdX/md/stripe_cache_size

Actually, there's no directory /sys/block/md0/md/ here. Can I find that in 
proc somewhere? And what are sane numbers for this setting?
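
(A sketch of the tunable for kernels that do expose it; I'm assuming the
units are pages per member device, so check the memory cost before raising
it far.)

```shell
# stripe_cache_size counts pages (4 KiB on x86) per member device, so the
# memory footprint is roughly size * 4 KiB * number_of_disks.
size=4096; disks=4
echo "approx cache memory: $(( size * 4 * disks / 1024 )) MiB"
# echo $size > /sys/block/md0/md/stripe_cache_size   # only if the file exists
```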

 I wonder if your SATA controller is causing you grief.
 Could you try
dd if=/dev/SOMEDISK of=/dev/null bs=1024k count=1024
 and then do the same again on all devices in parallel
 e.g.
dd if=/dev/SOMEDISK of=/dev/null bs=1024k count=1024 
dd if=/dev/SOMEOTHERDISK of=/dev/null bs=1024k count=1024 
...
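
(The parallel run can be scripted with & and wait; this sketch reads from
/dev/zero so it is safe to run anywhere — substitute the real /dev/sd*
devices to actually exercise the controller.)

```shell
# Launch one dd per device in the background, then wait for all of them.
for dev in /dev/zero /dev/zero /dev/zero /dev/zero; do
  dd if=$dev of=/dev/null bs=1024k count=64 2>/dev/null &
done
wait
echo "all readers finished"
```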

 4112 pts/0    R      0:00 dd if=/dev/sda of=/dev/null bs=1024k count=1024
 4113 pts/0    R      0:00 dd if=/dev/sdb of=/dev/null bs=1024k count=1024
 4114 pts/0    R      0:00 dd if=/dev/sdc of=/dev/null bs=1024k count=1024
 4115 pts/0    R      0:00 dd if=/dev/sdd of=/dev/null bs=1024k count=1024
 4116 pts/0    R+     0:00 ps ax

1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 34.5576 seconds, 31.1 MB/s
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 36.073 seconds, 29.8 MB/s
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 40.5109 seconds, 26.5 MB/s
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 40.5054 seconds, 26.5 MB/s

(dd output translated from German)

A single disk pumps out 65-70MB/s. Since they are on a 32-bit PCI controller, 
the combined speed when reading from all four disks at once pretty much maxes 
out the 133MB/s PCI limit. (I'm surprised it comes so close. That controller 
works pretty well for 18 bucks.)
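
(Summing the four parallel results from above against the bus ceiling makes
the bottleneck plain:)

```shell
# Aggregate of the four parallel dd runs vs. the 32-bit/33MHz PCI ceiling.
awk 'BEGIN { printf "aggregate: %.1f of 133 MB/s theoretical\n", 31.1+29.8+26.5+26.5 }'
```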

Dex


-- 
-BEGIN GEEK CODE BLOCK-
Version: 3.12
GCS d--(+)@ s-:+ a- C+++() UL+ P+++ L+++ E-- W++ N o? K-
w--(---) !O M+ V- PS++(+) PE(-) Y++ PGP t++(---)@ 5 X+(++) R+(++) tv--(+)@ 
b++(+++) DI+++ D G++ e* h++ r%* y?
--END GEEK CODE BLOCK--

http://www.stop1984.com
http://www.againsttcpa.com
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


mdadm and 2.4 kernel?

2006-05-25 Thread Ste

Hi, for various reasons I'll need to run mdadm on a 2.4 kernel.
Right now I have a 2.4.32 kernel.

Take a look:

[EMAIL PROTECTED]:~# mdadm --create --verbose /dev/md0 --level=1 
--bitmap=/root/md0bitmap -n 2 /dev/nda /dev/ndb --force --assume-clean

mdadm: /dev/nda appears to be part of a raid array:
   level=raid1 devices=2 ctime=Thu May 25 20:10:47 2006
mdadm: /dev/ndb appears to be part of a raid array:
   level=raid1 devices=2 ctime=Thu May 25 20:10:47 2006
mdadm: size set to 39118144K
Continue creating array? y
mdadm: Warning - bitmaps created on this kernel are not portable
 between different architectured.  Consider upgrading the Linux kernel.
mdadm: Cannot set bitmap file for /dev/md0: No such device


[EMAIL PROTECTED]:~# mdadm --create --verbose /dev/md0 --level=1  -n 2 /dev/nda 
/dev/ndb --force --assume-clean

mdadm: /dev/nda appears to be part of a raid array:
   level=raid1 devices=2 ctime=Thu May 25 20:10:47 2006
mdadm: /dev/ndb appears to be part of a raid array:
   level=raid1 devices=2 ctime=Thu May 25 20:10:47 2006
mdadm: size set to 39118144K
Continue creating array? y
mdadm: SET_ARRAY_INFO failed for /dev/md0: File exists
[EMAIL PROTECTED]:~# 

Obviously the devices /dev/nda and /dev/ndb exist (I can run fdisk 
on them).


Can someone help me?
Thanks.
Stefano.

   






Re: mdadm: bitmap size

2006-05-25 Thread Ste

Ste wrote:

raidhotadd doesn't know anything about bitmaps.
If you use 'mdadm /dev/mda --add /dev/nda' you should find that it
works better.

I recommend getting rid of setfaulty / raidhotadd /raidhotremove etc
and just using mdadm.

Okay, now it works really fine. You could write somewhere that mdadm 
and the standard raidtools shouldn't be mixed.


Thanks a lot!
Bye! :-)


Re: RAID5 kicks non-fresh drives

2006-05-25 Thread Neil Brown
On Thursday May 25, [EMAIL PROTECTED] wrote:
 
 From dmesg
 md: Autodetecting RAID arrays.
 md: autorun ...
 md: considering sdl1 ...
 md:  adding sdl1 ...
 md:  adding sdi1 ...
 md:  adding sdh1 ...
 md:  adding sdg1 ...
 md:  adding sdf1 ...
 md:  adding sde1 ...
 md:  adding sdd1 ...
 md:  adding sdc1 ...
 md:  adding sdb1 ...
 md:  adding sda1 ...
 md:  adding hdc1 ...
 md: created md0
 
 The kernel didn't add sdj or sdk.
 

And the partition types of sdj1 and sdk1 are ???

NeilBrown


Re: RAID5 kicks non-fresh drives

2006-05-25 Thread Craig Hollabaugh
Neil,

sdj and sdk are FS type 'Linux';
all the other partitions are FS type 'Linux raid autodetect'.

I don't remember ever having to set the partition type. It's not in my
build notes for this machine or for another 13-drive server. 

Should I change partition type to 'Linux raid autodetect'? 

If so, how can I verify array configuration prior to rebooting? 

Thanks for the reply Neil. I never checked that.
Craig
P.S. My new drives are certainly getting a workout through this learning
process. 




On Fri, 2006-05-26 at 07:18 +1000, Neil Brown wrote:
 On Thursday May 25, [EMAIL PROTECTED] wrote:
  
  From dmesg
  md: Autodetecting RAID arrays.
  md: autorun ...
  md: considering sdl1 ...
  md:  adding sdl1 ...
  md:  adding sdi1 ...
  md:  adding sdh1 ...
  md:  adding sdg1 ...
  md:  adding sdf1 ...
  md:  adding sde1 ...
  md:  adding sdd1 ...
  md:  adding sdc1 ...
  md:  adding sdb1 ...
  md:  adding sda1 ...
  md:  adding hdc1 ...
  md: created md0
  
  The kernel didn't add sdj or sdk.
  
 
 And the partition types of sdj1 and sdk1 are ???
 
 NeilBrown
 
-- 

Dr. Craig Hollabaugh, [EMAIL PROTECTED], 970 240 0509
Author of Embedded Linux: Hardware, Software and Interfacing
www.embeddedlinuxinterfacing.com



Re: RAID5 kicks non-fresh drives

2006-05-25 Thread Craig Hollabaugh
On Fri, 2006-05-26 at 07:18 +1000, Neil Brown wrote:
 And the partition types of sdj1 and sdk1 are ???

Neil,

That did it! I set the partition FS Types from 'Linux' to 'Linux raid
autodetect' after my last re-sync completed. Manually stopped and
started the array. Things looked good, so I crossed my fingers and
rebooted. The kernel found all the drives and all is happy here in
Colorado.
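
(The sequence described above can be sketched as follows; the sfdisk syntax
is the old util-linux form and the commands are illustrative, not a recipe —
verify against your own tools before touching a live array.)

```shell
# 'Linux raid autodetect' is MBR partition type 0xfd; the kernel's
# autodetection at boot only considers partitions of that type.
printf 'raid autodetect type id: %x\n' 253
# Sketch of the sequence (hypothetical device names from this thread):
# sfdisk --id /dev/sdj 1 fd      # change sdj1's partition type to 0xfd
# mdadm --stop /dev/md0          # stop the array manually
# mdadm --assemble /dev/md0 ...  # reassemble and check before rebooting
```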

Thanks ever so much for your comment!!

Craig


After the reboot

[EMAIL PROTECTED]: mdadm -D /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Thu Jan 16 09:10:52 2003
     Raid Level : raid5
     Array Size : 1289056384 (1229.34 GiB 1319.99 GB)
    Device Size : 117186944 (111.76 GiB 120.00 GB)
   Raid Devices : 12
  Total Devices : 13
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu May 25 16:21:28 2006
          State : clean
 Active Devices : 12
Working Devices : 13
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 128K

           UUID : 4d862825:91140f1a:eb97e7f2:9bfa2403
         Events : 0.2684360

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8      129        3      active sync   /dev/sdi1
       4       8       65        4      active sync   /dev/sde1
       5       8       81        5      active sync   /dev/sdf1
       6       8       97        6      active sync   /dev/sdg1
       7       8      113        7      active sync   /dev/sdh1
       8       8       49        8      active sync   /dev/sdd1
       9       8      161        9      active sync   /dev/sdk1
      10      22        1       10      active sync   /dev/hdc1
      11       8      177       11      active sync   /dev/sdl1

      12       8      145        -      spare         /dev/sdj1







Re: problems with raid=noautodetect - solved

2006-05-25 Thread Nix
On 24 May 2006, Florian Dazinger uttered the following:
 Neil Brown wrote:
 Presumably you have a 'DEVICE' line in mdadm.conf too?  What is it.
 My first guess is that it isn't listing /dev/sdd? somehow.
 Otherwise, can you add a '-v' to the mdadm command that assembles the
 array, and capture the output.  That might be helpful.
 NeilBrown

 stupid me! I had a DEVICE section, but somehow forgot about my /dev/sdd drive.

`DEVICE partitions' is generally preferable for that reason, unless
you have entries in /proc/partitions which you explicitly want to
exclude from scanning for RAID superblocks.
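
(A minimal mdadm.conf along those lines might look like this sketch; the
ARRAY line's UUID would come from your own `mdadm --detail` output.)

```
# /etc/mdadm.conf sketch: scan every block device the kernel knows about,
# instead of maintaining an explicit device list that can go stale.
DEVICE partitions
# ARRAY lines then identify arrays by UUID rather than by member names:
# ARRAY /dev/md0 UUID=<uuid from 'mdadm --detail /dev/md0'>
```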

-- 
`On a scale of 1-10, X's brokenness rating is 1.1, but that's only
 because bringing Windows into the picture rescaled brokenness by
 a factor of 10.' --- Peter da Silva


Re: Does software RAID take advantage of SMP, or 64 bit CPU(s)?

2006-05-25 Thread Nix
On 23 May 2006, Neil Brown noted:
 On Monday May 22, [EMAIL PROTECTED] wrote:
 A few simple questions about the 2.6.16+ kernel and software RAID.
 Does software RAID in the 2.6.16 kernel take advantage of SMP?
 
 Not exactly.  RAID5/6 tends to use just one cpu for parity
 calculations, but that frees up other cpus for doing other important
 work.

To expand on this, that depends on how many RAID arrays you've got,
since there's one parity-computation daemon per array.

If you have several arrays and are writing to them at the same time,
or several arrays and some are degraded, then several md*_raid*
daemons might be working at once.

But that's not very likely, I'd guess. (I have multiple RAID-5 arrays,
but that's only because I'm trying to get useful RAIDing on multiple
disks of drastically different size.)
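
(One way to see this for yourself; a sketch that assumes the usual md thread
naming convention, e.g. md0_raid5 for the first RAID-5 array:)

```shell
# List md parity threads; each running array contributes one.
# Falls back to a message on machines with no arrays assembled.
ps ax -o comm= | grep '^md[0-9]*_raid' || echo "no md raid threads"
```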

-- 
`On a scale of 1-10, X's brokenness rating is 1.1, but that's only
 because bringing Windows into the picture rescaled brokenness by
 a factor of 10.' --- Peter da Silva