my io testing scripts

2008-02-11 Thread Keld Jørn Simonsen
Here are my testing scripts used in the performance howto:
http://linux-raid.osdl.org/index.php/Home_grown_testing_methods

=Hard disk performance scripts=
Here are the scripts that I used for my performance measuring. Use at your own 
risk.
They destroy the contents of the partitions involved. The /dev/md raid needs to
be stopped before initiating the test.

Copyright Keld Simonsen, [EMAIL PROTECTED] 2008. Licensed under the GPL.

iotest:

   #!/bin/sh
   # the mdadm command and the disk list must each be passed as one (quoted) argument, e.g.:
   # iotest "mdadm -R -C /dev/md1 --chunk=256 -l 10 -n 2 -p f2" /dev/md1 /mnt/md1 ext3 "/dev/hdb5 /dev/hdd5"
   echo \n $1 $5 \n >> /tmp/results
   echo $1 $5
   $1 $5                  # create the raid ($1 = mdadm command, $5 = member disks)
   mkfs -t $4 $2          # $4 = file system type, $2 = raid device
   mkdir $3               # $3 = mount point
   mount $2 $3
   cd $3
   echo \nmakefiles\n >> /tmp/results
   mkfiles 200
   echo \n remakefiles \n >> /tmp/results
   mkfiles 200
   echo \n catall \n >> /tmp/results
   cat * > /dev/null
   echo \n catnull \n >> /tmp/results
   catnull
   cd
   umount $2
   mdadm -S $2            # stop the raid
   echo \n finish  $1   $5 \n >> /tmp/results

Be careful with this script, and remember to adapt it for the ordinary (non-RAID) test,
where only one partition is involved and the mdadm steps do not apply.
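To make that concrete, here is a minimal sketch of a stripped-down single-partition
variant (the name iotest-plain and its argument order are my own illustration, not part
of the original scripts):

   #!/bin/sh
   # iotest-plain: hypothetical variant for a plain partition, no mdadm involved
   # usage: iotest-plain /dev/sda2 /mnt/sda2 ext3
   echo \n plain $1 \n >> /tmp/results
   mkfs -t $3 $1
   mkdir -p $2
   mount $1 $2
   cd $2
   echo \n makefiles \n >> /tmp/results
   mkfiles 200
   echo \n remakefiles \n >> /tmp/results
   mkfiles 200
   echo \n catall \n >> /tmp/results
   cat * > /dev/null
   echo \n catnull \n >> /tmp/results
   catnull
   cd
   umount $1
   echo \n finish $1 \n >> /tmp/results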
   
iorun:
   #!/bin/sh
   DISKS="/dev/sda2 /dev/sdb2"
   iostat -k 10 >> /tmp/results &    # log io statistics every 10 seconds in the background
   iotest /dev/sda2 /mnt/sda2 ext3   # ordinary partition: iotest must be adapted for this case (see note above)
   iotest "mdadm -C /dev/md1 --chunk=256 -R -l  0 -n 2" /dev/md1 /mnt/md1 ext3 "$DISKS"
   iotest "mdadm -C /dev/md1 --chunk=256 -R -l  1 -n 2" /dev/md1 /mnt/md1 ext3 "$DISKS"
   iotest "mdadm -C /dev/md1 --chunk=256 -R -l 10 -n 2" /dev/md1 /mnt/md1 ext3 "$DISKS"
   iotest "mdadm -C /dev/md1 --chunk=256 -R -l 10 -n 2 -p f2" /dev/md1 /mnt/md1 ext3 "$DISKS"
   # iotest "mdadm -C /dev/md1 --chunk=256 -R -l 10 -n 2 -p o2" /dev/md1 /mnt/md1 ext3 "$DISKS"

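As a usage illustration only (assuming the scripts are executable, mkfiles and catnull
are on the PATH, and everything is run as root):

   ./iorun &              # start the whole test series in the background
   tail -f /tmp/results   # follow the iostat figures and the test markers as they are appended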
mkfiles:
   #!/bin/bash
   # write a series of 40 MB files named 1, 2, 3, ... (two passes over the same names)
   for (( i = 1; i < $1 ; i++ )) ; do dd if=/dev/hda1 of=$i bs=1MB count=40 ; done
   for (( i = 1; i < $1 ; i++ )) ; do dd if=/dev/hda1 of=$i bs=1MB count=40 ; done

catnull:
   #!/bin/tcsh
   # read every file concurrently, then wait for all the background cats to finish
   foreach i ( * )
      cat $i > /dev/null &
   end
   wait


howto on performance

2008-02-11 Thread Keld Jørn Simonsen
I have put up a new howto text on performance:
http://linux-raid.osdl.org/index.php/Performance#Performance_of_raids_with_2_disks

Enjoy!
Keld

=Performance of raids with 2 disks=

I have done some performance testing of different types of RAID,
with 2 disks involved. I have used my own home-grown testing methods,
which are quite simple, to test sequential and random reading and writing of
200 files of 40 MB each. The tests were meant to show what performance I could
get out of a system mostly oriented towards file serving, such as a mirror site.

My configuration was

1800 MHz AMD Sempron(tm) Processor 3100+
1500 MB RAM
2 x  Hitachi Ultrastar SCSI-II 1 TB.
Linux version 2.6.12-26mdk

Figures are in MB/s, and the file system was ext3. Throughput was measured with
iostat, and an estimate for steady performance was taken. The figures varied quite
a lot over the different 10-second intervals; for example, the 155 MB/s estimate
ranged from 135 MB/s to 163 MB/s. I then took the average over the period when
a test was running at full scale (all processes started, and none stopped).
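As an example of how such an average can be pulled out of the iostat log (a sketch only;
it assumes the array shows up as md1 in the log, and that the lines for the steady-state
interval have first been cut out into steady.txt):

   # $3 = kB_read/s and $4 = kB_wrtn/s in the device lines of "iostat -k"
   grep '^md1' steady.txt | \
      awk '{ r += $3; w += $4; n++ } END { if (n) printf "read %.0f MB/s, write %.0f MB/s\n", r/n/1024, w/n/1024 }'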

RAID type        sequential read   random read   sequential write   random write
Ordinary disk                 82            34                 67             56
RAID0                        155            80                 97             80
RAID1                         80            35                 72             55
RAID10                        79            56                 69             48
RAID10,f2                    150            79                 70             55

Random reads for RAID1 and RAID10 were quite unbalanced, with almost all of the
reads coming from only one of the disks.

The results are quite as expected:

RAID0 and RAID10,f2 read at double the speed of an ordinary file system for
sequential reads (155 vs 82), and at more than double for random reads (80 vs 35).

Writes (both sequential and random) are roughly the same for ordinary disk, 
RAID1, RAID10 and
RAID10,f2, around 70 MB/s for sequential, and 55 MB/s for random.

Sequential reads are about the same (80 MB/s) for ordinary partition, RAID1 and 
RAID10.

Random reads for the ordinary partition and RAID1 are about the same (35 MB/s), and
about 50 % higher for RAID10. I am puzzled why RAID10 is faster than RAID1 here.

All in all RAID10,f2 is the fastest mirrored RAID for both sequential and 
random reading for this test,
while it is about equal with the other mirrored RAIDs when writing.

My kernel did not allow me to test RAID10,o2 as this is only supported from 
kernel 2.6.18.


Re: [PATCH] Use new sb type

2008-02-11 Thread Bill Davidsen

David Greaves wrote:
> Jan Engelhardt wrote:
>> Feel free to argue that the manpage is clear on this - but as we know, not
>> everyone reads the manpages in depth...
>>
>> That is indeed suboptimal (but I would not care since I know the
>> implications of an SB at the front)
>
> Neil cares even less and probably doesn't even need mdadm - heck he probably
> just echos the raw superblock into place via dd...
>
> http://xkcd.com/378/

I don't know why this makes me think of APL...

--
Bill Davidsen [EMAIL PROTECTED]
 Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over... Otto von Bismark 





transferring RAID-1 drives via sneakernet

2008-02-11 Thread Jeff Breidenbach
I'm planning to take some RAID-1 drives out of an old machine
and plop them into a new machine. Hoping that mdadm assemble
will magically work. There's no reason it shouldn't work. Right?

old [ mdadm v1.9.0 / kernel 2.6.17 / Debian Etch / x86-64 ]
new [ mdadm v2.6.2 / kernel 2.6.22 / Ubuntu 7.10 server ]
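For reference, the assemble on the new machine would typically look something like this
(a sketch only; the member devices are placeholders for whatever names the drives get on
the new box):

   mdadm --examine /dev/sdb1 /dev/sdc1             # check that both superblocks and the UUID survived the move
   mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1   # assemble explicitly, or ...
   mdadm --assemble --scan                         # ... let mdadm find the members by UUID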


Re: 3ware and erroneous multipathing - duplicate serial numbers (was: 3ware and dmraid)

2008-02-11 Thread Heinz Mauelshagen
On Fri, Jan 18, 2008 at 11:54:29AM -0800, Ask Bjørn Hansen wrote:

 On Jan 18, 2008, at 4:33 AM, Heinz Mauelshagen wrote:

 Much later I figured out that dmraid -b reported two of the disks as
 being the same:

 Looks like the md sync duplicated the metadata and dmraid just spots
 that duplication. You gotta remove one of the duplicates to clean this up
 but check first which to pick in case the sync was partial only.


 The event counter is the same on both; is that what I should look for?

 Is there a way to reset the dmraid metadata?   I'm not actually using 
 dmraid, I use regular software raid so I think I just need to reset the 
 dmraid data...

dmraid -rD ...

Heinz
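
On the event-counter question, one quick way to compare the md superblocks on the two
suspect disks (illustrative only; substitute the real device names):

   mdadm --examine /dev/sdX | grep -E 'Events|Update Time'
   mdadm --examine /dev/sdY | grep -E 'Events|Update Time'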



  - ask

 -- 
 http://develooper.com/ - http://askask.com/



=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

Heinz Mauelshagen Red Hat GmbH
Consulting Development Engineer   Am Sonnenhang 11
Storage Development   56242 Marienrachdorf
  Germany
[EMAIL PROTECTED]PHONE +49  171 7803392
  FAX   +49 2626 924446
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-


Re: mdadm 2.6 creates slow RAID 5 while mdadm 2.5.6 rocks

2008-02-11 Thread Hubert Verstraete

[EMAIL PROTECTED] wrote:

Quoting Hubert Verstraete:


Hi All,

My RAID 5 array is running slow.
I've made a lot of tests to find out where this issue lies.
I've come to the conclusion that once the array is created with mdadm
2.6.x (up to 2.6.4), whatever kernel you run and whatever mdadm version
you use to re-assemble the array, the array's performance is severely
degraded.

Would this be a bug in mdadm 2.6 ?
Are you seeing this issue too ?

I may have seen this before too.
What happens if you don't make an array that is partitionable?
Just create a /dev/mdX device, or, if you must use a partitionable array,
what happens to your benchmarks on the 2nd partition of your array?
Say, /dev/md_d0p2 ?
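
For reference, the two variants being compared are created roughly like this (a sketch;
the level, member count and disk names are placeholders, not the actual setup):

   mdadm -C /dev/md0 -l 5 -n 3 /dev/sda2 /dev/sdb2 /dev/sdc2              # plain md device
   mdadm -C /dev/md_d0 --auto=mdp -l 5 -n 3 /dev/sda2 /dev/sdb2 /dev/sdc2 # partitionable, carries /dev/md_d0p1, p2, ...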

My symptoms were similar, in that any partitionable RAID 5 array would be
slower, but only on the first partition.

mdadm version 2.5.6
kernel 2.6.18

Mike

Thanks for the idea.
I've tried with a non-partitionable array and with a 2nd partition and
got the same damn slow result on write performance :(


I'm appending the two new tests to the bonnie results:
2.6.18.8_mdadm_2.5.6,4G,,,38656,5,24171,6,,,182130,26,518.9,1,16,1033,3,+,+++,861,2,1224,3,+,+++,806,3
2.6.18.8_mdadm_2.6.4,4G,,,19191,2,15845,4,,,164907,26,491.9,1,16,697,2,+,+++,546,1,710,2,+,+++,465,2
2.6.22.6_mdadm_2.5.6,4G,,,49108,8,29441,7,,,174038,21,455.5,1,16,1351,4,+,+++,1073,3,1416,5,+,+++,696,4
2.6.22.6_mdadm_2.6.4,4G,,,18010,3,16763,4,,,185106,24,421.6,1,16,928,6,+,+++,659,3,871,7,+,+++,699,3
2.6.24-git17_mdadm_2.5.6,4G,,,126319,24,34342,4,,,79924,0,180.8,0,16,1566,5,+,+++,1459,3,1800,4,+,+++,1123,2
2.6.24-git17_mdadm_2.6.4,4G,,,24482,4,19717,3,,,79953,0,594.6,2,16,918,3,+,+++,715,2,907,3,+,+++,763,2
2.6.24-git17_mdadm_2.6.4_partition_2,4G,,,24338,4,21351,4,,,170408,19,580.7,1,16,933,3,+,+++,889,3,895,3,+,+++,725,2
2.6.24-git17_mdadm_2.6.4_non_partitionable,4G,,,23798,4,20845,4,,,169994,19,627.7,1,16,1257,3,+,+++,1068,3,1180,4,+,+++,872,2

Nevertheless, in these 2 tests the read performance is back to what I
had with 2.6.22 and before. There might be a regression in 2.6.24 for
reading on the first partition of a partitionable array...
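
One more thing worth comparing when two mdadm versions give such different results is the
array geometry each one actually created (a sketch; device names assumed):

   mdadm --detail /dev/md_d0     # chunk size, layout, metadata version, bitmap
   mdadm --examine /dev/sda2     # the per-member superblock as each mdadm version wrote it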


Hubert


Re: [PATCH] Use new sb type

2008-02-11 Thread David Greaves
Jan Engelhardt wrote:
> Feel free to argue that the manpage is clear on this - but as we know, not
> everyone reads the manpages in depth...
>
> That is indeed suboptimal (but I would not care since I know the
> implications of an SB at the front)

Neil cares even less and probably doesn't even need mdadm - heck he probably
just echos the raw superblock into place via dd...

http://xkcd.com/378/

:D

David