Re: raid10: unfair disk load?

2007-12-25 Thread Bill Davidsen

Richard Scobie wrote:

Jon Nelson wrote:


My own tests on identical hardware (same mobo, disks, partitions,
everything) and same software, with the only difference being how
mdadm is invoked (the only changes here being level and possibly
layout) show that raid0 is about 15% faster on reads than the very
fast raid10,f2 layout. raid10,f2 is approximately 50% of the write
speed of raid0.


This more or less matches my testing.


Have you tested a stacked RAID 10 made up of 2-drive RAID1 arrays, 
striped together into a RAID0?


That is not raid10, that's raid1+0. See man md.


I have found this configuration to offer very good performance, at the 
cost of slightly more complexity.


It does; raid0 can be striped over many configurations, raid[156] being 
the most common.


--
Bill Davidsen [EMAIL PROTECTED]
 Woe unto the statesman who makes war without a reason that will still
 be valid when the war is over... Otto von Bismarck 





Re: raid10: unfair disk load?

2007-12-23 Thread Michael Tokarev
maobo wrote:
 Hi, all
 Yes, RAID10 read balancing picks the drive with the shortest seek
 distance first, and also takes sequential access into account. But in
 my tests its performance is really poor compared to raid0.

Single-stream write performance of raid0, raid1 and raid10 should be
at a similar level (with raid5 and raid6 things are different) -- in all
3 cases, it should be near the write speed of a single drive.  The
only potentially problematic case is when you've got some unlucky
hardware which does not permit writing to two drives in parallel - in
which case raid1 and raid10 write speed will be lower than that of
raid0 or a single drive.  But even ol' good IDE drives/controllers,
even if two disks are on the same channel, permit parallel writes.
Modern SATA and SCSI/SAS should be no problem - hopefully, modulo
(theoretically) some very cheap lame controllers.
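
A rough way to check this on a given box is to compare single-drive and
whole-array sequential write rates directly.  A minimal sketch (device
names are placeholders, and these commands overwrite the targets, so
only run them on scratch devices):

  # single member drive (DESTROYS data on the target):
  dd if=/dev/zero of=/dev/sdX bs=1M count=2048 oflag=direct
  # the md array built from such drives (also destructive):
  dd if=/dev/zero of=/dev/md0 bs=1M count=2048 oflag=direct

If the hardware really allows parallel writes, the raid1/raid10 figure
should be close to the single-drive figure, and raid0 should scale
beyond it.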

 I think it is the RAID10 processing flow that causes this. RAID0 is so
 simple and performs very well!
 From this point of view striping is better than mirroring! RAID10 is
 stripe+mirror, but for writes it performs much worse than RAID0.
 Doesn't it?

No, it's not - not when the hardware (and drivers) are sane, anyway.

Also, speed is a very subjective thing, so to say - it very much
depends on the workload.

/mjt


Re: raid10: unfair disk load?

2007-12-23 Thread Jon Nelson
On 12/23/07, maobo [EMAIL PROTECTED] wrote:
 Hi, all

 Yes, I agree with some of you. But in my tests, using both a real-life
 trace and Iometer, I found that for purely read requests RAID0 is better
 than RAID10 (with the same number of data disks: 3 disks in RAID0, 6
 disks in RAID10). I don't know why this happens.

 I read the code of RAID10 and RAID0 carefully and experimented with
 printk to track the processing flow. The only conclusion I can report is
 that RAID10 handles read requests in a more complex way, while RAID0 is
 so simple that it does reads more efficiently.

 What do you think about this for purely read requests?
 Thank you very much!

My own tests on identical hardware (same mobo, disks, partitions,
everything) and same software, with the only difference being how
mdadm is invoked (the only changes here being level and possibly
layout) show that raid0 is about 15% faster on reads than the very
fast raid10,f2 layout. raid10,f2 is approximately 50% of the write
speed of raid0.

Does this make sense?
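
For reference, the two invocations being compared look roughly like
this (a sketch only - device names and drive counts are placeholders):

  # plain raid0 across the test devices:
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
  # raid10 with the far-2 layout on the same devices:
  mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 /dev/sda1 /dev/sdb1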

-- 
Jon


Re: raid10: unfair disk load?

2007-12-23 Thread Richard Scobie

Jon Nelson wrote:


My own tests on identical hardware (same mobo, disks, partitions,
everything) and same software, with the only difference being how
mdadm is invoked (the only changes here being level and possibly
layout) show that raid0 is about 15% faster on reads than the very
fast raid10,f2 layout. raid10,f2 is approximately 50% of the write
speed of raid0.


Have you tested a stacked RAID 10 made up of 2-drive RAID1 arrays, 
striped together into a RAID0?


I have found this configuration to offer very good performance, at the 
cost of slightly more complexity.
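
Roughly, the stacking looks like this (a sketch only; device names are
placeholders):

  # two 2-drive RAID1 mirrors:
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
  # ...striped together with RAID0:
  mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/md1 /dev/md2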


Regards,

Richard


Re: raid10: unfair disk load?

2007-12-22 Thread Janek Kozicki
Michael Tokarev said: (by the date of Fri, 21 Dec 2007 23:56:09 +0300)

 Janek Kozicki wrote:
  what's your kernel version? I recall that recently there has been
  some work regarding load balancing.
 
 It was in my original email:
 The kernel is 2.6.23

 Strange, I missed the new raid10 development you
 mentioned (I follow linux-raid quite closely).
 What change(s) are you referring to?

Oh, sorry, it was a patch for raid1, not raid10:

  http://www.spinics.net/lists/raid/msg17708.html

I'm wondering if it could be adapted for raid10 ...

Konstantin Sharlaimov said: (by the date of Sat, 03 Nov 2007
20:08:42 +1000)

 This patch adds RAID1 read balancing to device mapper. A read operation
 that is close (in terms of sectors) to a previous read or write goes to 
 the same mirror.
snip

-- 
Janek Kozicki |


Re: raid10: unfair disk load?

2007-12-22 Thread Jon Nelson
On 12/22/07, Janek Kozicki [EMAIL PROTECTED] wrote:
 Michael Tokarev said: (by the date of Fri, 21 Dec 2007 23:56:09 +0300)

  Janek Kozicki wrote:
   what's your kernel version? I recall that recently there has been
   some work regarding load balancing.
 
  It was in my original email:
  The kernel is 2.6.23
 
  Strange, I missed the new raid10 development you
  mentioned (I follow linux-raid quite closely).
  What change(s) are you referring to?

 Oh, sorry, it was a patch for raid1, not raid10:

   http://www.spinics.net/lists/raid/msg17708.html

 I'm wondering if it could be adapted for raid10 ...

 Konstantin Sharlaimov said: (by the date of Sat, 03 Nov 2007
 20:08:42 +1000)

  This patch adds RAID1 read balancing to device mapper. A read operation
  that is close (in terms of sectors) to a previous read or write goes to
  the same mirror.

Looking at the source for raid10, it already appears to do some read
balancing.
For raid10,f2 on a 3-drive array I've found really impressive read
performance numbers - as good as raid0. Write speeds are a bit lower,
but still rather better than raid5 on the same devices.
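
For anyone wanting to reproduce this, the arrays compared would be
created roughly like so (a sketch; device names are placeholders):

  # 3-drive raid10,f2 (md's raid10 does not require an even drive count):
  mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
  # 3-drive raid5 on the same devices, for comparison:
  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1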

-- 
Jon


Re: raid10: unfair disk load?

2007-12-21 Thread Michael Tokarev
Michael Tokarev wrote:
 I just noticed that with Linux software RAID10, disk
 usage isn't equal at all, that is, most reads are
 done from the first part of mirror(s) only.
 
 Attached (disk-hour.png) is a little graph demonstrating
 this (please don't blame me for the poor choice of colors and
 the like - this stuff is in the works right now; it's the first
 rrd graph I've produced :).  There's a 14-drive RAID10 array
 and 2 more drives.  In the graph it's clearly visible that
 there are 3 kinds of load for drives, because graphs for
 individual drives are stacked on each other forming 3 sets.
 One set (the 2 remaining drives) isn't interesting, but the
 2 main ones (with many individual lines) are interesting.

Ok, looks like vger.kernel.org dislikes png attachments.
I won't represent the graphs as ascii-art, and it's really not
necessary -- see below.

 The 7 drives with higher utilization receive almost all
 reads - the second half of the array only gets reads
 sometimes.  And all 14 drives - obviously - receive
 all writes.
 
 So the picture (modulo that sometimes above, which is
 too small to take into account) is this: writes are
 going to all drives, while reads are done from the
 first half of each pair only.
 
 Also attached are two graphs for individual drives,
 one from the first half of the array (diskrq-sdb-hour.png),
 which receives almost all reads (other disks look
 pretty much the same), and one from the second half
 (diskrq-sdl-hour.png), which receives very few
 reads.  The graphs show the number of disk transactions
 per second, separately for reads and writes.

Here's a typical line from iostat -x:

Device: rrqm/s wrqm/s   r/s  w/s  rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sdb       0,32   0,03 22,16 5,84 2054,79 163,74    79,21     0,20  7,29  4,33 12,12
sdk       0,38   0,03  6,28 5,84  716,61 163,74    72,66     0,15 12,29  5,55  6,72

where sdb and sdk are the two halves of the same raid1 part
of a raid10 array - i.e., the contents of the two are
identical.  As shown, write requests are the same for
the two, but read requests mostly go to sdb (the
first half), and very few to sdk (the second half).
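
(The per-device numbers above are iostat's extended statistics --
something along these lines with the sysstat iostat, though exact
option support may vary by version:

  # extended per-device stats, refreshed every 5 seconds:
  iostat -x 5 sdb sdk

r/s and w/s are read/write requests per second, and %util shows how
busy each disk is, which is where the read imbalance between the two
mirror halves shows up.)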

 Should raid10 balance reads too, maybe in a way similar
 to what raid1 does?
 
 The kernel is 2.6.23 but very similar behavior is
 shown by earlier kernels as well.  Raid10 stripe
 size is 256Mb, but again it doesn't really matter;
 other sizes behave the same here.

The amount of data is quite large and it is laid out
and accessed pretty much randomly (it's a database
server), so in theory, even with optimizations like
the one raid1 does (routing a request to the drive with
the nearest head position), the read request distribution
should be basically the same.

Thanks!

/mjt



Re: raid10: unfair disk load?

2007-12-21 Thread Michael Tokarev
Janek Kozicki wrote:
 Michael Tokarev said: (by the date of Fri, 21 Dec 2007 14:53:38 +0300)
 
 I just noticed that with Linux software RAID10, disk
 usage isn't equal at all, that is, most reads are
 done from the first part of mirror(s) only.
 
 what's your kernel version? I recall that recently there has been
 some work regarding load balancing.

It was in my original email:

 The kernel is 2.6.23 but very similar behavior is
 shown by earlier kernels as well.  Raid10 stripe
 size is 256Mb, but again it doesn't really matter;
 other sizes behave the same here.

Strange, I missed the new raid10 development you
mentioned (I follow linux-raid quite closely).
Lemme see...  no, nothing relevant in 2.6.24-rc5
(compared with 2.6.23), at least git doesn't show
anything interesting.  What change(s) are you
referring to?

Thanks.

/mjt