Re: Strange performance results in RAID5

2001-03-29 Thread Kalvinder Singh

Neil Brown wrote:

 
> 1/ You didn't say how fast your SCSI bus is.  I guess if it is
> reasonably new it would be at least 80MB/sec, which should allow
> 500 * 64K/s, but it wouldn't have to be very old to fall short of that,
> and I don't like to assume things that aren't stated.

I should have specified that I am using Ultra160 SCSI...
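
As a rough back-of-envelope check of point 1 (a sketch only, using the
figures from the tests above rather than any new measurements):

    /* Bandwidth needed for 500 random 64K reads per second.
     * 500 * 64 KiB is roughly 31 MB/s, which an Ultra160 bus
     * (160 MB/s nominal) handles easily, while an older 10-20 MB/s
     * SCSI bus would not. */
    #include <stdio.h>

    int main(void)
    {
        const double reads_per_sec = 500.0;  /* target random reads/s   */
        const double read_kib      = 64.0;   /* size of each read (KiB) */
        double mb_per_sec = reads_per_sec * read_kib / 1024.0;
        printf("required bus bandwidth: ~%.1f MB/s\n", mb_per_sec);
        return 0;
    }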


> 2/ You could be slowed down by the stripe cache - it only
> allows 256 concurrent 4K accesses.  Try increasing NR_STRIPES at the
> top of drivers/md/raid5.c - say to 2048.  See if that makes a
> difference.

This doesn't make any difference with 10 processes. I will increase the
number of processes to see what happens.
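
For anyone trying point 2: the stripe cache size in the 2.4 raid5 driver
is a compile-time constant, so the change is roughly the following sketch
(assuming the stock 2.4 sources; the kernel or md module must be rebuilt
afterwards):

    /* drivers/md/raid5.c, near the top (Linux 2.4 series).
     * NR_STRIPES bounds how many 4K stripe-cache entries can be in
     * use at once; the stock value is 256. */
    #define NR_STRIPES      2048    /* default is 256 */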


> 3/ Also, try applying
>
>   http://www.cse.unsw.edu.au/~neilb/patches/linux/2.4.3-pre6/patch-F-raid5readbypass
>
> This patch speeds up large sequential reads, at a possible small cost
> to random read-modify-writes (I haven't measured any problems, but
> haven't had time to explore the performance thoroughly).
> What it does is read directly into the filesystem's buffer, instead of
> reading into the stripe cache and then memcpying into the filesystem
> buffer.

Haven't tried this yet; when I do, I will check the performance difference.


> 4/ I'm assuming you are doing direct IO to /dev/md0.
> Try making and mounting a filesystem on /dev/md0 first. This will
> switch the device blocksize to 4K (if you use a filesystem with a 4K
> block size).  The larger block size improves performance
> substantially.  I always do I/O tests to a filesystem, not to the
> block device, because it makes a difference and it is a filesystem
> that I want to use (though I realise that you may not).


You are a legend. This did it!!! I am now getting the expected 500
random reads per second with 64K data segments!!!

If you are ever up on the Gold Coast, give me a call, and I will buy you
a beer, or two, or three.

I am just really happy...

Cheers,
Kal.




Strange performance results in RAID5

2001-03-28 Thread Kalvinder Singh

Hi,

I have been doing some performance checks on my RAID 5 system.

The system is

5 Seagate Cheetahs X15
Linux 2.4.2

I am using IOtest 3.0 on /dev/md0
My chunk size is 1M...

When I do random reads of 64K blobs using one process, I get 100 
reads/sec, which is the same as doing random reads on one disk. So I was 
quite happy with that.

My next test was to do random reads using ten processes. I expected 500
reads/sec; however, I only got 250 reads/sec.

This doesn't seem right to me. Does anyone know why this is the case?

BTW, I also decided to do 512-byte reads, and I do get figures of 500
reads/sec (though with one process I did get 200 reads/sec).

Cheers,
Kal.

p.s. If you think there is something wrong with IOtest 3.0 then please 
say so...




Re: Strange performance results in RAID5

2001-03-28 Thread Neil Brown

On Thursday March 29, [EMAIL PROTECTED] wrote:
> Hi,
>
> I have been doing some performance checks on my RAID 5 system.

Good.

 
> The system is
>
> 5 Seagate Cheetahs X15
> Linux 2.4.2
>
> I am using IOtest 3.0 on /dev/md0
> My chunk size is 1M...
 
> When I do random reads of 64K blobs using one process, I get 100
> reads/sec, which is the same as doing random reads on one disk. So I was
> quite happy with that.
>
> My next test was to do random reads using ten processes. I expected 500
> reads/sec; however, I only got 250 reads/sec.
>
> This doesn't seem right to me. Does anyone know why this is the case?

A few possibilities:

   1/ You didn't say how fast your SCSI bus is.  I guess if it is
   reasonably new it would be at least 80MB/sec, which should allow
   500 * 64K/s, but it wouldn't have to be very old to fall short of that,
   and I don't like to assume things that aren't stated.

   2/ You could be slowed down by the stripe cache - it only
   allows 256 concurrent 4K accesses.  Try increasing NR_STRIPES at the
   top of drivers/md/raid5.c - say to 2048.  See if that makes a
   difference.

   3/ Also, try applying

 http://www.cse.unsw.edu.au/~neilb/patches/linux/2.4.3-pre6/patch-F-raid5readbypass

   This patch speeds up large sequential reads, at a possible small cost
   to random read-modify-writes (I haven't measured any problems, but
   haven't had time to explore the performance thoroughly).
   What it does is read directly into the filesystem's buffer, instead of
   reading into the stripe cache and then memcpying into the filesystem
   buffer.

   4/ I'm assuming you are doing direct IO to /dev/md0.
   Try making and mounting a filesystem on /dev/md0 first. This will
   switch the device blocksize to 4K (if you use a filesystem with a 4K
   block size).  The larger block size improves performance
   substantially.  I always do I/O tests to a filesystem, not to the
   block device, because it makes a difference and it is a filesystem
   that I want to use (though I realise that you may not).

NeilBrown