MOLNAR Ingo <[EMAIL PROTECTED]> wrote:

>  On Sun, 24 Jan 1999, Evgeny Stambulchik wrote:
>  
>  > Software: kernel 2.0.36, raidtools-0.42 (I've also tried
>  > raidtools-19981214-0.90 with raid0145-19990108 patch, with practically
>  > no performance improvement). 
>  
>  > md0 : active raid0 sdb3 sdc3 sdd3 3084288 blocks 256k chunks
>                                                     ^^^^
>  
>  could you change this to eg. 4k and retest? ... (important, for 4k to work
>  reasonably you'll need the newest RAID patch) 
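
To switch, I set chunk-size to 4 in /etc/raidtab (quoted in full below) and
recreated the array with something like:

# mkraid --really-force /dev/md0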

OK, this is with the latest snapshots:
# cat /proc/mdstat 
Personalities : [linear] [raid0] [raid1] [raid5] 
read_ahead 1024 sectors
md0 : active raid0 sdd3[2] sdc3[1] sdb3[0] 3084480 blocks 4k chunks
unused devices: <none>

# bonnie -s 800 
-> on a fs created with -b 4096 -R stride=1 (same results with stride=64):
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
          800 13423 98.3 35409 43.8 12304 33.6  8152 93.0 27801 32.0 161.8  2.0

# iozone 800:
        37617076 bytes/second for writing the file
        24251540 bytes/second for reading the file

So the read performance is better than with the 256k chunk size (thanks!),
but still worse than it should be, IMO.
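
For reference, the 4k filesystem above was created along these lines (with a
4k chunk and 4k blocks the stride, i.e. chunk-size / block-size, is 1):

# mke2fs -b 4096 -R stride=1 /dev/md0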

BTW, the fs block size seems to affect mainly the write performance:
-> mke2fs with no options, i.e. the default 1 kB block size:
# bonnie -s 800

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
          800  8688 86.5 15436 59.8  6716 44.9  7536 95.4 26849 36.9 163.1  2.0

# iozone 800:
        15006454 bytes/second for writing the file
        20841262 bytes/second for reading the file
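
(If anyone wants to double-check the block size actually in use, something
like this should do; I did not capture the output here:

# dumpe2fs /dev/md0 | grep "Block size"
)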

Also (this probably belongs on another mailing list), here are the SCSI driver
statistics for one of the RAID disks:

(scsi1:0:6:0)
  Device using Wide/Sync transfers at
  40.0 MByte/sec, offset 8
    Device Negotiation Settings
        Period Offset Bus Width
User       012    008        1
Goal       012    008        1
Current    012    008        1
    Total transfers 178257 (131290 read;46967 written)
      blks(512) rd=4440491; blks(512) wr=4379888
        < 512 512-1K   1-2K   2-4K   4-8K  8-16K 16-32K 32-64K 64-128K >128K
 Reads:     0      1      1      0  35299  13939  81746     49    255      0
Writes:     0      0      0      0  11233    385   1868   3181  30300      0
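
(Back-of-the-envelope, from the counters above: 4440491*512/131290 is about
17 kB per read on average, versus 4379888*512/46967, about 47 kB per write.)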

It seems strange to me that reads are mostly done in small chunks while writes
are done in large ones; and why is ">128K" never used? Perhaps this explains
why reads are slower than writes?
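
I also wondered whether the md read-ahead setting plays a role here; for
reference it is currently:

# grep read_ahead /proc/mdstat
read_ahead 1024 sectors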

...
BTW, a very annoying "feature":
# cat /proc/mdstat|grep md0
md0 : active raid0 sdd3[2] sdc3[1] sdb3[0] 3084480 blocks 4k chunks
# raidstop /dev/md0
# raidstart /dev/md0
# cat /proc/mdstat|grep md0
md0 : active raid1 sdd3[2] sdc3[1] sdb3[0] 1028096 blocks [3/3] [UUU]
             ^^^^^
so I need to run mkraid --really-force after every reboot.

# raidstart --version         
raidstart v0.3d compiled for md raidtools-0.90

# cat /etc/raidtab
raiddev /dev/md0
    raid-level                0
    nr-raid-disks             3
    nr-spare-disks            0
    chunk-size                4

    device                    /dev/sdb3
    raid-disk                 0
    device                    /dev/sdc3
    raid-disk                 1
    device                    /dev/sdd3
    raid-disk                 2

Did I miss something in the docs?
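
(One untested guess on my part: maybe raidstart wants an explicit

    persistent-superblock     1

line in the raiddev section above; I have not verified this.)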

Regards,

Evgeny


--
   ____________________________________________________________
  / Evgeny Stambulchik  <[EMAIL PROTECTED]>  \
 /  Plasma Laboratory, Weizmann Institute of Science, Israel \  \
 |  Phone : (972)8-934-3610  == | == FAX   : (972)8-934-3491 |  |
 |  URL   :    http://plasma-gate.weizmann.ac.il/~fnevgeny/  |  |
 |  Finger for PGP key >=====================================+  | 
 |______________________________________________________________| 
