On Wed, Jan 12, 2000 at 02:43:29AM -0500, James Manning wrote:
> [ Tuesday, January 11, 2000 ] Peter Palfrader aka Weasel wrote:
> > I'm running a plain 2.2.14 but the results are no different than with
> > a 2.2.10 or 2.2.12.
[the RAID0 results were really poor: 20 MB/s over 2 disks that do 20 MB/s each]
> > Do you have any pointers what might be wrong/what I could try to
> > improve speed?
>
> Hmmm... try using the "new" RAID (0.90) by patching your kernel with
> [the raid-2.2.14-B1 patch]

So, I finally found time to try the new RAID stuff and speed
increased :)

RAID0; chunk size 32 KB;
'fresh' ext2 with 
  Block size=4096
  Fragment size=4096
  23 block groups
  32768 blocks per group, 32768 fragments per group
  15904 inodes per group
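
For completeness, the array and filesystem were made roughly like this
(a sketch from memory, not a logged session; mkraid reads the raidtab
quoted near the end of this mail):

marvin:~# mkraid /dev/md1       # build the array from /etc/raidtab
marvin:~# mke2fs /dev/md1       # plain mke2fs, defaults as above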

hdparm: *duck*
/dev/md1:
 Timing buffered disk reads:  64 MB in  1.67 seconds = 38.32 MB/sec
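
(The hdparm number is just "hdparm -t /dev/md1", a quick 64 MB
sequential read; hence the *duck*, it is not a serious benchmark.)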

bonnie:
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
          256  2910 47.5 15973 14.9  8954 18.6  3060 47.4 39021 22.2 301.8  3.2
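
(bonnie was run with a 256 MB file; the invocation would have been
something like "bonnie -d /mnt/md1 -s 256", with the mount point being
a placeholder here.)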

tiobench.pl (Read/Write in MB/sec, Seeks in seeks/sec):
 Dir   Size   BlkSz  Thr#  Read (CPU%)   Write (CPU%)   Seeks (CPU%)
----- ------ ------- ---- ------------- -------------- --------------
  .    254    4096    1   25.0967 13.4% 11.6478 11.3%  255.465 1.40%
  .    254    4096    2   16.9046 10.1% 12.2406 12.8%  337.067 1.93%
  .    254    4096    4   19.4572 14.8% 11.7867 11.9%  330.633 0.90%
  .    254    4096    8   24.8936 18.6% 11.6621 11.5%  384.697 1.63%
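
(The tiobench.pl call was along the lines of
  tiobench.pl --dir /mnt/md1 --size 254 --block 4096 \
              --threads 1 --threads 2 --threads 4 --threads 8
flag names as in the tiobench docs, not double-checked against the
exact version used here; again the mount point is a placeholder.)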


On Sat, Feb 12, 2000 at 11:20:08AM -0500, James Manning wrote:
> Looks like you'll be a good test case of read balancing in raid1 :)
> Can you try Mika's read-balancing patch?  It'd be good to see the results.

I also tried RAID1 with and without the read-balancing patch:

The filesystem was always made with a simple "mke2fs <dev>":
ext2:
  Block size=4096 (log=2)
  Fragment size=4096 (log=2)
  182784 inodes, 365440 blocks
  18272 blocks (5.00%) reserved for the super user
  First data block=0
  12 block groups
  32768 blocks per group, 32768 fragments per group
  15232 inodes per group


WITHOUT READ-BALANCING
-=-=-=-=-=-=-=-=-=-=-=

/bench-1-04-non-rb
         ^^ = chunk size (in KB)
 Dir   Size   BlkSz  Thr#  Read (CPU%)   Write (CPU%)   Seeks (CPU%)
----- ------ ------- ---- ------------- -------------- --------------
  .    254    4096    1   18.7276 10.8% 5.80362 5.39%  270.426 1.08%
  .    254    4096    2   13.0896 9.37% 6.54882 6.16%  293.264 1.31%
  .    254    4096    4   14.7805 10.1% 6.40525 7.06%  304.113 1.14%
  .    254    4096    8   16.3618 12.9% 6.34798 7.26%  328.900 1.72%


/bench-1-08-non-rb
 Dir   Size   BlkSz  Thr#  Read (CPU%)   Write (CPU%)   Seeks (CPU%)
----- ------ ------- ---- ------------- -------------- --------------
  .    254    4096    1   18.8658 10.3% 6.55379 6.14%  266.211 1.53%
  .    254    4096    2   12.9266 8.09% 6.41696 5.96%  292.124 1.46%
  .    254    4096    4   14.8016 10.3% 6.46429 7.02%  305.542 1.22%
  .    254    4096    8   15.7277 11.7% 6.35880 7.53%  320.752 1.92%


WITH READ-BALANCING
-=-=-=-=-=-=-=-=-=-

/bench-1-04-rb
 Dir   Size   BlkSz  Thr#  Read (CPU%)   Write (CPU%)   Seeks (CPU%)
----- ------ ------- ---- ------------- -------------- --------------
  .    254    4096    1   19.0380 9.21% 6.51114 5.94%  309.114 1.00%
  .    254    4096    2   37.0789 21.8% 5.74841 5.31%  598.115 2.24%
  .    254    4096    4   35.7327 29.9% 6.42589 6.93%  568.947 2.98%
  .    254    4096    8   31.7289 23.7% 6.31268 6.99%  556.696 2.64%


/bench-1-08-rb
 Dir   Size   BlkSz  Thr#  Read (CPU%)   Write (CPU%)   Seeks (CPU%)
----- ------ ------- ---- ------------- -------------- --------------
  .    254    4096    1   18.9102 11.6% 5.99338 5.85%  311.401 1.16%
  .    254    4096    2   37.1013 25.1% 6.57402 6.54%  605.037 3.32%
  .    254    4096    4   27.3852 18.2% 6.45365 6.81%  546.669 2.46%
  .    254    4096    8   32.9959 23.9% 6.37386 7.45%  546.227 2.59%

/bench-1-16-rb
 Dir   Size   BlkSz  Thr#  Read (CPU%)   Write (CPU%)   Seeks (CPU%)
----- ------ ------- ---- ------------- -------------- --------------
  .    254    4096    1   19.1062 11.9% 6.42064 5.78%  307.688 1.23%
  .    254    4096    2   36.9642 20.3% 6.55686 6.24%  564.285 3.24%
  .    254    4096    4   36.1421 28.9% 6.53018 7.07%  574.656 2.87%
  .    254    4096    8   32.0291 22.9% 6.44000 7.55%  543.756 2.31%


/bench-1-32-rb
 Dir   Size   BlkSz  Thr#  Read (CPU%)   Write (CPU%)   Seeks (CPU%)
----- ------ ------- ---- ------------- -------------- --------------
  .    254    4096    1   19.0513 11.9% 6.59143 6.22%  305.696 0.84%
  .    254    4096    2   37.2681 24.3% 6.27177 6.51%  581.374 2.47%
  .    254    4096    4   36.3541 25.6% 6.49950 6.93%  559.377 2.65%
  .    254    4096    8   32.2174 22.4% 6.43318 7.36%  569.229 2.41%


/bench-1-64-rb
 Dir   Size   BlkSz  Thr#  Read (CPU%)   Write (CPU%)   Seeks (CPU%)
----- ------ ------- ---- ------------- -------------- --------------
  .    254    4096    1   19.1788 11.0% 4.57621 4.28%  308.500 0.92%
  .    254    4096    2   37.3805 21.9% 6.21838 6.16%  580.854 2.75%
  .    254    4096    4   27.8880 19.9% 6.50133 7.06%  516.206 2.96%
  .    254    4096    8   31.7893 25.6% 6.42607 7.12%  539.445 2.56%



As you can see, reads are nearly as fast as with RAID0, and the seek
rate is really nice, but only if more than one thread runs at a time:
with two or more threads the seek rate roughly doubles (from ~300/sec
to ~550-600/sec), because the reads can then be spread over both disks.

The ext2 block size was always 4k. Perhaps I should have tried
different ext2 block sizes too. Can anyone comment on this?
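
(If someone wants to repeat this with other block sizes, mke2fs takes
the size explicitly, e.g. "mke2fs -b 1024 /dev/md1" or
"mke2fs -b 2048 /dev/md1" instead of the 4k the defaults picked here.)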

What kind of puzzles me is the poor read performance with chunk sizes
of 8 and 64 at 4 threads. I tried it again with a chunk size of 8 and
it is reproducible:

/bench-1-08-rb-2
 Dir   Size   BlkSz  Thr#  Read (CPU%)   Write (CPU%)   Seeks (CPU%)
----- ------ ------- ---- ------------- -------------- --------------
  .    254    4096    1   19.0800 11.1% 6.62579 6.31%  305.375 0.68%
  .    254    4096    2   37.1297 25.8% 6.13970 6.04%  597.185 2.98%
  .    254    4096    4   27.8519 19.5% 6.46460 6.82%  548.662 2.33%
  .    254    4096    8   32.1535 25.4% 6.35791 7.20%  564.018 2.67%


HW-info:
marvin:~# cat /proc/scsi/scsi 
Attached devices: 
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: IBM      Model: DRVS09V          Rev: 0140
  Type:   Direct-Access                    ANSI SCSI revision: 03
Host: scsi0 Channel: 00 Id: 01 Lun: 00
  Vendor: IBM      Model: DRVS09V          Rev: 0140
  Type:   Direct-Access                    ANSI SCSI revision: 03
Host: scsi1 Channel: 00 Id: 00 Lun: 00
  Vendor: PLEXTOR  Model: CD-ROM PX-32TS   Rev: 1.03
  Type:   CD-ROM                           ANSI SCSI revision: 02
Host: scsi1 Channel: 00 Id: 01 Lun: 00
  Vendor: TEAC     Model: CD-R55S          Rev: 1.0E
  Type:   CD-ROM                           ANSI SCSI revision: 02
Host: scsi1 Channel: 00 Id: 05 Lun: 00
  Vendor: IOMEGA   Model: ZIP 100 PLUS     Rev: J.66
  Type:   Direct-Access                    ANSI SCSI revision: 02

marvin:~# cat /proc/scsi/aic7xxx/0
Adaptec AIC7xxx driver version: 5.1.21/3.2.4
Compile Options:
  TCQ Enabled By Default : Disabled
  AIC7XXX_PROC_STATS     : Disabled
  AIC7XXX_RESET_DELAY    : 5

Adapter Configuration:
           SCSI Adapter: Adaptec AHA-294X Ultra2 SCSI host adapter
                           Ultra-2 LVD/SE Wide Controller
    PCI MMAPed I/O Base: 0xe1000000
    PCI Bus 0x00 Device 0x50
 Adapter SEEPROM Config: SEEPROM found and used.
      Adaptec SCSI BIOS: Enabled
                    IRQ: 9
                   SCBs: Active 0, Max Active 2,
                         Allocated 15, HW 32, Page 255
             Interrupts: 140628
      BIOS Control Word: 0x18a6
   Adapter Control Word: 0x1c5d
   Extended Translation: Enabled
Disconnect Enable Flags: 0xffff
     Ultra Enable Flags: 0x0000
 Tag Queue Enable Flags: 0x0000
Ordered Queue Tag Flags: 0x0000
Default Tag Queue Depth: 8
    Tagged Queue By Device array for aic7xxx host instance 0:
      {255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255}
    Actual queue depth per device for aic7xxx host instance 0:
      {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}

Statistics:

(scsi0:0:0:0)
  Device using Wide/Sync transfers at 80.0 MByte/sec, offset 15
  Transinfo settings: current(10/15/1/0), goal(10/15/1/0), user(10/127/1/0)
  Total transfers 68137 (24413 reads and 43724 writes)


(scsi0:0:1:0)
  Device using Wide/Sync transfers at 80.0 MByte/sec, offset 15
  Transinfo settings: current(10/15/1/0), goal(10/15/1/0), user(10/127/1/0)
  Total transfers 72314 (28359 reads and 43955 writes)


marvin:~# cat /proc/cpuinfo 
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 5
model name      : Pentium II (Deschutes)
stepping        : 2
cpu MHz         : 400.917973
cache size      : 512 KB
fdiv_bug        : no
hlt_bug         : no
sep_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 2
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov pat pse36 mmx fxsr
bogomips        : 399.77


raiddev /dev/md1
        raid-level              1         <-- was 0 with raid0 test :)
        nr-raid-disks           2
        persistent-superblock   1
        chunk-size              8         <-- was 4,8,16,32 and 64

        device                  /dev/sda6
        raid-disk               0
        device                  /dev/sdb6
        raid-disk               1
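
Between runs the array was rebuilt for each chunk size, roughly (again
from memory; note that mkraid --really-force overwrites the old
superblocks, i.e. it wipes the array):

marvin:~# raidstop /dev/md1
  (edit chunk-size in /etc/raidtab)
marvin:~# mkraid --really-force /dev/md1
marvin:~# mke2fs /dev/md1

For the RAID1 runs the initial resync has to finish (watch
/proc/mdstat) before benchmarking makes sense.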

marvin:~# fdisk -l /dev/sda

Disk /dev/sda: 255 heads, 63 sectors, 1116 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sda1             1        31    248976   82  Linux swap
/dev/sda2            32      1116   8715262+   5  Extended
/dev/sda5            32       213   1461883+  fd  Linux raid autodetect
/dev/sda6           214       395   1461883+  fd  Linux raid autodetect
/dev/sda7           396       638   1951866   fd  Linux raid autodetect
/dev/sda8           639       687    393561   fd  Linux raid autodetect
/dev/sda9           688      1116   3445911   fd  Linux raid autodetect

sdb is the same


Kernel: 2.2.14, patched with raid-2.2.14-B1, plus
raid1-readbalance-patch3.gz for the rb tests.
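
The patches were applied the usual way, something like (paths and -p
level assumed):

marvin:/usr/src/linux# patch -p1 < ../raid-2.2.14-B1
marvin:/usr/src/linux# zcat ../raid1-readbalance-patch3.gz | patch -p1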


JFYI.

-- 
Weasel                            http://www.cosy.sbg.ac.at/~ppalfrad/
PGP/GPG encrypted messages preferred. See my site or finger -l ppalfrad
----------------------------------------------------------------------
          Yes means No and No means Yes. Delete all files [Y]?
