Before I waste a lot of time learning the SCSI layer and the QLogic
2100 device driver, I thought I would query this list for some
information regarding SCSI transactions.  I'm part of a project at
Lawrence Livermore National Laboratory where we are porting an HPC
solution from a commercial vendor to Linux.  Part of this is to
provide a very high I/O rate (>= 150MB/s for a single I/O node in the
cluster).  I am in the middle of testing the Ciprico 7000 JBOD
attached to a QLogic 2200 Fibre Channel controller.  I have applied
the SGI Raw I/O patch to the 2.2.14 kernel running on a Compaq ES40
(quad Alpha 500MHz, dual 64bit 33MHz PCI busses) to run some
benchmarks on Raw I/O.  One characteristic of these Ciprico devices is
that they want to see a large block size per SCSI request.  On the SGI
systems we have here at the lab, the optimal block size for these
devices is between 1MB and 2MB.  With the QLogic 2200 and the Ciprico 7000
attached to an SGI running IRIX, the throughput with a 1MB block size
for reads and writes is 95-98MB/s.  With Linux, however, a 1MB block
size gives me only around 60MB/s, which is about the same as the
maximum throughput on the SGI machine with a 256KB block size.  After
some more tests, I found that when I run the command:

        dd if=/dev/zero of=/dev/rsdd2 bs=256k count=1

the number of requests serviced, as reported by cat
/proc/scsi/qla2x00/2, increases by one.  When I use the command:

        dd if=/dev/zero of=/dev/rsdd2 bs=512k count=1

the number of requests serviced increases by two.  If I increase the
block size to 1024KB, it increases by four.  The I/O is obviously
being split into 256KB chunks.  My question is: where does the
request get split?
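
In case it is useful, here is a rough sketch of the script I use to
repeat the measurement across several block sizes.  The awk pattern
that pulls the "requests serviced" number out of the /proc file is a
guess; adjust it to match whatever /proc/scsi/qla2x00/2 actually
prints for your driver version:

        #!/bin/sh
        # Rough sketch: see how many SCSI requests a single dd of a given
        # block size turns into, by diffing the driver's "requests serviced"
        # counter before and after.  The awk pattern below is an assumption
        # about the /proc output format -- fix it up as needed.

        DEV=/dev/rsdd2
        PROC=/proc/scsi/qla2x00/2

        count() {
            # grab the number off the line that mentions "request"
            awk '/[Rr]equest/ { print $NF; exit }' $PROC
        }

        for bs in 256k 512k 1024k 2048k; do
            before=`count`
            dd if=/dev/zero of=$DEV bs=$bs count=1 2>/dev/null
            after=`count`
            echo "bs=$bs -> `expr $after - $before` request(s)"
        done

With that it is easy to see the 256KB ceiling directly instead of
watching the counter by hand.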


BAPper
