On Thursday, 27 November 2003 at 13:44:35 +0100, Sander Smeenk wrote:
> Hi,
> I recently installed a dual P4 2.8 GHz with FBSD 4.9 and made a RAID10
> array of the 4 SCSI disks available. The idea was that this would be
> faster to read from than normal IDE disks. As a test I took the
> company's web/ directory, which is 1.6 GB in size and has 22082 files.
> I extracted this web/ directory on the IDE disk and on the RAID10
> array, and noticed that extracting it took much longer on RAID10 than it
> did on IDE. I assumed that it was slower on RAID10 because it needed to
> stripe the data to all these disks, mirror it and what not.
> Then I did a read test, like this:
>> date;find . -type f|while read FILE;do cat "$FILE" > /dev/null;done;date
> I know it's not the most sophisticated test, but at least it shows that
> on the IDE disk, this 'test' took 24 seconds to complete. On the RAID10
> array it took a whopping 73 seconds to complete.

You shouldn't expect any performance improvement from this kind of
test: it reads one file at a time, so the accesses are strictly
sequential and there's no opportunity to read the stripes in parallel.
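One way to give the array a chance would be to run several readers at
once.  Here's a rough sketch of such a test (the function name, the
/var/web path, and the job count are my own examples, and the find
loop will mishandle file names containing spaces):

```shell
# parallel_read DIR NJOBS: read every file under DIR with up to NJOBS
# concurrent readers, so a striped volume can service several requests
# at once.  Illustrative only; not robust against odd file names.
parallel_read() {
    dir=$1
    njobs=$2
    i=0
    for f in $(find "$dir" -type f); do
        cat "$f" > /dev/null &
        i=$((i + 1))
        if [ $((i % njobs)) -eq 0 ]; then
            wait                        # crude cap on concurrency
        fi
    done
    wait
}

start=$(date +%s)
parallel_read /var/web 4                # path and job count are examples
echo "elapsed: $(( $(date +%s) - start ))s"
```

If the array doesn't beat the single IDE disk even under concurrent
load, something else is wrong.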

> I would understand if the RAID10 array was as fast as IDE, or
> faster, but I'm a bit amazed by these results.

Agreed, I find them surprising too.

> The relevant volume for this in the vinum config looks like this:
>> volume varweb setupstate
>>   plex org striped 3841k

You should choose a stripe size which is a multiple of the block size
(presumably 16 kB).  Not doing so will have a minor performance
impact, but nothing like what you describe here.
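Keeping the striped organization, the plex line could look like this
(512 kB here is just one convenient multiple of 16 kB, not a tuned
recommendation for your workload):

```
volume varweb setupstate
  plex org striped 512k
```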

> What could cause this major decrease in speed?  Or is this normal
> behaviour, and is the RAID array faster with concurrent reads /
> writes than the IDE disk, but not with single reads / writes?

That's true, as I said above, but it doesn't explain the problems.

> As a possible reason for this slowdown the only thing I can find is
> this, from dmesg:
> [ .. later on in the boot process .. ]
>> ahd1: PCI error Interrupt
>>>>>>>>>>>>>>>>>>>> Dump Card State Begins <<<<<<<<<<<<<<<<<
>> ahd1: Dumping Card State at program address 0x94 Mode 0x22
>> Card was paused

This is possibly related.  Does it happen every time?

> But the thing is, there's NOTHING connected to ahd1, and the step that
> follows this card dump is detecting disks, which succeeds like a charm.
> All 4 SCSI disks are detected, and show a healthy connection state:
>> da0: 320.000MB/s transfers (160.000MHz, offset 127, 16bit), Tagged Queueing Enabled
> Further use of this new server did not reveal any other problems with
> the SCSI controller. Everything seems to work as expected. Except for
> the slowdown in reads / writes.
> Can anyone shed some light on this matter? Things I overlooked?
> Things I should check? I tried googling for a solution to the PCI
> error interrupt problem which puts the SCSI card in pause, but I
> couldn't find anything useful, just a few posts from people who also
> experience this card dump thing at boot.

The first thing to do is to find whether it's Vinum or the SCSI disks.
Can you test with a single SCSI disk (of the same kind, preferably one
of the array) instead of a single IDE disk?
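A quick way to take both Vinum and the file system out of the picture
is to time a raw sequential read from one member disk and from the IDE
disk.  A sketch (the device names are typical examples, not taken from
your dmesg):

```shell
# raw_read DEV: time a sequential read of up to 64 MB from DEV,
# bypassing the file system.  bs is spelled out in bytes so it works
# with both BSD and GNU dd.
raw_read() {
    dd if="$1" of=/dev/null bs=1048576 count=64 2>&1 | tail -1
}

raw_read /dev/da0       # one member of the RAID10 array (example name)
raw_read /dev/ad0       # the IDE disk (example name)
```

If the raw read from da0 is already much slower than from ad0, the
problem is below Vinum, and the PCI error interrupt becomes the prime
suspect.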

When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.
