Re: Vinum & U320 SCSI, slower than UDMA100 IDE ?

2003-12-03 Thread Sander Smeenk
Quoting Greg 'groggy' Lehey ([EMAIL PROTECTED]):

> > But still, I doubt if bonnie++ is a good test, and I have a hard time
> > interpreting the results. I can publish the results somewhere, in a
> > while.
> Yes, bonnie++ tests the entire system, not just the disk.
> Try benchmarks/rawio.

That might explain why the bonnie++ results were quite similar on IDE
and SCSI. I just ran rawio on the plain IDE disk and on the RAID10
vinum volume, and now the results *ARE* 'astonishing' :)

         Random read      Sequential read    Random write      Sequential write
ID       K/sec    /sec    K/sec     /sec     K/sec     /sec    K/sec     /sec
IDE       1860.4    72     6799.4    415      1314.9    158     5914.3    361
VINUM    12008.3   724    14972.7    914     12072.5    726    13710.0    837

Thanks for all your help Greg!

Sander.
-- 
| Not one shred of evidence supports the notion that life is serious.
| 1024D/08CEC94D - 34B3 3314 B146 E13C 70C8  9BDB D463 7E41 08CE C94D




Re: Vinum & U320 SCSI, slower than UDMA100 IDE ?

2003-12-02 Thread Greg 'groggy' Lehey
On Tuesday,  2 December 2003 at 10:26:08 +0100, Sander Smeenk wrote:
> Quoting Greg 'groggy' Lehey ([EMAIL PROTECTED]):
>> >>  plex org striped 3841k
>>
>> You should choose a stripe size which is a multiple of the block size
>> (presumably 16 kB).  Not doing so will have a minor performance
>> impact, but nothing like what you describe here.
>
> I might have misunderstood, but on the vinumvm.org website there is
> quite a comprehensive discussion of stripe sizes, and they conclude
> that larger stripe sizes help increase throughput. They also recommend
> a size that is not a power of 2, because a power-of-2 size might cause
> critical blocks to end up on the same disk, which decreases
> performance...

That's all correct, but unfortunately the discussion isn't
comprehensive enough.  Since transfers tend to be on block boundaries
(but there's no requirement), it's a good idea to have all blocks on a
single platter.  Otherwise transfers can get broken into two, with
minor performance implications.  I'm currently tending towards a
stripe size of 496 kB.
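
For concreteness (my arithmetic, not from the list): 496 kB is
31 x 16 kB, a multiple of the block size without being a power of 2,
while 3841 kB is not a multiple of 16 kB at all (3841 = 240 x 16 + 1),
so some blocks straddle stripe boundaries.  A throwaway sh check,
purely illustrative:

  # A stripe size (in kB) should divide evenly by the 16 kB block
  # size so that no block straddles a stripe boundary.
  for s in 3841 496; do
      echo "stripe ${s}k: remainder $((s % 16)) (0 means block-aligned)"
  done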

>> >>ahd1: PCI error Interrupt
>> >> >>>>>>>>>>>>>>>>>> Dump Card State Begins <<<<<<<<<<<<<<<<<
>> >>ahd1: Dumping Card State at program address 0x94 Mode 0x22
>> This is possibly related.  Does it happen every time?
>
> It did, until I compiled a new 4.9 kernel from the 4.9-RELEASE src/ tree
> from CVS. (Thanks to Scott Long for pointing that out).
> The driver for aic7xxx cards was fixed, and now the message is gone, and
> the system is once again stable.

Ah, OK.

>> The first thing to do is to find whether it's Vinum or the SCSI disks.
>> Can you test with a single SCSI disk (of the same kind, preferably one
>> of the array) instead of a single IDE disk?
>
> I did some tests, this time with Bonnie++, on vinum, SCSI, IDE, and on
> vinum with big and small stripes. I'm busy comparing them ;)
>
> But still, I doubt if bonnie++ is a good test, and I have a hard time
> interpreting the results. I can publish the results somewhere, in a
> while.

Yes, bonnie++ tests the entire system, not just the disk.  Try
benchmarks/rawio.
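
If rawio isn't handy, dd against the device nodes gives a crude
approximation of a raw read test.  The device names below are
examples only, not taken from your setup:

  # Read from the devices directly rather than through files, to
  # separate disk behaviour from filesystem behaviour.
  dd if=/dev/da0 of=/dev/null bs=64k count=16384           # one SCSI member
  dd if=/dev/vinum/varweb of=/dev/null bs=64k count=16384  # the vinum volume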

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.




Re: Vinum & U320 SCSI, slower than UDMA100 IDE ?

2003-12-02 Thread Sander Smeenk
Quoting Greg 'groggy' Lehey ([EMAIL PROTECTED]):
> >Then I did a read test, like this:
> >>date;find . -type f|while read FILE;do cat "$FILE" > /dev/null;done;date
> >I know it's not the most sophisticated test, but at least it shows that
> >on the IDE disk, this 'test' took 24 seconds to complete. On the RAID10
> >array it took a whopping 73 seconds to complete.
> You shouldn't expect any performance improvement with this kind of
> test, since you still need to access the data, and there's no way to
> do it in parallel.

I knew it wasn't the best way to test, but I did expect it to be at
least as fast as the same test on the same data on an IDE disk. And
luckily, you agree ;)

> >>  plex org striped 3841k
> You should choose a stripe size which is a multiple of the block size
> (presumably 16 kB).  Not doing so will have a minor performance
> impact, but nothing like what you describe here.

I might have misunderstood, but on the vinumvm.org website there is
quite a comprehensive discussion of stripe sizes, and they conclude
that larger stripe sizes help increase throughput. They also recommend
a size that is not a power of 2, because a power-of-2 size might cause
critical blocks to end up on the same disk, which decreases
performance...

> >>ahd1: PCI error Interrupt
> >> >>>>>>>>>>>>>>>>>> Dump Card State Begins <<<<<<<<<<<<<<<<<
> >>ahd1: Dumping Card State at program address 0x94 Mode 0x22
> This is possibly related.  Does it happen every time?

It did, until I compiled a new 4.9 kernel from the 4.9-RELEASE src/ tree
from CVS. (Thanks to Scott Long for pointing that out).
The driver for aic7xxx cards was fixed, and now the message is gone, and
the system is once again stable.

I haven't noticed any real change in performance yet.

> The first thing to do is to find whether it's Vinum or the SCSI disks.
> Can you test with a single SCSI disk (of the same kind, preferably one
> of the array) instead of a single IDE disk?

I did some tests, this time with Bonnie++, on vinum, SCSI, IDE, and on
vinum with big and small stripes. I'm busy comparing them ;)

But still, I doubt if bonnie++ is a good test, and I have a hard time
interpreting the results. I can publish the results somewhere, in a
while.

Thanks,
Sander.
-- 
| Coffee (n.), a person who is coughed upon.
| 1024D/08CEC94D - 34B3 3314 B146 E13C 70C8  9BDB D463 7E41 08CE C94D


Re: Vinum & U320 SCSI, slower than UDMA100 IDE ?

2003-12-01 Thread Greg 'groggy' Lehey
On Thursday, 27 November 2003 at 13:44:35 +0100, Sander Smeenk wrote:
> Hi,
>
> I recently installed a dual P4 2.8 GHz with FBSD 4.9 and made a RAID10
> array of the 4 SCSI disks available. The idea was that this would be
> faster to read from than normal IDE disks. As a test I took the
> company's web/ directory, which is 1.6 GB in size and has 22082 files.
>
> I extracted this web/ directory on the IDE disk and on the RAID10
> array, and noticed that extracting it took much longer on RAID10 than it
> did on IDE. I assumed that it was slower on RAID10 because it needed to
> stripe the data across all these disks, mirror it, and whatnot.
>
> Then I did a read test, like this:
>
>> date;find . -type f|while read FILE;do cat "$FILE" > /dev/null;done;date
>
> I know it's not the most sophisticated test, but at least it shows that
> on the IDE disk, this 'test' took 24 seconds to complete. On the RAID10
> array it took a whopping 73 seconds to complete.

You shouldn't expect any performance improvement with this kind of
test, since you still need to access the data, and there's no way to
do it in parallel.
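
For contrast, a test that issues several reads at once can exploit
the stripes; a rough sketch (the file names are placeholders):

  # Concurrent readers let a striped volume serve requests from
  # different disks at the same time, unlike a single cat loop.
  for f in file1 file2 file3 file4; do
      dd if="$f" of=/dev/null bs=64k &
  done
  wait    # let all background readers finish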

> I would understand if the RAID10 array was as fast as IDE, or
> faster,

Correct.

> but I'm a bit amazed by these results.

Agreed, I find them surprising too.

> The relevant volume for this in the vinum config looks like this:
>
>> volume varweb setupstate
>>   plex org striped 3841k

You should choose a stripe size which is a multiple of the block size
(presumably 16 kB).  Not doing so will have a minor performance
impact, but nothing like what you describe here.
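
For illustration, a config along those lines, assuming vinum's usual
config-file syntax (the drive names, device paths and subdisk layout
are invented here, not taken from your setup):

  drive d0 device /dev/da0s1e
  drive d1 device /dev/da1s1e
  drive d2 device /dev/da2s1e
  drive d3 device /dev/da3s1e
  volume varweb setupstate
    plex org striped 496k  # 496 kB = 31 * 16 kB: block-aligned, not a power of 2
      sd length 0 drive d0
      sd length 0 drive d1
    plex org striped 496k  # second striped plex mirrors the first (RAID10)
      sd length 0 drive d2
      sd length 0 drive d3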

> What could cause this major decrease in speed?  Or is this normal
> behaviour, and is the RAID array faster with concurrent reads /
> writes than the IDE disk, but not with single reads / writes?

That's true, as I said above, but it doesn't explain the problems.

> As a possible reason for this slowdown the only thing I can find is
> this, from dmesg:
>
> [ .. later on in the boot process .. ]
>
>> ahd1: PCI error Interrupt
>> >>>>>>>>>>>>>>>>>> Dump Card State Begins <<<<<<<<<<<<<<<<<
>> ahd1: Dumping Card State at program address 0x94 Mode 0x22
>> Card was paused

This is possibly related.  Does it happen every time?

> But the thing is, there's NOTHING connected to ahd1, and the step that
> follows this card dump is detecting disks, which works like a charm.
> All 4 SCSI disks are detected, and show a healthy connection state:
>> da0: 320.000MB/s transfers (160.000MHz, offset 127, 16bit), Tagged Queueing Enabled
>
> Further use of this new server did not reveal any other problems with
> the SCSI controller. Everything seems to work as expected. Except for
> the slowdown in reads / writes.
>
> Can anyone shed some light on this matter? Things I overlooked?
> Things I should check? I tried googling for a solution to the PCI
> error interrupt problem which pauses the SCSI card, but I
> couldn't find anything useful, just a few posts from people who also
> experience this card dump at boot.

The first thing to do is to find whether it's Vinum or the SCSI disks.
Can you test with a single SCSI disk (of the same kind, preferably one
of the array) instead of a single IDE disk?
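
A quick way to make that comparison is raw sequential reads with dd;
the device names are typical ones, adjust them to your system:

  dd if=/dev/da0 of=/dev/null bs=1m count=1024   # a single SCSI disk
  dd if=/dev/ad0 of=/dev/null bs=1m count=1024   # the IDE disk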

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.




Re: Vinum & U320 SCSI, slower than UDMA100 IDE ?

2003-12-01 Thread Sander Smeenk
Quoting Chad Leigh -- Shire.Net LLC ([EMAIL PROTECTED]):
> >I would understand if the RAID10 array was as fast as IDE, or faster,
> >but I'm a bit amazed by these results. The RAID10 array was built on 4x
> >36.7 GB Ultra320 SCSI disks, connected to an Adaptec 39320D Ultra320
> >SCSI adapter, which is a PCI-X card, configured in a PCI-X slot.
> Try creating the same vinum set using 4 ATA100 disks and see what 
> happens when compared to your UeberSCSI vinum set...

Thing is, I don't have 4 disks of the same size available to try this
out. And the results people post on the web generally show that vinum
RAID is way faster than plain disks, so I am very curious as to what I
might have done wrong...

Thanks,
Sander.
-- 
| Why do they call it "chilli" if it's hot?
| 1024D/08CEC94D - 34B3 3314 B146 E13C 70C8  9BDB D463 7E41 08CE C94D


Re: Vinum & U320 SCSI, slower than UDMA100 IDE ?

2003-12-01 Thread Chad Leigh -- Shire.Net LLC
On Nov 27, 2003, at 5:44 AM, Sander Smeenk wrote:

> Hi,
>
> I recently installed a dual P4 2.8 GHz with FBSD 4.9 and made a RAID10
> array of the 4 SCSI disks available. The idea was that this would be
> faster to read from than normal IDE disks. As a test I took the
> company's web/ directory, which is 1.6 GB in size and has 22082 files.
>
> I know it's not the most sophisticated test, but at least it shows that
> on the IDE disk, this 'test' took 24 seconds to complete. On the RAID10
> array it took a whopping 73 seconds to complete.
> I would understand if the RAID10 array was as fast as IDE, or faster,
> but I'm a bit amazed by these results. The RAID10 array was built on 4x
> 36.7 GB Ultra320 SCSI disks, connected to an Adaptec 39320D Ultra320
> SCSI adapter, which is a PCI-X card, configured in a PCI-X slot.

Try creating the same vinum set using 4 ATA100 disks and see what 
happens when compared to your UeberSCSI vinum set...

Chad
