3-4MB/s vinum performance with two Promise PDC20268 UDMA100 controllers

2004-01-15 Thread Bjorn Eikeland
Hi

I had four 160G IDE drives in raid5 on a single controller and it worked 
just fine; I never benchmarked it since it was faster than the network 
anyway. But after adding a second controller card and two more drives, the 
new array has terrible write performance.

I've tried various stripe and block sizes in desperation, but that didn't 
help. Then I tried assigning the same IRQ to both controller cards in 
case interrupts were causing the slowdown, so I set both PCI slots to 
use IRQ 3 in the BIOS (FreeBSD wants IRQ 3 for a non-existent sio1 port, 
so I figured that one would be 'free'?), but despite my BIOS settings the 
cards still show up in dmesg with IRQ 21 and 22.

So I looked through the handbook and tried setting the IRQ in 
/boot/device.hints, both as hint.atapci.x.irq=3 and hint.ata.x.irq=3, 
but this didn't work either.
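
Spelled out in /boot/device.hints syntax, the attempts looked roughly like 
this (x stands for the unit number; whether such hints can override PCI 
interrupt routing at all is doubtful, since on 5.x the routing normally 
comes from the BIOS/ACPI tables):

# in /boot/device.hints -- x is the controller/channel unit number
hint.atapci.x.irq="3"
hint.ata.x.irq="3"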

The problem is the same in FreeBSD 5.1 and 5.2 (output below is from 5.2):

home# dmesg | grep atapci
atapci0: Promise PDC20268 UDMA100 controller port 
0xb000-0xb00f,0xb400-0xb403,0xb800-0xb807,0xd000-0xd003,0xd400-0xd407 mem 
0xf900-0xf9003fff irq 21 at device 9.0 on pci1
atapci0: [MPSAFE]
ata2: at 0xd400 on atapci0
ata3: at 0xb800 on atapci0
atapci1: Promise PDC20268 UDMA100 controller port 
0x9400-0x940f,0x9800-0x9803,0xa000-0xa007,0xa400-0xa403,0xa800-0xa807 mem 
0xf880-0xf8803fff irq 22 at device 10.0 on pci1
atapci1: [MPSAFE]
ata4: at 0xa800 on atapci1
ata5: at 0xa000 on atapci1
atapci2: Intel ICH2 UDMA100 controller port 0x8800-0x880f at device 31.1 
on pci0
ata0: at 0x1f0 irq 14 on atapci2
ata1: at 0x170 irq 15 on atapci2

Any thoughts anyone?

Bjorn


Re: 3-4MB/s vinum performance with two Promise PDC20268 UDMA100 controllers

2004-01-15 Thread jason
Bjorn Eikeland wrote:

I had four 160G IDE drives in raid5 on a single controller and it 
worked just fine; I never benchmarked it since it was faster than the 
network anyway. But after adding a second controller card and two more 
drives, the new array has terrible write performance.

[ ... ]

Any thoughts anyone?

Bjorn

Don't assign the same IRQ to two devices, that's a conflict!  You may not 
have used Win95, but you don't want to do that.  I know raid 5 is slower 
than raid 1, but I don't remember any numbers.  Also, the more complex 
you make the system, the slower it will go, hence raid 5 being slower than 
raid 1.  Also, PCI is a shared bus, meaning one device talks at a time, so 
maybe, just maybe, if your chipset has a PCI bridge because you have 
something like 8 slots, or the maker was real kind, you could try card 1 
in, say, slot 2 and card 2 in slot 6?  You may be able to get simultaneous 
writes and reads that way.  The best option is PCI-X, or a controller that 
supports 8 drives on its own.  Also try adjusting the PCI latency timer; 
do a Google search on it, I hear 95-128 clocks is good.  I just noticed 
the built-in IDE is on pci0 while the rest are on pci1, so there's a 
bridge you can use.  If you can get it set up, have two drives connected 
to the onboard IDE and the rest to your cards.
Jason



Re: 3-4MB/s vinum performance with two Promise PDC20268 UDMA100 controllers

2004-01-15 Thread Greg 'groggy' Lehey
On Thursday, 15 January 2004 at 10:44:20 +0100, Bjorn Eikeland wrote:
 Hi

 I had four 160G IDE drives in raid5 on a single controller and it worked
 just fine, never benchmarked it since it was faster than the network
 anyway. But when adding a second controller card and two more drives the
 new array has a terrible write performance.
 
 ...

 Any thoughts anyone?

How about some information about your configuration and how you
measured the performance?

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers.




Re: 3-4MB/s vinum performance with two Promise PDC20268 UDMA100 controllers

2004-01-15 Thread Bjorn Eikeland
On Thu, 15 Jan 2004 09:43:01 -0200, jason [EMAIL PROTECTED] wrote:

Bjorn Eikeland wrote:

[ ... ]

Don't assign the same IRQ to two devices, that's a conflict! [ ... ] 
Also try adjusting the PCI latency timer; do a Google search on it, I 
hear 95-128 clocks is good.  I just noticed the built-in IDE is on pci0 
while the rest are on pci1, so there's a bridge you can use.  If you can 
get it set up, have two drives connected to the onboard IDE and the rest 
to your cards.
Jason
Thank you for your thoughts Jason! (Your penny is in the mail ;)

About the IRQ thing: I think I read (while reading up on bridges) that 
PCI interrupts are level triggered (as opposed to edge triggered), and 
thus two NICs sharing an interrupt and asserting it at the same time would 
only cause one context switch - so I figured it was worth a try.

As for the PCI bus etc., all three PCI slots are on pci1 - and the 
secondary onboard IDE channel gives me read and write errors (the same 
type as for a bad UDMA100 cable - but I'm sure it's the controller, as the 
cable and drives work fine elsewhere). I've found some info on PCI latency 
but will try it tomorrow as the box is headless - I just made some 
reference measurements tonight.
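
(For anyone wanting to poke at the latency timer, a rough sketch only - 
the selectors below come from the dmesg bus/slot numbers above, and the 
exact selector syntax and availability of -r may differ between releases, 
so check pciconf(8) first:)

pciconf -l                  # list devices with their pciN:slot:func selectors
pciconf -r pci1:9:0 0x0c    # dword at 0x0c; the second byte is the latency timer
pciconf -r pci1:10:0 0x0c
# some BIOSes also expose a "PCI Latency Timer" setting directly; the
# 95-128 clock range mentioned earlier in the thread is the usual advice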

But I actually remembered that the previous setup wasn't raid5 - the first 
four drives were striped, since it was a temporary arrangement while 
waiting for the 2nd controller - however, I do have a Linux machine at 
home with 4 of the same drives (all on the onboard UDMA100 controller) 
running raid5, and it does perform better (I can't do any measurements 
now, but it does accept data at about 60Mbps over an SMB share and I think 
the client maxed out at that).

I've done some measurements on the drives with different setups - the 
results were quite long, so I've posted them on a web page instead: 
http://www.eikeland.info/bjorn/archive/040117vinumperf1.txt (Test was dd 
count=1 bs=65536 if=/dev/zero of=/dev/vinum/test)
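
(For reference, the tests were along these lines - the count here is 
illustrative, picked to give a file in the 512-625M range rather than the 
exact figure used, and the read command is an assumed counterpart:)

# sequential write onto the vinum volume (count is illustrative)
dd if=/dev/zero of=/dev/vinum/test bs=65536 count=10000
# corresponding sequential read back
dd if=/dev/vinum/test of=/dev/null bs=65536 count=10000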

Briefly summarized: any raid5 (with 3, 4 or 6 drives) writes at ~4M/s and 
reads at 27M, 29M and 32M/s respectively. A single drive reads and writes 
at ~40M/s. Raid0 (4 and 6 drives) writes at ~50M/s and reads at 50M and 
62M/s respectively. Tests were done from /dev/zero with a 625M or 512M 
test file.

Is this really the write performance I can expect from a raid5 array such 
as this? I knew it wouldn't be blazingly fast at writing, but I was quite 
sure it would at least keep up with what the 100Mbit network has to offer.

(Should I maybe move this over to freebsd-performance?)

-Bjorn

Vinum performance

2003-08-19 Thread Niklas Saers Mailinglistaccount
Hi all,
I was wondering if anyone can tell me about their experience with Vinum 
and its performance in RAID5 setups? I've built a vinum raid of 4 160Gb 
disks that each sustain about 60Mb/sec on a dd if=/dev/zero of=/dev/adNd. 
However, when I set up my raid, performance drops to 6Mb/sec, and with 
only three disks it drops to 4.5Mb/sec. Is this usual? Because at this 
speed I cannot run the system at all. Any good way of tuning the system?

Cheers

   Nik


Re: Vinum performance

2003-08-19 Thread Greg 'groggy' Lehey
On Tuesday, 19 August 2003 at  9:56:13 +0200, Niklas Saers Mailinglistaccount wrote:
 Hi all,
 I was wondering if anyone can tell me about their experience with Vinum
 and its performance in RAID5 setups? I've built a vinum raid of 4 160Gb
 disks that each sustain about 60Mb/sec on a dd if=/dev/zero of=/dev/adNd.
 However, when I set up my raid, performance drops to 6Mb/sec, and with
 only three disks it drops to 4.5Mb/sec. Is this usual?

No.  You should expect about ¼ the write performance of a single disk
under these rather contrived circumstances.
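
(The usual back-of-the-envelope reasoning behind that figure: each small 
RAID-5 write turns into a read-modify-write cycle.)

small write  =  read old data + read old parity
              + write new data + write new parity     (4 disk I/Os)
expected write rate  ~  single-disk write rate / 4
e.g. disks doing ~60Mb/sec on their own  ->  ~15Mb/sec at best through
the volume, before seeks and driver overhead take their share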

 Because with this speed I cannot run the system at all.

Have you built this system for writing single sectors to a volume
using dd?

 Any good way of tuning the system?

There are some ideas in the man page and the web site.  But in
general, RAID-5 isn't designed for heavy write accesses.

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers




Re: Vinum performance

2003-06-03 Thread Jaye Mathisen
Hmmm, I thought the consensus was to use a weird stripe size to avoid
getting all the inode/superblock stuff on 1 disk.

I seem to recall somebody saying something about using stripe sizes
like 273k and such...

On Fri, May 30, 2003 at 07:48:45PM +0930, Greg 'groggy' Lehey wrote:
 [ ... ]
 FWIW, the stripe size should be a multiple of the file system block
 size.  Yes, the man pages don't necessarily say that, but it's also
 not so important.




Re: Vinum performance

2003-06-03 Thread Greg 'groggy' Lehey
On Monday,  2 June 2003 at 17:48:23 -0700, Jaye Mathisen wrote:
 On Fri, May 30, 2003 at 07:48:45PM +0930, Greg 'groggy' Lehey wrote:
 [ ... ]
 FWIW, the stripe size should be a multiple of the file system block
 size.

 Hmmm, I thought the consensus was to use a weird stripe size to avoid
 getting all the inode/superblock stuff on 1 disk.

Correct, for some definition of weird.

 I seem to recall somebody saying something about using stripe sizes
 like 273k and such...

Yes, I once said that.  Then it occurred to me that many transfers are
complete file system blocks.  If you have a stripe size which isn't a
multiple of the block size, you'll end up with more transfers split
across two devices, which has a negative effect on performance.
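
(A quick worked example with assumed numbers:)

file system block = 16k, stripe = 273k:
    273k / 16k = 17.0625, so stripe boundaries fall inside blocks and
    roughly one 16k block per stripe ends up split across two disks
    (about 1 transfer in 17)
file system block = 16k, stripe = 272k or 288k (multiples of 16k):
    no aligned block ever crosses a stripe boundary, so nothing is split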

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers




Re: Vinum performance

2003-05-31 Thread Chuck Swiger
Per olof Ljungmark wrote:
[ ... ]
 The bottleneck is write performance; according to the documentation this
 is normally a weak point in Vinum/raid5 setups.

It's not vinum; RAID-5 write performance is going to be relatively slow 
even with hardware parity/XOR support.

 The volume will be used for storing temporary files with sizes in the
 1-4MB range, a few hundred at a time. Reads are fine but writes a bit
 slow.

What ratio of reads to writes do you expect?

 Copying 170 files total 319MB TO the Vinum volume takes about 2m45s.
 Copying same files FROM the Vinum volume to another volume on a hardware
 raid5 controller takes 43s.

A 3-1 ratio in speeds for software RAID-5 versus hardware doesn't strike 
me as being very wrong.

 This machine is not in production yet so I can still make changes to the
 configuration. The current block size is 16384 and the stripe size 419k.
 I assume it would be a good idea to change that to for example 491,520?

...or try a few stripe sizes, such as 64K or 128K.

-Chuck



Re: Vinum performance

2003-05-31 Thread Per olof Ljungmark
 What ratio of reads to writes do you expect?

Had no expectations really, I'm new to Vinum.

 Copying 170 files total 319MB TO the Vinum volume takes about 2m45s.
 Copying same files FROM the Vinum volume to another volume on a hardware
 raid5 controller takes 43s.

 A 3-1 ratio in speeds for software RAID-5 versus hardware doesn't strike
 me as being very wrong.

 This machine is not in production yet so I can still make changes to the
 configuration. The current block size is 16384 and the stripe size 419k.
 I assume it would be a good idea to change that to for example 491,520?

 ...or try a few stripe sizes, such as 64K or 128K.

Chuck,

Thanks for your comments. I felt it was a good idea just to check before 
we start to use this piece seriously, since afterwards I will not be able 
to touch it to change the configuration. I will try different stripe 
sizes as per your advice.

/per olof



Re: Vinum performance

2003-05-31 Thread Drew Tomlinson
- Original Message - 
From: Per olof Ljungmark [EMAIL PROTECTED]
To: Chuck Swiger [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Friday, May 30, 2003 8:41 AM


  [ ... ]

  ...or try a few stripe sizes, such as 64K or 128K.

FWIW - The man page suggests avoiding powers of 2 and a minimum 128K 
stripe size.

Cheers,

Drew






Re: Vinum performance

2003-05-30 Thread Greg 'groggy' Lehey
On Friday, 30 May 2003 at 11:16:06 +0200, Per olof Ljungmark wrote:
 Using Vinum on 4.7-RELEASE-p10, I wonder what can be done to optimize
 performance. So far I am not impressed but perhaps I did not configure
 Vinum optimal, grateful for any hints thanks.

You haven't said what your problem is.  It definitely depends on your
application (which may simply be the way you measure it).

FWIW, the stripe size should be a multiple of the file system block
size.  Yes, the man pages don't necessarily say that, but it's also
not so important.

Greg
--
When replying to this message, please copy the original recipients.
If you don't, I may ignore the reply or reply to the original recipients.
For more information, see http://www.lemis.com/questions.html
See complete headers for address and phone numbers




Re: Vinum performance

2003-05-30 Thread Per olof Ljungmark
Greg 'groggy' Lehey wrote:
 On Friday, 30 May 2003 at 11:16:06 +0200, Per olof Ljungmark wrote:

  Using Vinum on 4.7-RELEASE-p10, I wonder what can be done to optimize
  performance. So far I am not impressed but perhaps I did not configure
  Vinum optimal, grateful for any hints thanks.

 You haven't said what your problem is.  It definitely depends on your
 application (which may simply be the way you measure it).
 FWIW, the stripe size should be a multiple of the file system block
 size.  Yes, the man pages don't necessarily say that, but it's also
 not so important.
Sorry for the minimal info given in the initial post -

The bottleneck is write performance; according to the documentation this 
is normally a weak point in Vinum/raid5 setups.

The volume will be used for storing temporary files with sizes in the 
1-4MB range, a few hundred at a time. Reads are fine but writes a bit slow.

Copying 170 files total 319MB TO the Vinum volume takes about 2m45s.
Copying same files FROM the Vinum volume to another volume on a hardware 
raid5 controller takes 43s.

This machine is not in production yet so I can still make changes to the 
configuration. The current block size is 16384 and the stripe size 419k. 
I assume it would be a good idea to change that to for example 491,520?
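
(As an illustration only - not the actual config in use - a four-disk 
vinum setup along those lines might look like the following, using a 480k 
stripe (491,520 bytes), which is a multiple of the 16384-byte block size 
and not a power of two; drive names and device paths are placeholders:)

# sketch: four-disk raid5 plex with a 480k stripe (placeholder drives)
drive d1 device /dev/da1s1e
drive d2 device /dev/da2s1e
drive d3 device /dev/da3s1e
drive d4 device /dev/da4s1e
volume raid
  plex org raid5 480k
    sd length 0 drive d1
    sd length 0 drive d2
    sd length 0 drive d3
    sd length 0 drive d4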

Could you please confirm whether I should look for further improvements, 
or whether the performance is about what one should expect?

Many thanks,
Per olof