Bjorn Eikeland wrote:

Hi

I had four 160G IDE drives in RAID 5 on a single controller and it worked just fine; I never benchmarked it since it was faster than the network anyway. But after adding a second controller card and two more drives, the new array has terrible write performance.

I've tried various stripe and block sizes in desperation, but that didn't help. Then I tried assigning the same IRQ to both controller cards in case interrupts were causing the slowdown, so I set both PCI slots to use IRQ 3 in the BIOS (FreeBSD wants IRQ 3 for a non-existent sio1 port, so I figured that one would be 'free'?). But despite my BIOS settings, the cards still show up in dmesg with IRQs 21 and 22.

So I looked through the Handbook and tried setting the IRQ in /boot/device.hints, both as hint.atapci.x.irq="3" and hint.ata.x.irq="3", but that didn't work either.
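
Spelled out with the unit numbers from the dmesg below, those hints would look roughly like this (the exact unit numbers are just read off the dmesg, not something I can vouch for beyond that):

  hint.atapci.0.irq="3"
  hint.atapci.1.irq="3"

and

  hint.ata.2.irq="3"
  hint.ata.3.irq="3"
  hint.ata.4.irq="3"
  hint.ata.5.irq="3"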

The problem is the same in FreeBSD 5.1 and 5.2 (the output below is from 5.2):

home# dmesg | grep atapci
atapci0: <Promise PDC20268 UDMA100 controller> port 0xb000-0xb00f,0xb400-0xb403,0xb800-0xb807,0xd000-0xd003,0xd400-0xd407 mem 0xf9000000-0xf9003fff irq 21 at device 9.0 on pci1
atapci0: [MPSAFE]
ata2: at 0xd400 on atapci0
ata3: at 0xb800 on atapci0
atapci1: <Promise PDC20268 UDMA100 controller> port 0x9400-0x940f,0x9800-0x9803,0xa000-0xa007,0xa400-0xa403,0xa800-0xa807 mem 0xf8800000-0xf8803fff irq 22 at device 10.0 on pci1
atapci1: [MPSAFE]
ata4: at 0xa800 on atapci1
ata5: at 0xa000 on atapci1
atapci2: <Intel ICH2 UDMA100 controller> port 0x8800-0x880f at device 31.1 on pci0
ata0: at 0x1f0 irq 14 on atapci2
ata1: at 0x170 irq 15 on atapci2


Any thoughts anyone?

Bjorn


Don't assign the same IRQ to two devices - that's a conflict. You may never have used Win95, but you don't want to do that.

I know RAID 5 is slower than RAID 1, but I don't remember any numbers. The more complex you make the system, the slower it goes, hence RAID 5 being slower than RAID 1.

Also, PCI is a shared bus, meaning only one device talks at a time. If your chipset has a PCI bridge (because the board has something like 8 slots, or the maker was real kind), you could try card 1 in, say, slot 2 and card 2 in slot 6 - you may be able to get simultaneous writes and reads that way. The best option is PCI-X, or a single controller that supports 8 drives on its own.

Also try adjusting the PCI latency/wait states - do a Google search on it; I hear 95-128 clocks is good.

I just noticed the built-in IDE is on pci0 while the rest are on pci1, so there's a bridge you can use. If you can get it set up, connect two drives to the onboard IDE and the rest to your cards.
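
For the latency tweak, a rough sketch of what that might look like with pciconf - the bus/slot/function values are taken from the dmesg above, 0x0d is the standard PCI latency-timer byte, and the mount point and sizes in the dd test are only examples; double-check the option and selector syntax against pciconf(8) before trying it:

  # read the latency timer on the first Promise card (bus 1, device 9, function 0)
  pciconf -r -b pci1:9:0 0x0d

  # bump it to 0x60 (96 PCI clocks), roughly the 95-128 range mentioned above
  pciconf -w -b pci1:9:0 0x0d 0x60

  # crude sequential-write test on the array to compare before/after
  dd if=/dev/zero of=/mnt/array/testfile bs=1m count=1000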
Jason

