Re: Speed Problems

2007-10-03 Thread Tony Sarendal
New set of tests done with AMD64 UP kernel.

http://www.layer17.net/openbsd-router-intro.html

/Tony



Re: Speed Problems

2007-10-03 Thread Daniel Ouellet

Tony Sarendal wrote:



On 10/3/07, *Daniel Ouellet* <[EMAIL PROTECTED]> wrote:


Claudio Jeker wrote:
 > Could you add the dmesg of the test box to the website?
 > Do you have any other network cards you could test? (I'm mostly
interested
 > in bnx but sk, msk, bge and nfe could be interesting as well).

This box if the M2 version also come with nfe cards as well, but there
is issue with it at the moment. dmesg available:

http://cvs.openbsd.org/cgi-bin/query-pr-wrapper?full=yes&numbers=5587



Dmesg's are on the site now.
http://www.layer17.net/openbsd-test-setup.html 



Note that the box actually has 8Gigs of memory.

Since I'm off-site I had to get someone else to powercycle the box for me
to wake up the nfe I use as management interface, so the MP dmesg is
from the logs.

Running with the SP kernel the nfe's seem to work ok.


You can't manually fix the media option on that card, and if you run:

ifconfig -m nfe0

you will see options such as:
media 1000baseSX
media 1000baseSX mediaopt full-duplex

which are obviously wrong.
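
On a card whose driver reports its media list correctly, this is how you
would normally inspect and pin the media; a sketch only (nfe0 and 1000baseT
are assumptions, and as noted above it does not help on this particular
card):

ifconfig -m nfe0                                    # list supported media
ifconfig nfe0 media 1000baseT mediaopt full-duplex  # pin speed/duplex (as root)
ifconfig nfe0 media autoselect                      # revert to autonegotiation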

Also, there is some issue with the AMD64 MP kernel that makes the box
crash when you push a lot of traffic through it.


There are lots of comments and tests in the archives as well. The i386
kernel looks OK so far, except that nfe is still bad no matter what;
the AMD64 kernel, however, is not really stable, and if you turn ACPI
on, well...



I'm running the same set of tests with the SP kernel right now.
The 64 byte frames issue in the throughput/latency test looks to be 
gone... cross fingers...


I have 4 of these and sadly still haven't put any in production yet
because of various stability issues with them.


So I wouldn't deploy one as a router right now, but YMMV I guess.

Test well before you do.

Daniel



Re: Speed Problems

2007-10-03 Thread Tony Sarendal
On 10/3/07, Daniel Ouellet <[EMAIL PROTECTED]> wrote:
>
> Claudio Jeker wrote:
> > Could you add the dmesg of the test box to the website?
> > Do you have any other network cards you could test? (I'm mostly
> interested
> > in bnx but sk, msk, bge and nfe could be interesting as well).
>
> This box if the M2 version also come with nfe cards as well, but there
> is issue with it at the moment. dmesg available:
>
> http://cvs.openbsd.org/cgi-bin/query-pr-wrapper?full=yes&numbers=5587


Dmesg's are on the site now.
http://www.layer17.net/openbsd-test-setup.html

Note that the box actually has 8Gigs of memory.

Since I'm off-site I had to get someone else to powercycle the box for me
to wake up the nfe I use as management interface, so the MP dmesg is
from the logs.

Running with the SP kernel the nfe's seem to work ok.

I'm running the same set of tests with the SP kernel right now.
The 64 byte frames issue in the throughput/latency test looks to be gone...
cross fingers...

/Tony




Daniel



Re: Speed Problems

2007-10-03 Thread Daniel Ouellet

Claudio Jeker wrote:

Could you add the dmesg of the test box to the website?
Do you have any other network cards you could test? (I'm mostly interested
in bnx but sk, msk, bge and nfe could be interesting as well).


This box, the M2 version, also comes with nfe cards, but there is an
issue with them at the moment. dmesg available:


http://cvs.openbsd.org/cgi-bin/query-pr-wrapper?full=yes&numbers=5587

Daniel



Re: Speed Problems

2007-10-03 Thread Tony Sarendal
On 10/3/07, Claudio Jeker <[EMAIL PROTECTED]> wrote:
>
> On Tue, Oct 02, 2007 at 08:46:43PM +0100, Tony Sarendal wrote:
> > On 9/27/07, Tony Sarendal <[EMAIL PROTECTED]> wrote:
> > >
> > > On 9/27/07, Claudio Jeker <[EMAIL PROTECTED]> wrote:
>
> ...
>
> >
> > I hooked up the X4100 to one of our testers and ran some basic tests
> just to
> > get
> > familiar with the tester.
> >
> > I put up the results of the first run of tests on
> > http://www.layer17.net/openbsd-router-intro.html
> >
> > All opinions are welcome, please be gentle.
> >
> > I hope to be able to test the 1k vlan interface firewall setup later,
> > I just need to baseline a bit first.
> >
>
> Quite interesting numbers. I guess that em(4) does still to many pci
> read/write accesses and so 64byte packet storms are mostly limited by the
> PCI bus access delay. My gut feeling is that the TX path is causing the
> slow down (enqueing happens on a per packet basis and that is porbably not
> optimal for current high speed cards).
> Could you add the dmesg of the test box to the website?
> Do you have any other network cards you could test? (I'm mostly interested
> in bnx but sk, msk, bge and nfe could be interesting as well).


I'll put up the dmesg when I'm in the office again.
The nfe port I do management over has jammed, a little bios tweaking
might fix that.

The only cards I have access to at the moment are the builtins,
2xem and 2xnfe.

The packet drops of 64-byte frames in the throughput/latency test are a bit
confusing; I can't see that behaviour if I slowly ramp up from 1 kpps.
Before I do tests with a more advanced config I want the basic ones to give
a result I understand, so I'll try to figure that one out.

/Tony



Re: Speed Problems

2007-10-03 Thread Claudio Jeker
On Tue, Oct 02, 2007 at 08:46:43PM +0100, Tony Sarendal wrote:
> On 9/27/07, Tony Sarendal <[EMAIL PROTECTED]> wrote:
> >
> > On 9/27/07, Claudio Jeker <[EMAIL PROTECTED]> wrote:

...

> 
> I hooked up the X4100 to one of our testers and ran some basic tests just to
> get
> familiar with the tester.
> 
> I put up the results of the first run of tests on
> http://www.layer17.net/openbsd-router-intro.html
> 
> All opinions are welcome, please be gentle.
> 
> I hope to be able to test the 1k vlan interface firewall setup later,
> I just need to baseline a bit first.
> 

Quite interesting numbers. I guess that em(4) still does too many PCI
read/write accesses, so 64-byte packet storms are mostly limited by the
PCI bus access delay. My gut feeling is that the TX path is causing the
slowdown (enqueuing happens on a per-packet basis, and that is probably
not optimal for current high-speed cards).
Could you add the dmesg of the test box to the website?
Do you have any other network cards you could test? (I'm mostly interested
in bnx but sk, msk, bge and nfe could be interesting as well).
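
For reference on those 64-byte numbers: a minimum-size frame plus preamble
and inter-frame gap occupies 84 bytes on the wire, so gigabit line rate is
roughly 1.49 Mpps per direction (my arithmetic, not a measured figure):

$ echo '1000000000 / ((64 + 8 + 12) * 8)' | bc
1488095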

-- 
:wq Claudio



Re: Speed Problems

2007-10-02 Thread Tony Sarendal
On 9/27/07, Tony Sarendal <[EMAIL PROTECTED]> wrote:
>
> On 9/27/07, Claudio Jeker <[EMAIL PROTECTED]> wrote:
>
> > On Thu, Sep 27, 2007 at 09:54:00AM +0100, Tony Sarendal wrote:
> > > On 9/27/07, Henning Brauer <[EMAIL PROTECTED]> wrote:
> > > >
> > > > * Tony Sarendal < [EMAIL PROTECTED]> [2007-09-27 10:36]:
> > > > > On 9/26/07, Tom Bombadil <[EMAIL PROTECTED]> wrote:
> > > > > > > net.inet.ip.ifq.maxlen defines how many packets can be queued
> > in the
> > > > IP
> > > > > > > input queue before further packets are dropped. Packets
> > comming from
> > > > the
> > > > > > > network card are first put into this queue and the actuall IP
> > packet
> > > > > > > processing is done later. Gigabit cards with interrupt
> > mitigation
> > > > may
> > > > > > spit
> > > > > > > out many packets per interrupt plus heavy use of pf can
> > slowdown the
> > > > > > > packet forwarding. So it is possible that a heavy burst of
> > packets
> > > > is
> > > > > > > overflowing this queue. On the other hand you do not want to
> > use a
> > > > too
> > > > > > big
> > > > > > > number because this has negative effects on the system
> > (livelock
> > > > etc).
> > > > > > > 256 seems to be a better default then the 50 but additional
> > tweaking
> > > > may
> > > > > > > allow you to process a few packets more.
> > > > > > Thanks Claudio...
> > > > > > In the link that Stuart posted here, Henning mentions 256 times
> > the
> > > > > > number of interfaces:
> > > > > > http://archive.openbsd.nu/?ml=openbsd-tech&a=2006-10&t=2474666
> > > > > Is that per physical or per logical interface  ?
> > > >
> > > > it is a rule of thumb. an approximation. for typical cases.
> > > >
> > > > > [EMAIL PROTECTED] ifconfig -a | grep ^vlan | wc -l
> > > > > 4094
> > > >
> > > > that is not a typical case.
> > > > you do not wanna set your ifqlen to 1048064 :)
> > > >
> > > > the highest qlen I have is somewhere around 2500.
> > > > where the high watermark is... I cannot really say. I'd be careful
> > > > going far higher than the above.
> > >
> > >
> > >
> > > I meant if the input queue length was per physical or logical
> > interface.
> > > There are places where I actually need boxes with more than 1k vlan
> > > subinterfaces.
> > > If net.inet.ip.ifq.maxlen is per logical interface I see some
> > potentional
> > > issues under load.
> > >
> >
> > Henning's hint of 256 * num of interfaces is for physical interfaces.
> > The virtual interfaces will just see a subset of the packets comming
> > from
> > the real ones and so they can be ignored in that rule of thumb.
> >
> > Do you have systems with 1000 and more interfaces in production?
> > Any performance issues? Many interface related operations are O(N).
> > Fixing this is another item on my network stack todo list -- as usual
> > feel
> > free to send me diffs :)
>
>
> It's still in design/test phase. I'm going to use an Ixia tester and an
> X4100
> if I find the time to test it, this is a little pet project of my own.
> If I get that far I'll let you know.
>
> /Tony
>

I hooked up the X4100 to one of our testers and ran some basic tests just
to get familiar with the tester.

I put up the results of the first run of tests on
http://www.layer17.net/openbsd-router-intro.html

All opinions are welcome, please be gentle.

I hope to be able to test the 1k vlan interface firewall setup later,
I just need to baseline a bit first.

/Tony



Re: Speed Problems Part 2

2007-10-01 Thread rezidue
I decided to pump maxlen up to 8192 to see what would happen, and I thought
it had actually stopped the drops.  Unfortunately I was only under the
impression they had stopped because I believe this was keeping the count
from increasing:

WARNING: mclpool limit reached; increase kern.maxclusters

I've pumped kern.maxclusters up to about 2.5x its original value and my
drops have begun to increment again, along with pf congestion, which seems
to go hand in hand.

net.inet.ip.ifq.len=0
net.inet.ip.ifq.maxlen=8192
net.inet.ip.ifq.drops=1566435

I'm going to double it again and will report my findings shortly.  Again
though, this seems excessive.
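
The mclpool warning means the mbuf cluster pool itself is being exhausted,
which is a separate limit from the input queue.  One way to see which one
you are hitting is to watch both at the same time (a rough sketch; the
exact netstat -m wording varies by release):

netstat -m                               # current/peak/max mbuf cluster usage
sysctl kern.maxclusters net.inet.ip.ifq  # pool limit plus queue counters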



On 10/1/07, rezidue <[EMAIL PROTECTED]> wrote:
>
>
>
> I've now got both of my edge routers running 4.1 and there is definitely a
> speed improvement over all.  Unfortunately I can't seem to get drops to stop
> occurring and after hitting a traffic peak for the past three hours it looks
> like this:
>
> net.inet.ip.ifq.len=0
> net.inet.ip.ifq.maxlen=4096
> net.inet.ip.ifq.drops=1289636



Re: Speed Problems Part 2

2007-10-01 Thread rezidue
On 9/26/07, Stuart Henderson <[EMAIL PROTECTED]> wrote:
>
> On 2007/09/26 13:50, rezidue wrote:
> >
> > >Order a 4.2 CD and install it as soon as you get it. 4.2 removed
> many
> > >bottlenecks in the network stack. In the meanwhile check out for
> the ip
> > >ifq len:
> > ># sysctl net.inet.ip.ifq
> > >net.inet.ip.ifq.len=0
> > >net.inet.ip.ifq.maxlen=256
> > >net.inet.ip.ifq.drops=0
> >
> > >I bet your drops are non 0 and the maxlen is to small (256 is a
> better
> > >value for gigabit firewalls/routers).
> > >--
> > >:wq Claudio
> >
> > I've gone through the 4.1 and 4.2 changes in hopes I would find some
> clear
> > reason as to why I'm having these issues but I've not seen anything.
>
> At the last hackathon, there was a lot of work done on profiling and
> optimizing the path through the network stack/PF; you'll see more about
> this at http://www.openbsd.org/papers/cuug2007/mgp00012.html (and the
> following pages).
>
> > What exactly is this queue?  The odd thing is that I report a negative
> > value for drops and it's counting down.
>
> The -ve is because it's a signed integer and has, on your system,
> exceeded the maximum value since bootup..
>
> > net.inet.ip.ifq.drops=-1381027346
> > I've put maxlen=256 and it seems to have slowed the count down.
>
> You might like to try bumping it up until it stops increasing (uh,
> decreasing. :-) And re-investigate when you get 4.2 (or make any other
> changes to the system).
>
>
I've now got both of my edge routers running 4.1 and there is definitely a
speed improvement overall.  Unfortunately I can't seem to get the drops to
stop occurring, and after hitting a traffic peak for the past three hours
it looks like this:

net.inet.ip.ifq.len=0
net.inet.ip.ifq.maxlen=4096
net.inet.ip.ifq.drops=1289636

About 100k of those existed before I managed to get drops to stop over the
weekend with maxlen=2048, but that's only a small portion of the total
count now.  I'm afraid to raise maxlen further, but I'm tempted to see what
value I would need to make this stop.  The box peaks at about 180 Mbps,
30-40k pps, and I still have plenty of resources available.  In top I've
only seen interrupts on cpu0, where they sit between 30-35% and drop back
to 0%.
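
To see where the interrupt load actually lands while the box is at peak,
the two tools already mentioned in this thread are enough (nothing here is
specific to this box):

vmstat -i          # totals and rates per interrupt source
systat vmstat      # live view of the same counters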

Here is a dmesg:

 revision 1.0
uhub0 at usb0
uhub0: AMD OHCI root hub, rev 1.00/1.00, addr 1
uhub0: 3 ports with 3 removable, self powered
ohci1 at pci1 dev 0 function 1 "AMD 8111 USB" rev 0x0b: irq 9, version 1.0,
legacy support
usb1 at ohci1: USB revision 1.0
uhub1 at usb1
uhub1: AMD OHCI root hub, rev 1.00/1.00, addr 1
uhub1: 3 ports with 3 removable, self powered
pciide0 at pci1 dev 5 function 0 "CMD Technology SiI3114 SATA" rev 0x02: DMA
pciide0: using irq 10 for native-PCI interrupt
pciide0: port 0: device present, speed: 1.5Gb/s
wd0 at pciide0 channel 0 drive 0: 
wd0: 16-sector PIO, LBA48, 238475MB, 488397168 sectors
wd0(pciide0:0:0): using BIOS timings, Ultra-DMA mode 6
pciide0: port 1: device present, speed: 1.5Gb/s
wd1 at pciide0 channel 1 drive 0: 
wd1: 16-sector PIO, LBA48, 238475MB, 488397168 sectors
wd1(pciide0:1:0): using BIOS timings, Ultra-DMA mode 6
vga1 at pci1 dev 6 function 0 "ATI Rage XL" rev 0x27
wsdisplay0 at vga1 mux 1: console (80x25, vt100 emulation)
"AMD 8111 LPC" rev 0x05 at pci0 dev 7 function 0 not configured
pciide1 at pci0 dev 7 function 1 "AMD 8111 IDE" rev 0x03: DMA, channel 0
configured to compatibility, channel 1 configured to compatibility
atapiscsi0 at pciide1 channel 0 drive 0
scsibus0 at atapiscsi0: 2 targets
cd0 at scsibus0 targ 0 lun 0:  SCSI0 5/cdrom
removable
cd0(pciide1:0:0): using PIO mode 4, Ultra-DMA mode 2
pciide1: channel 1 disabled (no drives)
"AMD 8111 SMBus" rev 0x02 at pci0 dev 7 function 2 not configured
"AMD 8111 Power" rev 0x05 at pci0 dev 7 function 3 not configured
ppb1 at pci0 dev 10 function 0 "AMD 8131 PCIX" rev 0x12
pci2 at ppb1 bus 2
bge0 at pci2 dev 9 function 0 "Broadcom BCM5704C" rev 0x03, BCM5704 A3
(0x2003): irq 5, address 00:e0:81:40:bd:8e
brgphy0 at bge0 phy 1: BCM5704 10/100/1000baseT PHY, rev. 0
bge1 at pci2 dev 9 function 1 "Broadcom BCM5704C" rev 0x03, BCM5704 A3
(0x2003): irq 10, address 00:e0:81:40:bd:8f
brgphy1 at bge1 phy 1: BCM5704 10/100/1000baseT PHY, rev. 0
"AMD 8131 PCIX IOAPIC" rev 0x01 at pci0 dev 10 function 1 not configured
ppb2 at pci0 dev 11 function 0 "AMD 8131 PCIX" rev 0x12
pci3 at ppb2 bus 1
"AMD 8131 PCIX IOAPIC" rev 0x01 at pci0 dev 11 function 1 not configured
pchb0 at pci0 dev 24 function 0 "AMD AMD64 HyperTransport" rev 0x00
pchb1 at pci0 dev 24 function 1 "AMD AMD64 Address Map" rev 0x00
pchb2 at pci0 dev 24 function 2 "AMD AMD64 DRAM Cfg" rev 0x00
pchb3 at pci0 dev 24 function 3 "AMD AMD64 Misc Cfg" rev 0x00
pchb4 at pci0 dev 25 function 0 "AMD AMD64 HyperTransport" rev 0x00
pchb5 at pci0 dev 25 function 1 "AMD AMD64 Address Map" rev 0x00
pchb6 at pci0 dev 25 function 2 "AMD AMD64 DRAM Cfg" rev 0x00
pchb7 at pci0 dev 25 function 3 "AMD AMD64 Misc Cfg" rev 0x00
isa0 at mainbus0
com0 at isa0 port

Re: Speed Problems

2007-09-27 Thread Tony Sarendal
On 9/27/07, Claudio Jeker <[EMAIL PROTECTED]> wrote:
>
> On Thu, Sep 27, 2007 at 09:54:00AM +0100, Tony Sarendal wrote:
> > On 9/27/07, Henning Brauer <[EMAIL PROTECTED]> wrote:
> > >
> > > * Tony Sarendal <[EMAIL PROTECTED]> [2007-09-27 10:36]:
> > > > On 9/26/07, Tom Bombadil <[EMAIL PROTECTED]> wrote:
> > > > > > net.inet.ip.ifq.maxlen defines how many packets can be queued in
> the
> > > IP
> > > > > > input queue before further packets are dropped. Packets comming
> from
> > > the
> > > > > > network card are first put into this queue and the actuall IP
> packet
> > > > > > processing is done later. Gigabit cards with interrupt
> mitigation
> > > may
> > > > > spit
> > > > > > out many packets per interrupt plus heavy use of pf can slowdown
> the
> > > > > > packet forwarding. So it is possible that a heavy burst of
> packets
> > > is
> > > > > > overflowing this queue. On the other hand you do not want to use
> a
> > > too
> > > > > big
> > > > > > number because this has negative effects on the system (livelock
> > > etc).
> > > > > > 256 seems to be a better default then the 50 but additional
> tweaking
> > > may
> > > > > > allow you to process a few packets more.
> > > > > Thanks Claudio...
> > > > > In the link that Stuart posted here, Henning mentions 256 times
> the
> > > > > number of interfaces:
> > > > > http://archive.openbsd.nu/?ml=openbsd-tech&a=2006-10&t=2474666
> > > > Is that per physical or per logical interface  ?
> > >
> > > it is a rule of thumb. an approximation. for typical cases.
> > >
> > > > [EMAIL PROTECTED] ifconfig -a | grep ^vlan | wc -l
> > > > 4094
> > >
> > > that is not a typical case.
> > > you do not wanna set your ifqlen to 1048064 :)
> > >
> > > the highest qlen I have is somewhere around 2500.
> > > where the high watermark is... I cannot really say. I'd be careful
> > > going far higher than the above.
> >
> >
> >
> > I meant if the input queue length was per physical or logical interface.
> > There are places where I actually need boxes with more than 1k vlan
> > subinterfaces.
> > If net.inet.ip.ifq.maxlen is per logical interface I see some
> potentional
> > issues under load.
> >
>
> Henning's hint of 256 * num of interfaces is for physical interfaces.
> The virtual interfaces will just see a subset of the packets comming from
> the real ones and so they can be ignored in that rule of thumb.
>
> Do you have systems with 1000 and more interfaces in production?
> Any performance issues? Many interface related operations are O(N).
> Fixing this is another item on my network stack todo list -- as usual feel
> free to send me diffs :)


It's still in the design/test phase. I'm going to use an Ixia tester and an
X4100 if I find the time to test it; this is a little pet project of my own.
If I get that far I'll let you know.

/Tony



Re: Speed Problems

2007-09-27 Thread Claudio Jeker
On Thu, Sep 27, 2007 at 09:54:00AM +0100, Tony Sarendal wrote:
> On 9/27/07, Henning Brauer <[EMAIL PROTECTED]> wrote:
> >
> > * Tony Sarendal <[EMAIL PROTECTED]> [2007-09-27 10:36]:
> > > On 9/26/07, Tom Bombadil <[EMAIL PROTECTED]> wrote:
> > > > > net.inet.ip.ifq.maxlen defines how many packets can be queued in the
> > IP
> > > > > input queue before further packets are dropped. Packets comming from
> > the
> > > > > network card are first put into this queue and the actuall IP packet
> > > > > processing is done later. Gigabit cards with interrupt mitigation
> > may
> > > > spit
> > > > > out many packets per interrupt plus heavy use of pf can slowdown the
> > > > > packet forwarding. So it is possible that a heavy burst of packets
> > is
> > > > > overflowing this queue. On the other hand you do not want to use a
> > too
> > > > big
> > > > > number because this has negative effects on the system (livelock
> > etc).
> > > > > 256 seems to be a better default then the 50 but additional tweaking
> > may
> > > > > allow you to process a few packets more.
> > > > Thanks Claudio...
> > > > In the link that Stuart posted here, Henning mentions 256 times the
> > > > number of interfaces:
> > > > http://archive.openbsd.nu/?ml=openbsd-tech&a=2006-10&t=2474666
> > > Is that per physical or per logical interface  ?
> >
> > it is a rule of thumb. an approximation. for typical cases.
> >
> > > [EMAIL PROTECTED] ifconfig -a | grep ^vlan | wc -l
> > > 4094
> >
> > that is not a typical case.
> > you do not wanna set your ifqlen to 1048064 :)
> >
> > the highest qlen I have is somewhere around 2500.
> > where the high watermark is... I cannot really say. I'd be careful
> > going far higher than the above.
> 
> 
> 
> I meant if the input queue length was per physical or logical interface.
> There are places where I actually need boxes with more than 1k vlan
> subinterfaces.
> If net.inet.ip.ifq.maxlen is per logical interface I see some potentional
> issues under load.
> 

Henning's hint of 256 * number of interfaces is for physical interfaces.
The virtual interfaces will just see a subset of the packets coming from
the real ones, so they can be ignored in that rule of thumb.

Do you have systems with 1000 or more interfaces in production?
Any performance issues? Many interface-related operations are O(N).
Fixing this is another item on my network stack todo list -- as usual,
feel free to send me diffs :)

-- 
:wq Claudio



Re: Speed Problems

2007-09-27 Thread Tony Sarendal
On 9/27/07, Henning Brauer <[EMAIL PROTECTED]> wrote:
>
> * Tony Sarendal <[EMAIL PROTECTED]> [2007-09-27 10:59]:
> > I meant if the input queue length was per physical or logical interface.
>
> neither. there is one per protocol. i. e. typically two (inet and
> inet6).


Very good. My preconfigured firewalls with 4k interfaces, urpf and
stateless rules may actually work in live conditions then.

I'll see if I can hit it with a tester to see what performance I get.
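
For anyone wanting to reproduce a setup like that, the subinterfaces are
easy to mass-create; a minimal sketch, assuming em0 as the parent and tags
1-1000 (adjust the parent and range to taste):

for i in $(jot 1000 1); do
        ifconfig vlan$i create
        ifconfig vlan$i vlan $i vlandev em0 up
done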

/Tony



Re: Speed Problems

2007-09-27 Thread Henning Brauer
* Tony Sarendal <[EMAIL PROTECTED]> [2007-09-27 10:59]:
> I meant if the input queue length was per physical or logical interface.

neither. there is one per protocol. i. e. typically two (inet and 
inet6).
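
So even with 4094 vlan(4) interfaces there is only the one IPv4 input queue
to watch (plus its IPv6 counterpart); a crude way to keep an eye on it
under load:

$ while :; do sysctl net.inet.ip.ifq.len net.inet.ip.ifq.drops; sleep 1; done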

-- 
Henning Brauer, [EMAIL PROTECTED], [EMAIL PROTECTED]
BS Web Services, http://bsws.de
Full-Service ISP - Secure Hosting, Mail and DNS Services
Dedicated Servers, Rootservers, Application Hosting - Hamburg & Amsterdam



Re: Speed Problems

2007-09-27 Thread Tony Sarendal
On 9/27/07, Henning Brauer <[EMAIL PROTECTED]> wrote:
>
> * Tony Sarendal <[EMAIL PROTECTED]> [2007-09-27 10:36]:
> > On 9/26/07, Tom Bombadil <[EMAIL PROTECTED]> wrote:
> > > > net.inet.ip.ifq.maxlen defines how many packets can be queued in the
> IP
> > > > input queue before further packets are dropped. Packets comming from
> the
> > > > network card are first put into this queue and the actuall IP packet
> > > > processing is done later. Gigabit cards with interrupt mitigation
> may
> > > spit
> > > > out many packets per interrupt plus heavy use of pf can slowdown the
> > > > packet forwarding. So it is possible that a heavy burst of packets
> is
> > > > overflowing this queue. On the other hand you do not want to use a
> too
> > > big
> > > > number because this has negative effects on the system (livelock
> etc).
> > > > 256 seems to be a better default then the 50 but additional tweaking
> may
> > > > allow you to process a few packets more.
> > > Thanks Claudio...
> > > In the link that Stuart posted here, Henning mentions 256 times the
> > > number of interfaces:
> > > http://archive.openbsd.nu/?ml=openbsd-tech&a=2006-10&t=2474666
> > Is that per physical or per logical interface  ?
>
> it is a rule of thumb. an approximation. for typical cases.
>
> > [EMAIL PROTECTED] ifconfig -a | grep ^vlan | wc -l
> > 4094
>
> that is not a typical case.
> you do not wanna set your ifqlen to 1048064 :)
>
> the highest qlen I have is somewhere around 2500.
> where the high watermark is... I cannot really say. I'd be careful
> going far higher than the above.



I meant to ask whether the input queue length is per physical or per
logical interface.  There are places where I actually need boxes with more
than 1k vlan subinterfaces.  If net.inet.ip.ifq.maxlen is per logical
interface I see some potential issues under load.

/Tony



Re: Speed Problems

2007-09-27 Thread Henning Brauer
* Tony Sarendal <[EMAIL PROTECTED]> [2007-09-27 10:36]:
> On 9/26/07, Tom Bombadil <[EMAIL PROTECTED]> wrote:
> > > net.inet.ip.ifq.maxlen defines how many packets can be queued in the IP
> > > input queue before further packets are dropped. Packets comming from the
> > > network card are first put into this queue and the actuall IP packet
> > > processing is done later. Gigabit cards with interrupt mitigation may
> > spit
> > > out many packets per interrupt plus heavy use of pf can slowdown the
> > > packet forwarding. So it is possible that a heavy burst of packets is
> > > overflowing this queue. On the other hand you do not want to use a too
> > big
> > > number because this has negative effects on the system (livelock etc).
> > > 256 seems to be a better default then the 50 but additional tweaking may
> > > allow you to process a few packets more.
> > Thanks Claudio...
> > In the link that Stuart posted here, Henning mentions 256 times the
> > number of interfaces:
> > http://archive.openbsd.nu/?ml=openbsd-tech&a=2006-10&t=2474666
> Is that per physical or per logical interface  ?

it is a rule of thumb. an approximation. for typical cases.

> [EMAIL PROTECTED] ifconfig -a | grep ^vlan | wc -l
> 4094

that is not a typical case.
you do not wanna set your ifqlen to 1048064 :)

the highest qlen I have is somewhere around 2500.
where the high watermark is... I cannot really say. I'd be careful 
going far higher than the above.
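
For the record, that is where the 1048064 figure comes from, versus what
the rule of thumb gives for a box with two physical NICs (my arithmetic):

$ echo '256 * 4094' | bc    # per-logical reading: not what you want
1048064
$ echo '256 * 2' | bc       # per-physical reading
512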

-- 
Henning Brauer, [EMAIL PROTECTED], [EMAIL PROTECTED]
BS Web Services, http://bsws.de
Full-Service ISP - Secure Hosting, Mail and DNS Services
Dedicated Servers, Rootservers, Application Hosting - Hamburg & Amsterdam



Re: Speed Problems

2007-09-27 Thread Tony Sarendal
On 9/26/07, Tom Bombadil <[EMAIL PROTECTED]> wrote:
>
> > net.inet.ip.ifq.maxlen defines how many packets can be queued in the IP
> > input queue before further packets are dropped. Packets comming from the
> > network card are first put into this queue and the actuall IP packet
> > processing is done later. Gigabit cards with interrupt mitigation may
> spit
> > out many packets per interrupt plus heavy use of pf can slowdown the
> > packet forwarding. So it is possible that a heavy burst of packets is
> > overflowing this queue. On the other hand you do not want to use a too
> big
> > number because this has negative effects on the system (livelock etc).
> > 256 seems to be a better default then the 50 but additional tweaking may
> > allow you to process a few packets more.
>
> Thanks Claudio...
>
> In the link that Stuart posted here, Henning mentions 256 times the
> number of interfaces:
> http://archive.openbsd.nu/?ml=openbsd-tech&a=2006-10&t=2474666


Is that per physical or per logical interface  ?

[EMAIL PROTECTED] ifconfig -a | grep ^vlan | wc -l
4094
[EMAIL PROTECTED]

/Tony



Re: Speed Problems Part 2

2007-09-26 Thread Stuart Henderson
On 2007/09/26 13:50, rezidue wrote:
> 
> >Order a 4.2 CD and install it as soon as you get it. 4.2 removed many
> >bottlenecks in the network stack. In the meanwhile check out for the ip
> >ifq len:
> ># sysctl net.inet.ip.ifq
> >net.inet.ip.ifq.len=0
> >net.inet.ip.ifq.maxlen=256
> >net.inet.ip.ifq.drops=0
> 
> >I bet your drops are non 0 and the maxlen is to small (256 is a better
> >value for gigabit firewalls/routers).
> >--
> >:wq Claudio
> 
> I've gone through the 4.1 and 4.2 changes in hopes I would find some clear
> reason as to why I'm having these issues but I've not seen anything.

At the last hackathon, there was a lot of work done on profiling and
optimizing the path through the network stack/PF; you'll see more about
this at http://www.openbsd.org/papers/cuug2007/mgp00012.html (and the
following pages).

> What exactly is this queue?  The odd thing is that I report a negative
> value for drops and it's counting down.

The -ve is because it's a signed integer and has, on your system,
exceeded the maximum value since bootup..
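
Assuming a 32-bit signed counter, the real (unsigned) count is the printed
value plus 2^32; with the figure quoted above that works out to roughly 2.9
billion drops since boot:

$ echo '4294967296 - 1381027346' | bc
2913939950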

> net.inet.ip.ifq.drops=-1381027346
> I've put maxlen=256 and it seems to have slowed the count down.

You might like to try bumping it up until it stops increasing (uh,
decreasing. :-) And re-investigate when you get 4.2 (or make any other
changes to the system).



Re: Speed Problems Part 2

2007-09-26 Thread Tobias Weingartner
rezidue wrote:
>  kern.version=OpenBSD 4.0-stable (GENERIC.MP) #0: Thu Mar 15 07:28:19 CST

Just for the hell of it, try running GENERIC, instead of GENERIC.MP.

--Toby.



Speed Problems Part 2

2007-09-26 Thread rezidue
For some reason I can't seem to reply to the earlier responses.  Hopefully
this gets through.

On 9/26/07, Bryan Irvine < [EMAIL PROTECTED]> wrote:

>What have you looked at? are you running pf? what kind of ruleset?
>   Tried simplifying it?
>
>--Bryan

I wasn't running pf originally when I noticed this problem, but I am now,
just to block ssh from the outside.  I've disabled and re-enabled pf to see
if it affects throughput, and it doesn't, or at least not noticeably.  As
for what I have done, I have performed a number of bandwidth tests.  I've
come from the outside, traversing the gateway while downloading from an
internal host.  I've come from the outside to the gateway, downloading from
it, and I've come from the local subnet on a machine running the exact same
hardware and installation while transferring a file in each direction.
Under high load all forms of this testing are affected by poor speeds, and
even when not under high load I never see the speeds I should.  I've
checked interface stats on the switch and have found no errors.  I have run
iperf and can only seem to get 5-16 Mb/s.  I even bumped up sendspace and
recvspace to help with edge host-to-host transfers, but I've not seen any
improvement.  I'm going to be tinkering with netperf more because I'm not
sure if I ran into an issue with it on BSD; between two Linux boxes on the
inside it reports line speed.
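
For what it's worth, the socket-buffer bump mentioned above is normally
done along these lines (65536 is just an example value, not a
recommendation):

# sysctl net.inet.tcp.sendspace=65536
# sysctl net.inet.tcp.recvspace=65536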

On 9/26/07, Maxim Belooussov < [EMAIL PROTECTED]> wrote:

>Hi,
>   The first thing to do is to check the cable :)
>
>   And the second thing to do is to check the entire chain. Maybe it's
>   not you, but the other end who cannot handle the load.
>
>   Max

Cables don't show any problems, and I have the problem internally as well,
not just to external hosts.  I wish it were that simple.

On 9/26/07, Claudio Jeker <[EMAIL PROTECTED]> wrote:

>Order a 4.2 CD and install it as soon as you get it. 4.2 removed many
>bottlenecks in the network stack. In the meanwhile check out for the ip
>ifq len:
># sysctl net.inet.ip.ifq
>net.inet.ip.ifq.len=0
>net.inet.ip.ifq.maxlen=256
>net.inet.ip.ifq.drops=0

>I bet your drops are non 0 and the maxlen is to small (256 is a better
>value for gigabit firewalls/routers).
>--
>:wq Claudio

I've gone through the 4.1 and 4.2 changes in hopes I would find some clear
reason as to why I'm having these issues but I've not seen anything.  What
exactly is this queue?  The odd thing is that I report a negative value for
drops and it's counting down.

net.inet.ip.ifq.drops=-1381027346

I've put maxlen=256 and it seems to have slowed the count down.

On 9/26/07, Stuart Henderson <[EMAIL PROTECTED]> wrote:

>dmesg and vmstat -i might give clues. Also try bsd.mp if you use
>bsd (or vice-versa), and Claudio's suggestion of 4.2 is a good one.

Dmesg has not shown any issues.  I've been a bit confused with how to
interpret the output of vmstat and "systat vmstat".  I was told to look for
interrupts on "systat vmstat" but I haven't seen any being thrown while
under heavy load.  As for "vmstat -i", I'm not exactly sure what would
signify a problem but I get the following output:

Gateway1 (about 3-4 times the load of gateway2)
interrupt   total rate
irq0/clock 6455328221  399
irq0/ipi   2543041813  157
irq19/ohci0  91660
irq17/pciide0 76302290
irq0/bge0 25346022947 1570
irq1/bge1 21123330824 1308
Total 55475363200 3437

Gateway2:
interrupt   total rate
irq0/clock 6455272059  400
irq0/ipi   1819715207  112
irq19/ohci0 125740
irq17/pciide0 62321130
irq0/bge0  8118898045  503
irq1/bge1 12291117020  761
Total 28691247018 1777

Here is my sysctl -a output:

kern.ostype=OpenBSD
kern.osrelease=4.0
kern.osrevision=200611
kern.version=OpenBSD 4.0-stable (GENERIC.MP) #0: Thu Mar 15 07:28:19 CST
2007
[EMAIL PROTECTED]
:/usr/src/sys/arch/amd64/compile/GENERIC.MP

kern.maxvnodes=1310
kern.maxproc=532
kern.maxfiles=1772
kern.argmax=262144
kern.securelevel=1
kern.hostname=dyno1.nothingtoseehere.com
kern.hostid=0
kern.clockrate=tick = 1, tickadj = 40, hz = 100, profhz = 100, stathz =
100
kern.posix1version=199009
kern.ngroups=16
kern.job_control=1
kern.saved_ids=1
kern.boottime=Fri Mar 23 06:44:05 2007
kern.domainname=
kern.maxpartitions=16
kern.rawpartition=2
kern.osversion=GENERIC.MP#0
kern.somaxconn=128
kern.sominconn=80
kern.usermount=0
kern.random=160901082016 47373568 0 502891828 23135 5922320 0 0 0 0 0 0
22063035075 935474146 14935755619 48820374348 1984945954 2097660952
3949423372 384190080 606887773 1054912573 2101170714 1709697072 1531324571
891699911 1726356236 407933168 707207288 1237035834 37928905 5295362
1570

Re: Speed Problems

2007-09-26 Thread rezidue
Hopefully this makes it through; I've been trying to post comments all day
but they don't seem to make it here.

To Bryan: I wasn't running pf originally when I noticed this problem, but I
am now, just to block ssh from the outside.  I've disabled and re-enabled
pf to see if it affects throughput, and it doesn't, or at least not
noticeably.  As for what I have done, I have performed a number of
bandwidth tests.  I've come from the outside, traversing the gateway while
downloading from an internal host.  I've come from the outside to the
gateway, downloading from it, and I've come from the local subnet on a
machine running the exact same hardware and installation while transferring
a file in each direction.  Under high load all forms of this testing are
affected by poor speeds, and even when not under high load I never see the
speeds I should.  I've checked interface stats on the switch and have found
no errors.  I have run iperf and can only seem to get 5-16 Mb/s.  I even
bumped up sendspace and recvspace to help with edge host-to-host transfers,
but I've not seen any improvement.  I'm going to be tinkering with netperf
more because I'm not sure if I ran into an issue with it on BSD; between
two Linux boxes on the inside it reports line speed.

To Max: Cables don't show any problems, and I have the problem internally
as well, not just to external hosts.  I wish it were that simple.

To Claudio, I've gone through the 4.1 and 4.2 changes in hopes I would find
some clear reason as to why I'm having these issues but I've not seen
anything. The odd thing is that I report a negative value for drops and it's
counting down.

net.inet.ip.ifq.drops=-1381027346

I've put maxlen=256 and it seems to have slowed the count down.


To Stuart, Dmesg has not shown any issues.  I've been a bit confused with
how to interpret the output of vmstat and "systat vmstat".  I was told to
look for interrupts on "systat vmstat" but I haven't seen any being thrown
while under heavy load.  I also don't think I understand how interrupts
work.  As for "vmstat -i", I'm not exactly sure what would signify a problem
but I get the following output:

Gateway1 (about 3-4 times the load of gateway2)
interrupt   total rate
irq0/clock 6455328221  399
irq0/ipi   2543041813  157
irq19/ohci0  91660
irq17/pciide0 76302290
irq0/bge0 25346022947 1570
irq1/bge1 21123330824 1308
Total 55475363200 3437

Gateway2:
interrupt   total rate
irq0/clock 6455272059  400
irq0/ipi   1819715207  112
irq19/ohci0 125740
irq17/pciide0 62321130
irq0/bge0  8118898045  503
irq1/bge1 12291117020  761
Total 28691247018 1777



On 9/26/07, Tom Bombadil <[EMAIL PROTECTED]> wrote:
>
> > net.inet.ip.ifq.maxlen defines how many packets can be queued in the IP
> > input queue before further packets are dropped. Packets comming from the
> > network card are first put into this queue and the actuall IP packet
> > processing is done later. Gigabit cards with interrupt mitigation may
> spit
> > out many packets per interrupt plus heavy use of pf can slowdown the
> > packet forwarding. So it is possible that a heavy burst of packets is
> > overflowing this queue. On the other hand you do not want to use a too
> big
> > number because this has negative effects on the system (livelock etc).
> > 256 seems to be a better default then the 50 but additional tweaking may
> > allow you to process a few packets more.
>
> Thanks Claudio...
>
> In the link that Stuart posted here, Henning mentions 256 times the
> number of interfaces:
> http://archive.openbsd.nu/?ml=openbsd-tech&a=2006-10&t=2474666
>
> I'll try both and see.
>
> Thanks you and Stuart for the hints.



Re: Speed Problems

2007-09-26 Thread Tom Bombadil
> net.inet.ip.ifq.maxlen defines how many packets can be queued in the IP
> input queue before further packets are dropped. Packets comming from the
> network card are first put into this queue and the actuall IP packet
> processing is done later. Gigabit cards with interrupt mitigation may spit
> out many packets per interrupt plus heavy use of pf can slowdown the
> packet forwarding. So it is possible that a heavy burst of packets is
> overflowing this queue. On the other hand you do not want to use a too big
> number because this has negative effects on the system (livelock etc).
> 256 seems to be a better default then the 50 but additional tweaking may
> allow you to process a few packets more.

Thanks Claudio...

In the link that Stuart posted here, Henning mentions 256 times the
number of interfaces:
http://archive.openbsd.nu/?ml=openbsd-tech&a=2006-10&t=2474666

I'll try both and see.

Thank you and Stuart for the hints.



Re: Speed Problems

2007-09-26 Thread Stuart Henderson
On 2007/09/26 10:48, Tom Bombadil wrote:
> What does 'net.inet.ip.ifq.maxlen=256' do for us?

try http://archive.openbsd.nu/?ml=openbsd-tech&a=2006-10&t=2474666



Re: Speed Problems

2007-09-26 Thread Claudio Jeker
On Wed, Sep 26, 2007 at 10:48:02AM -0700, Tom Bombadil wrote:
> Hi Claudio...
> 
> What does 'net.inet.ip.ifq.maxlen=256' do for us?
> 
> Tried a few 'man', and a few google searches and I wasn't very
> successful. Found tons of other posts telling ppl to bump up that
> sysctl, but never found what it does exactly.
> 

net.inet.ip.ifq.maxlen defines how many packets can be queued in the IP
input queue before further packets are dropped.  Packets coming from the
network card are first put into this queue, and the actual IP packet
processing is done later.  Gigabit cards with interrupt mitigation may spit
out many packets per interrupt, and heavy use of pf can slow down the
packet forwarding, so it is possible that a heavy burst of packets
overflows this queue.  On the other hand you do not want to use too big a
number, because that has negative effects on the system (livelock etc).
256 seems to be a better default than the current 50, but additional
tweaking may allow you to process a few more packets.
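
A typical way to apply and keep such a change, then confirm whether it
actually helped (1024 is only an example value, to be tuned per box):

# sysctl net.inet.ip.ifq.maxlen=1024
# echo net.inet.ip.ifq.maxlen=1024 >> /etc/sysctl.conf
# sysctl net.inet.ip.ifq.drops

Re-check drops after a busy period; if the counter still climbs, the value
is still too low or the bottleneck is elsewhere.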

-- 
:wq Claudio



Re: Speed Problems

2007-09-26 Thread Tom Bombadil
Hi Claudio...

What does 'net.inet.ip.ifq.maxlen=256' do for us?

I tried a few man pages and a few Google searches without much success.  I
found tons of other posts telling people to bump up that sysctl, but never
found what it does exactly.

Cheers,
g.



Re: Speed Problems

2007-09-26 Thread Stuart Henderson
On 2007/09/25 23:57, rezidue wrote:
> I've been having problems with throughput on a box I'm using as an edge
> gateway.

dmesg and vmstat -i might give clues. Also try bsd.mp if you use
bsd (or vice-versa), and Claudio's suggestion of 4.2 is a good one.



Re: Speed Problems

2007-09-26 Thread Claudio Jeker
On Tue, Sep 25, 2007 at 11:57:37PM -0500, rezidue wrote:
> I've been having problems with throughput on a box I'm using as an edge
> gateway.  I can't seem to get it to push out more than 150Mb/sec at about
> 20k pps.  It's a Tyan Thunder K8SR (S2881) board that has two gig broadcom
> interfaces on a shared pci-x bus.  It's on the bcm5704c chipset and I'm
> running OpenBSD 4.0.  The machine has two dual core amd opteron chips and
> two gigs of ram.  Barley any resources are being used when we are peaking
> during the day.  When we hit around 140+Mb/sec I start seeing packet loss
> and when I copy a file from this machine via scp to another host over the
> gig lan I can see that it directly affects throughput.  I've spent all day
> trying to find the problem but I've had no luck.  Any ideas?  Any info I can
> provide?
> 

Order a 4.2 CD and install it as soon as you get it. 4.2 removed many
bottlenecks in the network stack. In the meantime, check the IP ifq
length:
# sysctl net.inet.ip.ifq
net.inet.ip.ifq.len=0
net.inet.ip.ifq.maxlen=256
net.inet.ip.ifq.drops=0

I bet your drops are non-zero and the maxlen is too small (256 is a better
value for gigabit firewalls/routers).
-- 
:wq Claudio



Re: Speed Problems

2007-09-25 Thread Bryan Irvine
What have you looked at? are you running pf? what kind of ruleset?
Tried simplifying it?

--Bryan

On 9/25/07, rezidue <[EMAIL PROTECTED]> wrote:
> I've been having problems with throughput on a box I'm using as an edge
> gateway.  I can't seem to get it to push out more than 150Mb/sec at about
> 20k pps.  It's a Tyan Thunder K8SR (S2881) board that has two gig broadcom
> interfaces on a shared pci-x bus.  It's on the bcm5704c chipset and I'm
> running OpenBSD 4.0.  The machine has two dual core amd opteron chips and
> two gigs of ram.  Barley any resources are being used when we are peaking
> during the day.  When we hit around 140+Mb/sec I start seeing packet loss
> and when I copy a file from this machine via scp to another host over the
> gig lan I can see that it directly affects throughput.  I've spent all day
> trying to find the problem but I've had no luck.  Any ideas?  Any info I can
> provide?



Speed Problems

2007-09-25 Thread rezidue
I've been having problems with throughput on a box I'm using as an edge
gateway.  I can't seem to get it to push out more than 150 Mb/sec at about
20k pps.  It's a Tyan Thunder K8SR (S2881) board that has two gigabit
Broadcom interfaces on a shared PCI-X bus.  They use the BCM5704C chipset
and I'm running OpenBSD 4.0.  The machine has two dual-core AMD Opteron
chips and two gigs of RAM.  Barely any resources are being used when we are
peaking during the day.  When we hit around 140+ Mb/sec I start seeing
packet loss, and when I copy a file from this machine via scp to another
host over the gig LAN I can see that it directly affects throughput.  I've
spent all day trying to find the problem but I've had no luck.  Any ideas?
Any info I can provide?



Re: vr(4) speed problems

2007-02-21 Thread Paul Irofti
On Wed, Feb 21, 2007 at 09:39:56AM +0200, Paul Irofti wrote:
> I'll send a new dmesg and notify if anything has changed when I get
> back.

Got back; it was the graphics card, not the mobo, but I did ask him to test
my NIC too and he said it ran just fine (of course I asked for a specific
duplex test, but he just tested a network install of Windows, so this is
not conclusive either).

I was thinking I could do some extra tests with snort... any suggestions?



Re: vr(4) speed problems

2007-02-20 Thread Paul Irofti
On Tue, Feb 20, 2007 at 09:59:49PM -0500, Nick Holland wrote:
> First of all, regarding your dmesg: that's a feature, not a bug.  The
> system attempts to store multiple dmesgs in RAM and keep them after a
> reboot.  This can be very handy at times, but disconcerting to those
> not expecting it.  The last one is the one you want.

Thanks for clearing that up for me.

> Yours is a different revision, but I suspect your problem isn't with
> the NIC.  Most network performance problems boil down to:
>   * Duplex issues
I'd put my money on this, but I don't know how to test it in order to be
certain.
>   * b/B confusion (bits vs. Bytes per second)
Not the case.
>   * bad cables
Tested with multiple cables, to no avail.
>   * bad switches (ok, maybe that's just been my luck)
I have a TRENDnet TE100-S8P switch.  It has worked fine for me for years
now, and the only machine that has this sort of problem is the new one with
the vr(4) in it.  The others run at proper speeds like they used to.
> 
> If you really believe it isn't any of the above, tell us how your
> machine is hooked up, what you tested, what you expected, what you
> saw, etc.  Simply saying "it's slow" doesn't help us track it down
> much, you understand...
> 
I know just saying it's slow doesn't help.  I don't know exactly how to
test this issue, so let me tell you my config and what I've tested so far.

This is a computer inside my home LAN.  All the computers in my house are
connected, via the switch mentioned above, to an OpenBSD server.  The NIC
is configured via DHCP.  It's a common setup.  I run OpenBSD exclusively
on my machines so I can't test whether the NIC works well on another OS
(although I know/saw it ran just fine on Windows 2003 Server, the OS that
the previous owner used with this computer).

My tests have been pretty simple, actually:
  - looking at ifconfig for proper duplex
  - investigating the server's pf.conf for quirks
  - ``ping -f''-ing; all went fine, 0.00% loss
  - tcpdump-ing; nothing strange there either
  - changing the cable and its port on the switch

If you need the output of any tests, just ask.  Same if additional tests
are required.
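
Since duplex is the main suspect above, it is worth confirming what was
negotiated before swapping hardware; a sketch for this vr0 (pinning is only
safe if both ends are pinned, so with an unmanaged switch staying on
autoselect is usually the right answer):

ifconfig vr0 | grep media                           # what was negotiated
ifconfig vr0 media 100baseTX mediaopt full-duplex   # pin it (both ends!)
ifconfig vr0 media autoselect                       # revert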

I'm taking this mobo in for warranty today (it's an integrated NIC) due to
non-NIC-related issues (the AGP slot is broken).  They'll probably supply
me with another mobo.  Maybe that will fix this too.  Or not.

I'll send a new dmesg and notify if anything has changed when I get
back.

Paul.



Re: vr(4) speed problems

2007-02-20 Thread Nick Holland
Paul Irofti wrote:
> On Tue, Feb 20, 2007 at 11:00:07PM +0200, Paul Irofti wrote:
>> Strange, the dmesg I submitted (and the one dmesg shows:) both point to
>> my configuration before the snapshot update. But the login informs me
>> that I'm running ``OpenBSD 4.1-beta (GENERIC) #847'' and uname says the
>> same:
>> 
>> $ uname -rsv
>> OpenBSD 4.1 GENERIC#847
> 
> After a reboot /var/run/dmesg.boot is a mess...well at least it shows
> the right version (a couple of times), an incorrect one and some garbage
> at the beginning. Attached.
...

Looks like I need to do some updates to the dmesg portions of the FAQ...

First of all, regarding your dmesg: that's a feature, not a bug.  The
system attempts to store multiple dmesgs in RAM and keep them after a
reboot.  This can be very handy at times, but disconcerting to those
not expecting it.  The last one is the one you want.

As for your performance problem, I don't currently have a vr(4) in
use, but I've got one in the machine I'm currently working on, and it
worked pretty decently last time I used it (switched to a different,
probably lower-grade NIC for non-performance reasons).

Mine:
> vr0 at pci0 dev 18 function 0 "VIA RhineII-2" rev 0x74: irq 3, address 
> 00:0c:6e:d5:dd:68
> rlphy1 at vr0 phy 1: RTL8201L 10/100 PHY, rev. 1

Yours:
> vr0 at pci0 dev 18 function 0 "VIA RhineII-2" rev 0x7c: irq 11, address 
> 00:e0:12:34:56:78
> rlphy0 at vr0 phy 1: RTL8201L 10/100 PHY, rev. 1

Yours is a different revision, but I suspect your problem isn't with
the NIC.  Most network performance problems boil down to:
  * Duplex issues
  * b/B confusion (bits vs. Bytes per second)
  * bad cables
  * bad switches (ok, maybe that's just been my luck)

If you really believe it isn't any of the above, tell us how your
machine is hooked up, what you tested, what you expected, what you
saw, etc.  Simply saying "it's slow" doesn't help us track it down
much, you understand...

Nick.



Re: vr(4) speed problems

2007-02-20 Thread Paul Irofti
On Tue, Feb 20, 2007 at 11:00:07PM +0200, Paul Irofti wrote:
> Strange, the dmesg I submitted (and the one dmesg shows:) both point to
> my configuration before the snapshot update. But the login informs me
> that I'm running ``OpenBSD 4.1-beta (GENERIC) #847'' and uname says the
> same:
> 
> $ uname -rsv
> OpenBSD 4.1 GENERIC#847

After a reboot /var/run/dmesg.boot is a mess...well at least it shows
the right version (a couple of times), an incorrect one and some garbage
at the beginning. Attached.
unction 3 "VIA PT890 Host" rev 0x00
pchb4 at pci0 dev 0 function 4 "VIA CN700 Host" rev 0x00
pchb5 at pci0 dev 0 function 7 "VIA CN700 Host" rev 0x00
ppb0 at pci0 dev 1 function 0 "VIA VT8377 AGP" rev 0x00
pci1 at ppb0 bus 1
vga1 at pci1 dev 0 function 0 "VIA S3 Unichrome PRO IGP" rev 0x01
wsdisplay0 at vga1 mux 1: console (80x25, vt100 emulation)
wsdisplay0: screen 1-5 added (80x25, vt100 emulation)
pciide0 at pci0 dev 15 function 0 "VIA VT8251 SATA" rev 0x00: DMA
pciide0: using irq 10 for native-PCI interrupt
wd0 at pciide0 channel 0 drive 0: 
wd0: 16-sector PIO, LBA48, 76319MB, 156301488 sectors
wd0(pciide0:0:0): using PIO mode 4, Ultra-DMA mode 5
pciide1 at pci0 dev 15 function 1 "VIA VT82C571 IDE" rev 0x07: DMA, channel 0 
configured to compatibility, channel 1 configured to compatibility
atapiscsi0 at pciide1 channel 0 drive 0
scsibus0 at atapiscsi0: 2 targets
cd0 at scsibus0 targ 0 lun 0:  SCSI0 5/cdrom 
removable
cd0(pciide1:0:0): using PIO mode 4, DMA mode 2
pciide1: channel 1 disabled (no drives)
uhci0 at pci0 dev 16 function 0 "VIA VT83C572 USB" rev 0x90: irq 11
usb0 at uhci0: USB revision 1.0
uhub0 at usb0
uhub0: VIA UHCI root hub, rev 1.00/1.00, addr 1
uhub0: 2 ports with 2 removable, self powered
uhci1 at pci0 dev 16 function 1 "VIA VT83C572 USB" rev 0x90: irq 5
usb1 at uhci1: USB revision 1.0
uhub1 at usb1
uhub1: VIA UHCI root hub, rev 1.00/1.00, addr 1
uhub1: 2 ports with 2 removable, self powered
uhci2 at pci0 dev 16 function 2 "VIA VT83C572 USB" rev 0x90: irq 10
usb2 at uhci2: USB revision 1.0
uhub2 at usb2
uhub2: VIA UHCI root hub, rev 1.00/1.00, addr 1
uhub2: 2 ports with 2 removable, self powered
uhci3 at pci0 dev 16 function 3 "VIA VT83C572 USB" rev 0x90: irq 3
usb3 at uhci3: USB revision 1.0
uhub3 at usb3
uhub3: VIA UHCI root hub, rev 1.00/1.00, addr 1
uhub3: 2 ports with 2 removable, self powered
ehci0 at pci0 dev 16 function 4 "VIA VT6202 USB" rev 0x90: irq 5
usb4 at ehci0: USB revision 2.0
uhub4 at usb4
uhub4: VIA EHCI root hub, rev 2.00/1.00, addr 1
uhub4: 8 ports with 8 removable, self powered
pcib0 at pci0 dev 17 function 0 "VIA VT8251 ISA" rev 0x00
pchb6 at pci0 dev 17 function 7 "VIA VT8251 VLINK" rev 0x00
vr0 at pci0 dev 18 function 0 "VIA RhineII-2" rev 0x7c: irq 11, address 
00:e0:12:34:56:78
rlphy0 at vr0 phy 1: RTL8201L 10/100 PHY, rev. 1
ppb1 at pci0 dev 19 function 0 "VIA VT8251 PCIE" rev 0x00
pci2 at ppb1 bus 2
ppb2 at pci2 dev 0 function 0 "VIA VT8251 PCIE" rev 0x00
pci3 at ppb2 bus 3
ppb3 at pci2 dev 0 function 1 "VIA VT8251 PCIE" rev 0x00
pci4 at ppb3 bus 4
ppb4 at pci0 dev 19 function 1 "VIA VT8251 PCI" rev 0x00
pci5 at ppb4 bus 5
cmpci0 at pci5 dev 9 function 0 "C-Media Electronics CMI8738/C3DX Audio" rev 
0x10: irq 5
audio0 at cmpci0
opl at cmpci0 not configured
mpu at cmpci0 not configured
isa0 at pcib0
isadma0 at isa0
com0 at isa0 port 0x3f8/8 irq 4: ns16550a, 16 byte fifo
pckbc0 at isa0 port 0x60/5
pckbd0 at pckbc0 (kbd slot)
pckbc0: using irq 1 for kbd slot
wskbd0 at pckbd0: console keyboard, using wsdisplay0
pmsi0 at pckbc0 (aux slot)
pckbc0: using irq 12 for aux slot
wsmouse0 at pmsi0 mux 0
pcppi0 at isa0 port 0x61
midi0 at pcppi0: 
spkr0 at pcppi0
lpt0 at isa0 port 0x378/4 irq 7
it2 at isa0 port 0xd00/8: IT87
fdc0 at isa0 port 0x3f0/6 irq 6 drq 2
fd0 at fdc0 drive 0: 1.44MB 80 cyl, 2 head, 18 sec
dkcsum: wd0 matches BIOS drive 0x80
root on wd0a
rootdev=0x0 rrootdev=0x300 rawdev=0x302
/dev/wd0k: file system not clean; please fsck(8)
syncing disks... 
OpenBSD 4.1-beta (RAMDISK_CD) #1028: Mon Feb 19 18:15:32 MST 2007
[EMAIL PROTECTED]:/usr/src/sys/arch/amd64/compile/RAMDISK_CD
real mem = 1005907968 (982332K)
avail mem = 850620416 (830684K)
using 22937 buffers containing 100798464 bytes (98436K) of memory
mainbus0 (root)
acpi at mainbus0 not configured
cpu0 at mainbus0: (uniprocessor)
cpu0: Intel(R) Pentium(R) 4 CPU 2.80GHz, 2793.48 MHz
cpu0: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,SBF,SSE3,MWAIT,DS-CPL,TM2,CNXT-ID,CX16,xTPR,NXE,LONG
cpu0: 1MB 64b/line 8-way L2 cache
pci0 at mainbus0 bus 0: configuration mode 1
pchb0 at pci0 dev 0 function 0 "VIA CN700 Host" rev 0x00
pchb1 at pci0 dev 0 function 1 "VIA CN700 Host" rev 0x00
pchb2 at pci0 dev 0 function 2 "VIA CN700 Host" rev 0x00
pchb3 at pci0 dev 0 function 3 "VIA PT890 Host" rev 0x00
pchb4 at pci0 dev 0 function 4 "VIA CN700 Host" rev 0x00
pchb5 at pci0 dev 0 function 7 "VIA CN700 Host" rev 0x00
ppb0 at pci0 dev 1 function 0 "VIA VT83

vr(4) speed problems

2007-02-20 Thread Paul Irofti
I've received a new mobo with a vr(4) NIC. Ever since I installed it I'm
getting very slow transfer speeds (i.e. from 7-8 mb/s to 0.3-0.4 mb/s).
I've googled and stfa and found some complaints on the freebsd mailing
lists but wasn't able to find a solution. Is this a known bug? Should I
just buy another NIC and get it over with?

$ ifconfig vr0
vr0: flags=8843 mtu 1500
  lladdr 00:e0:12:34:56:78
  groups: egress
  media: Ethernet autoselect (100baseTX full-duplex)
  status: active
  inet6 fe80::2e0:12ff:fe34:5678%vr0 prefixlen 64 scopeid 0x1
  inet 192.168.1.64 netmask 0xff00 broadcast 192.168.1.255

As ifconfig shows this should be running just fine at fast ethernet with
full-duplex...

Attached is my dmesg (just updated to the latest snapshot).
OpenBSD 4.0-current (GENERIC) #822: Sun Feb  4 04:09:47 MST 2007
[EMAIL PROTECTED]:/usr/src/sys/arch/amd64/compile/GENERIC
real mem = 1005907968 (982332K)
avail mem = 849092608 (829192K)
using 22937 buffers containing 100798464 bytes (98436K) of memory
mainbus0 (root)
bios0 at mainbus0: SMBIOS rev. 2.3 @ 0xf0730 (59 entries)
bios0: ASUSTeK Computer INC. P5VDC-MX
acpi at mainbus0 not configured
cpu0 at mainbus0: (uniprocessor)
cpu0: Intel(R) Pentium(R) 4 CPU 2.80GHz, 2793.49 MHz
cpu0: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,SBF,SSE3,NXE,LONG
cpu0: 1MB 64b/line 8-way L2 cache
pci0 at mainbus0 bus 0: configuration mode 1
pchb0 at pci0 dev 0 function 0 "VIA CN700 Host" rev 0x00
pchb1 at pci0 dev 0 function 1 "VIA CN700 Host" rev 0x00
pchb2 at pci0 dev 0 function 2 "VIA CN700 Host" rev 0x00
pchb3 at pci0 dev 0 function 3 "VIA PT890 Host" rev 0x00
pchb4 at pci0 dev 0 function 4 "VIA CN700 Host" rev 0x00
pchb5 at pci0 dev 0 function 7 "VIA CN700 Host" rev 0x00
ppb0 at pci0 dev 1 function 0 "VIA VT8377 AGP" rev 0x00
pci1 at ppb0 bus 1
vga1 at pci1 dev 0 function 0 "VIA S3 Unichrome PRO IGP" rev 0x01
wsdisplay0 at vga1 mux 1: console (80x25, vt100 emulation)
wsdisplay0: screen 1-5 added (80x25, vt100 emulation)
pciide0 at pci0 dev 15 function 0 "VIA VT8251 SATA" rev 0x00: DMA
pciide0: using irq 10 for native-PCI interrupt
wd0 at pciide0 channel 0 drive 0: 
wd0: 16-sector PIO, LBA48, 76319MB, 156301488 sectors
wd0(pciide0:0:0): using PIO mode 4, Ultra-DMA mode 5
pciide1 at pci0 dev 15 function 1 "VIA VT82C571 IDE" rev 0x07: DMA, channel 0 
configured to compatibility, channel 1 configured to compatibility
atapiscsi0 at pciide1 channel 0 drive 0
scsibus0 at atapiscsi0: 2 targets
cd0 at scsibus0 targ 0 lun 0:  SCSI0 5/cdrom 
removable
cd0(pciide1:0:0): using PIO mode 4, DMA mode 2
pciide1: channel 1 disabled (no drives)
uhci0 at pci0 dev 16 function 0 "VIA VT83C572 USB" rev 0x90: irq 11
usb0 at uhci0: USB revision 1.0
uhub0 at usb0
uhub0: VIA UHCI root hub, rev 1.00/1.00, addr 1
uhub0: 2 ports with 2 removable, self powered
uhci1 at pci0 dev 16 function 1 "VIA VT83C572 USB" rev 0x90: irq 5
usb1 at uhci1: USB revision 1.0
uhub1 at usb1
uhub1: VIA UHCI root hub, rev 1.00/1.00, addr 1
uhub1: 2 ports with 2 removable, self powered
uhci2 at pci0 dev 16 function 2 "VIA VT83C572 USB" rev 0x90: irq 10
usb2 at uhci2: USB revision 1.0
uhub2 at usb2
uhub2: VIA UHCI root hub, rev 1.00/1.00, addr 1
uhub2: 2 ports with 2 removable, self powered
uhci3 at pci0 dev 16 function 3 "VIA VT83C572 USB" rev 0x90: irq 3
usb3 at uhci3: USB revision 1.0
uhub3 at usb3
uhub3: VIA UHCI root hub, rev 1.00/1.00, addr 1
uhub3: 2 ports with 2 removable, self powered
ehci0 at pci0 dev 16 function 4 "VIA VT6202 USB" rev 0x90: irq 5
usb4 at ehci0: USB revision 2.0
uhub4 at usb4
uhub4: VIA EHCI root hub, rev 2.00/1.00, addr 1
uhub4: 8 ports with 8 removable, self powered
pcib0 at pci0 dev 17 function 0 "VIA VT8251 ISA" rev 0x00
pchb6 at pci0 dev 17 function 7 "VIA VT8251 VLINK" rev 0x00
vr0 at pci0 dev 18 function 0 "VIA RhineII-2" rev 0x7c: irq 11, address 
00:e0:12:34:56:78
rlphy0 at vr0 phy 1: RTL8201L 10/100 PHY, rev. 1
ppb1 at pci0 dev 19 function 0 "VIA VT8251 PCIE" rev 0x00
pci2 at ppb1 bus 2
ppb2 at pci2 dev 0 function 0 "VIA VT8251 PCIE" rev 0x00
pci3 at ppb2 bus 3
ppb3 at pci2 dev 0 function 1 "VIA VT8251 PCIE" rev 0x00
pci4 at ppb3 bus 4
ppb4 at pci0 dev 19 function 1 "VIA VT8251 PCI" rev 0x00
pci5 at ppb4 bus 5
cmpci0 at pci5 dev 9 function 0 "C-Media Electronics CMI8738/C3DX Audio" rev 
0x10: irq 5
audio0 at cmpci0
opl at cmpci0 not configured
mpu at cmpci0 not configured
isa0 at pcib0
isadma0 at isa0
com0 at isa0 port 0x3f8/8 irq 4: ns16550a, 16 byte fifo
pckbc0 at isa0 port 0x60/5
pckbd0 at pckbc0 (kbd slot)
pckbc0: using irq 1 for kbd slot
wskbd0 at pckbd0: console keyboard, using wsdisplay0
pmsi0 at pckbc0 (aux slot)
pckbc0: using irq 12 for aux slot
wsmouse0 at pmsi0 mux 0
pcppi0 at isa0 port 0x61
midi0 at pcppi0: 
spkr0 at pcppi0
lpt0 at isa0 port 0x378/4 irq 7
it2 at isa0 port 0xd00/8: IT87
fdc0 at isa0 port 0x3f0/6 irq 6 drq 2
fd0 at fdc0 drive 0: 1.44MB 80 cyl, 2 head, 18 sec
dkcsum: wd0 matches BIOS drive 0

Re: vr(4) speed problems

2007-02-20 Thread Paul Irofti
Strange, the dmesg I submitted (and the one dmesg shows:) both point to
my configuration before the snapshot update. But the login informs me
that I'm running ``OpenBSD 4.1-beta (GENERIC) #847'' and uname says the
same:

$ uname -rsv
OpenBSD 4.1 GENERIC#847