ral0: "ping: sendto: No buffer space available"

2008-10-05 Thread Jurjen Oskam
Hi there,

My ral0 in hostap mode works great, and has been for months. (Except for
the Nintendo Wii issue I posted about earlier, and there I suspect the Wii.)

However, just now I ran into a problem. The ral is in a server that also
has an Internet connection; the server does NAT for the wireless
clients. I was uploading some photos from a wireless client to some
website, and wireless traffic just stopped. The association was still
there, but nothing from the wireless client went through, not even
traffic to the server itself. I then logged into the machine the ral0
interface is in (using its console). Nothing in dmesg, but when I tried
to ping the wireless client, I got this:

ping: sendto: No buffer space available
ping: wrote 192.168.2.18 64 chars, ret=-1

Other traffic on the same server (using other interfaces) worked fine.
What can I do to investigate this more thoroughly, should it happen
again? It seems to be a very rare event, since it took months to occur
even once. I didn't think to do anything more than run some netstat
commands and check the number of states using pfctl.

netstat -m showed:

718 mbufs in use:
705 mbufs allocated to data
4 mbufs allocated to packet headers
9 mbufs allocated to socket names and addresses
87/128/6144 mbuf clusters in use (current/peak/max)
444 Kbytes allocated to network (79% in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines
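As a minimal sketch (not from the thread), the handful of netstat -m counters worth capturing at the moment of failure can be pulled out with awk. The `sample` variable below is a pasted copy of the output above; on a live box, pipe the real `netstat -m` in instead:

```shell
#!/bin/sh
# Sample "netstat -m" output pasted from above; on a live box replace
# the echo with the real command.
sample='718 mbufs in use:
87/128/6144 mbuf clusters in use (current/peak/max)
444 Kbytes allocated to network (79% in use)
0 requests for memory denied'

echo "$sample" | awk '
/mbuf clusters in use/ {
        # $1 is "current/peak/max", e.g. 87/128/6144
        split($1, c, "/")
        printf "clusters: %d in use, peak %d, max %d\n", c[1], c[2], c[3]
}
/requests for memory denied/ {
        printf "denied:   %d\n", $1
}'
```

With the sample above this reports 87 clusters in use against a max of 6144, i.e. nowhere near exhaustion; that is exactly why the peak and denied counters are the interesting things to snapshot while the hang is actually happening.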

netstat -an showed:

Active Internet connections (including servers)
Proto Recv-Q Send-Q  Local Address          Foreign Address        (state)
ip         0      0  *.*                    *.*                    1
Active Internet connections (including servers)
Proto Recv-Q Send-Q  Local Address          Foreign Address        (state)
tcp        0      0  192.168.1.1.6161       192.168.1.1.53         TIME_WAIT
tcp        0      0  127.0.0.1.735          127.0.0.1.111          TIME_WAIT
tcp        0      0  192.168.1.1.48623      192.168.1.1.53         TIME_WAIT
tcp        0      0  127.0.0.1.983          127.0.0.1.111          TIME_WAIT
tcp        0      0  192.168.1.1.53         192.168.1.1.5394       TIME_WAIT
tcp        0      0  192.168.1.1.22         192.168.2.18.59972     ESTABLISHED
tcp        0      0  *.25                   *.*                    LISTEN
tcp        0      0  127.0.0.1.3493         127.0.0.1.12189        ESTABLISHED
tcp        0      0  127.0.0.1.12189        127.0.0.1.3493         ESTABLISHED
tcp        0      0  *.3493                 *.*                    LISTEN
tcp        0      0  127.0.0.1.8026         *.*                    LISTEN
tcp        0      0  *.8025                 *.*                    LISTEN
tcp        0      0  *.22                   *.*                    LISTEN
tcp        0      0  *.37                   *.*                    LISTEN
tcp        0      0  *.13                   *.*                    LISTEN
tcp        0      0  *.113                  *.*                    LISTEN
tcp        0      0  *.21                   *.*                    LISTEN
tcp        0      0  192.168.1.1.53         *.*                    LISTEN
tcp        0      0  82.95.239.113.53       *.*                    LISTEN
tcp        0      0  82.95.239.113.80       *.*                    LISTEN
tcp        0      0  *.653                  *.*                    LISTEN
tcp        0      0  *.2049                 *.*                    LISTEN
tcp        0      0  *.800                  *.*                    LISTEN
tcp        0      0  127.0.0.1.111          *.*                    LISTEN
tcp        0      0  *.111                  *.*                    LISTEN
Active Internet connections (including servers)
Proto Recv-Q Send-Q  Local Address          Foreign Address        (state)
udp        0      0  82.95.239.113.46196    88.191.14.223.123
udp        0      0  82.95.239.113.39188    78.47.5.66.123
udp        0      0  82.95.239.113.24467    141.40.103.101.123
udp        0      0  82.95.239.113.21396    212.13.207.101.123
udp        0      0  82.95.239.113.20148    194.88.2.88.123
udp        0      0  *.5060                 *.*
udp        0      0  127.0.0.1.512          *.*
udp        0      0  *.69                   *.*
udp        0      0  82.95.239.113.53       *.*
udp        0      0  192.168.1.1.53         *.*
udp        0      0  127.0.0.1.775          127.0.0.1.2049
udp        0      0  192.168.1.1.123        *.*
udp        0      0  192.168.2.1.123        *.*
udp        0      0  82.95.239.113.123      *.*
udp        0      0  127.0.0.1.123          *.*
udp        0      0  *.734                  *.*
udp        0      0  *.2049                 *.*
udp        0      0  *.756                  *.*
udp        0      0  127.0.0.1.111          *.*
udp        0      0  *.111                  *.*
udp        0      0  *.514                  *.*
Active Internet connections (including servers)
Proto Recv-Q Send-Q  Local Address  Foreign Address(

Re: "ping: sendto: No buffer space available" when using bittorrent or another p2p

2008-07-22 Thread Joe Warren-Meeks
On Mon, Jul 21, 2008 at 10:53:23AM -0600, Daniel Melameth wrote:
> On Mon, Jul 21, 2008 at 10:39 AM, Joe Warren-Meeks <[EMAIL PROTECTED]> wrote:
> > The default limit for number of states is quite low. Try adding the
> > following to pf.conf and running pfctl -vf /etc/pf.conf
> > "
> > set limit { states 5000, frags 5000, src-nodes 5000 }
> > "
> >
> > You can up the values if they are too low.
> >
> > Use "pfctl -s info" to view how many entries there are in the state
> > table beforehand and compare it to afterwards.
> 
> FWIW, the default state and src-nodes limit is twice what you have above.

Oops, you are right :-) That was meant to be 5, not 5000. I have mine
set to 500,000, as we have loads of RAM and a load of busy sites.

 -- joe.

It'll cost you many a shilling.



Re: "ping: sendto: No buffer space available" when using bittorrent or another p2p

2008-07-21 Thread Daniel Melameth
On Mon, Jul 21, 2008 at 10:39 AM, Joe Warren-Meeks <[EMAIL PROTECTED]> wrote:
> The default limit for number of states is quite low. Try adding the
> following to pf.conf and running pfctl -vf /etc/pf.conf
> "
> set limit { states 5000, frags 5000, src-nodes 5000 }
> "
>
> You can up the values if they are too low.
>
> Use "pfctl -s info" to view how many entries there are in the state
> table beforehand and compare it to afterwards.

FWIW, the default state and src-nodes limit is twice what you have above.



Re: "ping: sendto: No buffer space available" when using bittorrent or another p2p

2008-07-21 Thread Joe Warren-Meeks
On Mon, Jul 21, 2008 at 03:55:41PM +0200, Amaury De Ganseman wrote:

Hey there,
 
> I run OpenBSD 4.3 on my gateway. But when a machine behind the
> NAT/gateway uses bittorrent (or gtk-gnutella) I lose packets.
> For example when I try to ping www.google.com I can see "ping:
> sendto: No buffer space available" (on my gateway)
> It's the same if I use gtk-gnutella. I think it's related to the huge
> number of states (about 1500 for bittorrent)

The default limit for number of states is quite low. Try adding the
following to pf.conf and running pfctl -vf /etc/pf.conf
"
set limit { states 5000, frags 5000, src-nodes 5000 } 
"

You can up the values if they are too low. 

Use "pfctl -s info" to view how many entries there are in the state
table beforehand and compare it to afterwards.
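The before/after comparison can be scripted. A hypothetical sketch: the `info` variable stands in for real `pfctl -s info` output (the field layout is an assumption from this era of pf), and the 10000 default is from memory; verify it with `pfctl -s memory` on your own box:

```shell
#!/bin/sh
# Pasted sample standing in for `pfctl -s info` output; on a live box
# replace the echo with the real command.
info='State Table                          Total             Rate
  current entries                     1500'
limit=10000   # assumed default; verify with `pfctl -s memory`

# "current entries" is field 3 on its line in this sample layout.
cur=$(echo "$info" | awk '/current entries/ { print $3 }')
echo "states: $cur of $limit ($((cur * 100 / limit))% of limit)"
```

If the reported percentage rides near 100 while the p2p client is running, raising `set limit states` as above is the likely fix.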

HTH.

 -- joe.

Denim is old news. Who wants to look like a member of B*witched?



Re: "ping: sendto: No buffer space available" when using bittorrent or another p2p

2008-07-21 Thread Giancarlo Razzolini
Daniel Melameth escreveu:
> Does the issue go away when you make ping part of the high priority queue?
>
> On Mon, Jul 21, 2008 at 7:55 AM, Amaury De Ganseman
> <[EMAIL PROTECTED]> wrote:
>   
>> Hi all,
>>
>> I run OpenBSD 4.3 on my gateway. But when a machine behind the
>> NAT/gateway uses bittorrent (or gtk-gnutella) I lose packets.
>> For example when I try to ping www.google.com I can see "ping:
>> sendto: No buffer space available" (on my gateway)
>> It's the same if I use gtk-gnutella. I think it's related to the huge
>> number of states (about 1500 for bittorrent)
>>
>> Any ideas about the problem?
>>
>> Thanks for help.
>>
>>
>> Amaury De Ganseman
>> infos:
>> -
>>
>> dmesg in attach
>>
>> I use user-space pppoe
>>
>> I use traffic shaping:
>>
>> #ALTQ CBQ
>>
>> altq on $ext_if cbq bandwidth 350Kb queue {std}
>>
>> queue std bandwidth 100% { hprio, lprio}
>> queue lprio bandwidth 80% priority 0 cbq(default, borrow,red, ecn )
>> queue hprio bandwidth 20% priority 7 cbq(borrow, red, ecn)
>>
>>
>>
>> gateway# netstat -m
>> 37 mbufs in use:
>>32 mbufs allocated to data
>>1 mbuf allocated to packet headers
>>4 mbufs allocated to socket names and addresses
>> 32/306/6144 mbuf clusters in use (current/peak/max)
>> 708 Kbytes allocated to network (10% in use)
>> 0 requests for memory denied
>> 0 requests for memory delayed
>> 0 calls to protocol drain routines
>>
>> I use user-space pppoe
>> 
>
>
>   
Take a look at the list archives. I recall someone having the same
issue, and it was something to do with the network card; it was a
hardware problem. I have occasionally run into this myself, and only
rebooting the machine solved it.

My regards,

-- 
Giancarlo Razzolini
http://lock.razzolini.adm.br
Linux User 172199
Red Hat Certified Engineer no:804006389722501
Verify:https://www.redhat.com/certification/rhce/current/
Moleque Sem Conteudo Numero #002
OpenBSD Stable
Ubuntu 8.04 Hardy Heron
4386 2A6F FFD4 4D5F 5842  6EA0 7ABE BBAB 9C0E 6B85



Re: "ping: sendto: No buffer space available" when using bittorrent or another p2p

2008-07-21 Thread Daniel Melameth
Does the issue go away when you make ping part of the high priority queue?

On Mon, Jul 21, 2008 at 7:55 AM, Amaury De Ganseman
<[EMAIL PROTECTED]> wrote:
> Hi all,
>
> I run OpenBSD 4.3 on my gateway. But when a machine behind the
> NAT/gateway uses bittorrent (or gtk-gnutella) I lose packets.
> For example when I try to ping www.google.com I can see "ping:
> sendto: No buffer space available" (on my gateway)
> It's the same if I use gtk-gnutella. I think it's related to the huge
> number of states (about 1500 for bittorrent)
>
> Any ideas about the problem?
>
> Thanks for help.
>
>
> Amaury De Ganseman
> infos:
> -
>
> dmesg in attach
>
> I use user-space pppoe
>
> I use traffic shaping:
>
> #ALTQ CBQ
>
> altq on $ext_if cbq bandwidth 350Kb queue {std}
>
> queue std bandwidth 100% { hprio, lprio}
> queue lprio bandwidth 80% priority 0 cbq(default, borrow,red, ecn )
> queue hprio bandwidth 20% priority 7 cbq(borrow, red, ecn)
>
>
>
> gateway# netstat -m
> 37 mbufs in use:
>32 mbufs allocated to data
>1 mbuf allocated to packet headers
>4 mbufs allocated to socket names and addresses
> 32/306/6144 mbuf clusters in use (current/peak/max)
> 708 Kbytes allocated to network (10% in use)
> 0 requests for memory denied
> 0 requests for memory delayed
> 0 calls to protocol drain routines
>
> I use user-space pppoe



"ping: sendto: No buffer space available" when using bittorrent or another p2p

2008-07-21 Thread Amaury De Ganseman
Hi all,

I run OpenBSD 4.3 on my gateway. But when a machine behind the
NAT/gateway uses bittorrent (or gtk-gnutella) I lose packets.
For example when I try to ping www.google.com I can see "ping:
sendto: No buffer space available" (on my gateway)
It's the same if I use gtk-gnutella. I think it's related to the huge
number of states (about 1500 for bittorrent)

Any ideas about the problem?

Thanks for help.


Amaury De Ganseman
infos:
-

dmesg in attach

I use user-space pppoe

I use traffic shaping:

#ALTQ CBQ

altq on $ext_if cbq bandwidth 350Kb queue {std}

queue std bandwidth 100% { hprio, lprio}
queue lprio bandwidth 80% priority 0 cbq(default, borrow,red, ecn )
queue hprio bandwidth 20% priority 7 cbq(borrow, red, ecn)



gateway# netstat -m
37 mbufs in use:
32 mbufs allocated to data
1 mbuf allocated to packet headers
4 mbufs allocated to socket names and addresses
32/306/6144 mbuf clusters in use (current/peak/max)
708 Kbytes allocated to network (10% in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines

I use user-space pppoe

[demime 1.01d removed an attachment of type application/octet-stream which had 
a name of obsd.dmesg]



Getting sendto: No buffer space available errors... irq problem?

2007-09-21 Thread Joe Gibbens
I'm seeing some sendto: No buffer space available errors along with some
ssh session hangs.  The symptoms are intermittent and look a lot like
this thread:
http://monkey.org/openbsd/archive/misc/0309/msg00827.html
The system is 4.1 stable generic with the sangoma wanpipe driver.  Most
traffic is moving between the t1 card and em0.

Is this probably an IRQ issue?
If so,
is there any reason not to put the NICs and the WIC on the same IRQ? (Is
the context-switching advantage still valid?)
Does it make any real difference what the other controllers are on, as
long as they avoid each other and em*, bge0, and sdla1?
If it's more complicated than that, can anyone point me to some docs?

thanks in advance

--
Joe

$ dmesg
OpenBSD 4.1 (GENERIC) #1: Sat Sep  8 20:23:34 CDT 2007
[EMAIL PROTECTED]:/usr/src/sys/arch/i386/compile/GENERIC
cpu0: Intel(R) Pentium(R) 4 CPU 2.80GHz ("GenuineIntel" 686-class) 2.81 GHz
cpu0:
FPU,V86,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,SBF,SSE3,MWAIT,DS-CPL,CNXT-ID,CX16,xTPR
real mem  = 267747328 (261472K)
avail mem = 236670976 (231124K)
using 3299 buffers containing 13512704 bytes (13196K) of memory
mainbus0 (root)
bios0 at mainbus0: AT/286+ BIOS, date 03/24/06, BIOS32 rev. 0 @ 0xffe90,
SMBIOS rev. 2.3 @ 0xfa3d0 (48 entries)
bios0: Dell Computer Corporation PowerEdge 830
pcibios0 at bios0: rev 2.1 @ 0xf/0x1
pcibios0: PCI IRQ Routing Table rev 1.0 @ 0xfb900/208 (11 entries)
pcibios0: PCI Interrupt Router at 000:31:0 ("Intel 82801GB LPC" rev 0x00)
pcibios0: PCI bus #6 is the last bus
bios0: ROM list: 0xc/0x8000 0xc8000/0x1000 0xc9000/0x1600
0xec000/0x4000!
acpi at mainbus0 not configured
ipmi0 at mainbus0: version 1.5 interface KCS iobase 0xca8/8 spacing 4
cpu0 at mainbus0
pci0 at mainbus0 bus 0: configuration mode 1 (no bios)
pchb0 at pci0 dev 0 function 0 "Intel E7230 MCH" rev 0x00
ppb0 at pci0 dev 1 function 0 "Intel E7230 PCIE" rev 0x00
pci1 at ppb0 bus 1
ppb1 at pci0 dev 28 function 0 "Intel 82801GB PCIE" rev 0x01
pci2 at ppb1 bus 2
ppb2 at pci2 dev 0 function 0 "Intel PCIE-PCIE" rev 0x09
pci3 at ppb2 bus 3
em0 at pci3 dev 2 function 0 "Intel PRO/1000MT (82546GB)" rev 0x03: irq 3,
address 00:04:23:cb:75:f2
em1 at pci3 dev 2 function 1 "Intel PRO/1000MT (82546GB)" rev 0x03: irq 11,
address 00:04:23:cb:75:f3
ppb3 at pci0 dev 28 function 4 "Intel 82801G PCIE" rev 0x01
pci4 at ppb3 bus 4
bge0 at pci4 dev 0 function 0 "Broadcom BCM5721" rev 0x11, BCM5750 B1
(0x4101): irq 10, address 00:15:c5:5e:65:aa
brgphy0 at bge0 phy 1: BCM5750 10/100/1000baseT PHY, rev. 0
ppb4 at pci0 dev 28 function 5 "Intel 82801G PCIE" rev 0x01
pci5 at ppb4 bus 5
uhci0 at pci0 dev 29 function 0 "Intel 82801GB USB" rev 0x01: irq 11
usb0 at uhci0: USB revision 1.0
uhub0 at usb0
uhub0: Intel UHCI root hub, rev 1.00/1.00, addr 1
uhub0: 2 ports with 2 removable, self powered
uhci1 at pci0 dev 29 function 1 "Intel 82801GB USB" rev 0x01: irq 10
usb1 at uhci1: USB revision 1.0
uhub1 at usb1
uhub1: Intel UHCI root hub, rev 1.00/1.00, addr 1
uhub1: 2 ports with 2 removable, self powered
uhci2 at pci0 dev 29 function 2 "Intel 82801GB USB" rev 0x01: irq 5
usb2 at uhci2: USB revision 1.0
uhub2 at usb2
uhub2: Intel UHCI root hub, rev 1.00/1.00, addr 1
uhub2: 2 ports with 2 removable, self powered
ehci0 at pci0 dev 29 function 7 "Intel 82801GB USB" rev 0x01: irq 11
usb3 at ehci0: USB revision 2.0
uhub3 at usb3
uhub3: Intel EHCI root hub, rev 2.00/1.00, addr 1
uhub3: 6 ports with 6 removable, self powered
ppb5 at pci0 dev 30 function 0 "Intel 82801BA AGP" rev 0xe1
pci6 at ppb5 bus 6
sdla1 at pci6 dev 0 function 0 "Sangoma A10x" rev 0x01 irq 11
sdla1: Sangoma AFT-A101 T1/E1 adapter
vga1 at pci6 dev 5 function 0 "XGI Technology Volari Z7" rev 0x00
wsdisplay0 at vga1 mux 1: console (80x25, vt100 emulation)
wsdisplay0: screen 1-5 added (80x25, vt100 emulation)
ichpcib0 at pci0 dev 31 function 0 "Intel 82801GB LPC" rev 0x01: PM disabled
pciide0 at pci0 dev 31 function 1 "Intel 82801GB IDE" rev 0x01: DMA, channel
0 configured to compatibility, channel 1 configured to compatibility
atapiscsi0 at pciide0 channel 0 drive 0
scsibus0 at atapiscsi0: 2 targets
cd0 at scsibus0 targ 0 lun 0:  SCSI0
5/cdrom removable
cd0(pciide0:0:0): using PIO mode 4, Ultra-DMA mode 2
pciide0: channel 1 ignored (disabled)
pciide1 at pci0 dev 31 function 2 "Intel 82801GB SATA" rev 0x01: DMA,
channel 0 configured to native-PCI, channel 1 configured to native-PCI
pciide1: using irq 11 for native-PCI interrupt
wd0 at pciide1 channel 0 drive 0: 
wd0: 16-sector PIO, LBA48, 76293MB, 15625 sectors
wd0(pciide1:0:0): using PIO mode 4, Ultra-DMA mode 5
ichiic0 at pci0 dev 31 function 3 "Intel 82801GB SMBus"

Re: sendto: No buffer space available

2007-03-19 Thread Stefan Sperling
On Mon, Mar 19, 2007 at 01:40:02PM +0100, Walter Doerr wrote:
> Hello,
>
> I am using an OpenBSD 4.0 box connected to a 2Mbit SDSL line in
> Germany (using user space PPP).
>
> When pinging a host across the SDSL line, I get an occasional
> "sendto: No buffer space available" message:
>
>
> 64 bytes from xxx.xxx.xxx.xxx: icmp_seq=566 ttl=254 time=62.674 ms
> 64 bytes from xxx.xxx.xxx.xxx: icmp_seq=568 ttl=254 time=38.090 ms
> ping: sendto: No buffer space available
> ping: wrote xxx.xxx.xx 64 chars, ret=-1
> 64 bytes from xxx.xxx.xxx.xxx: icmp_seq=569 ttl=254 time=1320.651 ms
> 64 bytes from xxx.xxx.xxx.xxx: icmp_seq=571 ttl=254 time=35.792 ms

That happens to me when I use ppp for VPN in conjunction with pptp.
If the VPN server cannot be reached for a while, e.g. because the link
the VPN tunnel runs on top of is suddenly brought down, ppp will
continue to buffer packets until its buffer is full. Then I see
that message when I try to send more packets.

> Does this message point to a problem within OpenBSD or is this a
> problem with the SDSL line?

No clue. You could enable very verbose logging (set log All)
and see if you find some indication in the logs.

If you use pppoe for your SDSL, you might want to give pppoe(4) a try.
It's in-kernel and has less overhead because it does not context switch
as much as user-ppp.
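For reference, switching to in-kernel pppoe(4) is a small config change. The fragment below is only a sketch modeled on the pppoe(4) manual's example; the Ethernet device (ne0) and the credentials are placeholders to replace with your ISP's values (see pppoe(4) and hostname.if(5)). /etc/hostname.pppoe0:

```
inet 0.0.0.0 255.255.255.255 NONE \
        pppoedev ne0 authproto pap \
        authname 'your-isp-login' authkey 'your-isp-password' up
dest 0.0.0.1
!/sbin/route add default -ifp pppoe0 0.0.0.1
```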

--
stefan
http://stsp.in-berlin.de PGP Key: 0xF59D25F0

[demime 1.01d removed an attachment of type application/pgp-signature]



sendto: No buffer space available

2007-03-19 Thread Walter Doerr
Hello,

I am using an OpenBSD 4.0 box connected to a 2Mbit SDSL line in
Germany (using user space PPP).

When pinging a host across the SDSL line, I get an occasional   
"sendto: No buffer space available" message:


64 bytes from xxx.xxx.xxx.xxx: icmp_seq=566 ttl=254 time=62.674 ms
64 bytes from xxx.xxx.xxx.xxx: icmp_seq=568 ttl=254 time=38.090 ms
ping: sendto: No buffer space available
ping: wrote xxx.xxx.xx 64 chars, ret=-1
64 bytes from xxx.xxx.xxx.xxx: icmp_seq=569 ttl=254 time=1320.651 ms
64 bytes from xxx.xxx.xxx.xxx: icmp_seq=571 ttl=254 time=35.792 ms

Does this message point to a problem within OpenBSD or is this a
problem with the SDSL line?

Why is the ping packet not simply dropped but rather delayed?

I have googled for the error message and some replies indicated that
it is a problem within some ethernet card drivers, so I switched from
fxp to em but the problem persists.


This is the output of netstat -m in case it matters:

443 mbufs in use:
437 mbufs allocated to data
3 mbufs allocated to packet headers
3 mbufs allocated to socket names and addresses
436/552/6144 mbuf clusters in use (current/peak/max)
1248 Kbytes allocated to network (78% in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines


Any help is greatly appreciated.


Regards,

-Walter Doerr



Re: sendto: No buffer space available

2007-03-15 Thread Martin Schröder

2007/3/15, Claudio Jeker <[EMAIL PROTECTED]>:

I think I mentioned this already a few times but I'll do it again.
"sendto: No buffer space available" means an ENOBUFS error was returned.
On modern systems ENOBUFS is almost only generated by the interfaces and
their queues (e.g. if you enable a too restrictive altq limit).
So if you have altq enabled I would look at the pfctl -sq -vv output.


I have the same problem, but disabling altq doesn't help.

I can easily reproduce it: the firewall is a K6/3-400 with 4.0, sis
(tun0) and rl, running squid. If the client (Linux 2.6.16, SUSE 10.1)
runs at least two downloads with Firefox and DownThemAll, i.e. more than
ca. 4 HTTP requests in parallel, the network will stop occasionally, but
recover.


A possible workaround is to switch to the kernel pppoe(4) version.


Which doesn't do everything pppoe(8) does. :-{

Best
  Martin



Re: sendto: No buffer space available

2007-03-15 Thread Claudio Jeker
On Thu, Mar 15, 2007 at 05:42:48PM +0100, Walter Doerr wrote:
> Hello,
> 
> I am using an OpenBSD 4.0 box connected to a 2Mbit SDSL line in
> Germany (using user space PPP).
> 
> When pinging a host across the SDSL line, I get an occasional
> "sendto: No buffer space available" message:
> 
> 
> 64 bytes from xxx.xxx.xxx.xxx: icmp_seq=566 ttl=254 time=62.674 ms
> 64 bytes from xxx.xxx.xxx.xxx: icmp_seq=568 ttl=254 time=38.090 ms
> ping: sendto: No buffer space available
> ping: wrote xxx.xxx.xx 64 chars, ret=-1
> 64 bytes from xxx.xxx.xxx.xxx: icmp_seq=569 ttl=254 time=1320.651 ms
> 64 bytes from xxx.xxx.xxx.xxx: icmp_seq=571 ttl=254 time=35.792 ms
> 
> Does this message point to a problem within OpenBSD or is this a
> problem with the SDSL line?
> 
> Why is the ping packet not simply dropped but rather delayed?
> 
> I have googled for the error message and some replies indicated that
> it is a problem within some ethernet card drivers, so I switched from
> fxp to em but the problem persists.
> 

I think I mentioned this already a few times but I'll do it again.
"sendto: No buffer space available" means an ENOBUFS error was returned.
On modern systems ENOBUFS is almost only generated by the interfaces and
their queues (e.g. if you enable a too restrictive altq limit).
So if you have altq enabled I would look at the pfctl -sq -vv output.

I doubt it is the fxp/em card -- you're pinging the other side of the SDSL
line, so the traffic flows first through tun(4).
The interface queue on tun(4) can get full because userland ppp fails to
read fast enough or blocks for some time.

As the ping is delayed by 1 second, I think ppp blocked and stopped reading
/dev/tun0 for around 1 second. The million-dollar question is why it
blocked.

A possible workaround is to switch to the kernel pppoe(4) version.
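Claudio's pointer at the queue counters can be automated. A hypothetical sketch: the `stats` variable is a mocked-up line in the style of `pfctl -vv -s queue` output for the cbq setup quoted earlier (the exact layout varies by pf version), and the awk just pulls out the dropped-packet count:

```shell
#!/bin/sh
# Mocked-up sample in the style of `pfctl -vv -s queue` output; on a
# live box replace the echo with the real command.
stats='queue lprio bandwidth 280Kb priority 0 cbq( borrow red ecn default )
  [ pkts:      91242  bytes: 118040432  dropped pkts:    337 bytes: 451186 ]'

# Find the "pkts:" token that follows "dropped" and print its value.
echo "$stats" | awk '/dropped pkts:/ {
        for (i = 2; i <= NF; i++)
                if ($i == "pkts:" && $(i-1) == "dropped")
                        print "dropped:", $(i+1)
}'
```

A steadily climbing dropped count on the queue carrying your pings would be the altq-induced flavour of ENOBUFS described above; a flat counter points the blame elsewhere (tun(4), the driver, etc.).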
-- 
:wq Claudio



sendto: No buffer space available

2007-03-15 Thread Walter Doerr
Hello,

I am using an OpenBSD 4.0 box connected to a 2Mbit SDSL line in
Germany (using user space PPP).

When pinging a host across the SDSL line, I get an occasional
"sendto: No buffer space available" message:


64 bytes from xxx.xxx.xxx.xxx: icmp_seq=566 ttl=254 time=62.674 ms
64 bytes from xxx.xxx.xxx.xxx: icmp_seq=568 ttl=254 time=38.090 ms
ping: sendto: No buffer space available
ping: wrote xxx.xxx.xx 64 chars, ret=-1
64 bytes from xxx.xxx.xxx.xxx: icmp_seq=569 ttl=254 time=1320.651 ms
64 bytes from xxx.xxx.xxx.xxx: icmp_seq=571 ttl=254 time=35.792 ms

Does this message point to a problem within OpenBSD or is this a
problem with the SDSL line?

Why is the ping packet not simply dropped but rather delayed?

I have googled for the error message and some replies indicated that
it is a problem within some ethernet card drivers, so I switched from
fxp to em but the problem persists.


This is the output of netstat -m in case it matters:

443 mbufs in use:
437 mbufs allocated to data
3 mbufs allocated to packet headers
3 mbufs allocated to socket names and addresses
436/552/6144 mbuf clusters in use (current/peak/max)
1248 Kbytes allocated to network (78% in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines


Any help is greatly appreciated.


Regards,

-Walter Doerr



Re: ping: sendto: No buffer space available (SOLVED)

2006-07-14 Thread Jason Dixon

On Jul 14, 2006, at 2:32 PM, Kian Mohageri wrote:


On 7/14/06, Jason Dixon <[EMAIL PROTECTED]> wrote:


We have an OpenBSD 3.8 firewall that has been in production for the
last six months.  Until the last week or two, everything has been
great.  Recently while diagnosing a problem with the bonded T1 pair,
I noticed the following error while pinging the gateway:

ping: sendto: No buffer space available

This always coincided with a very high spike (1000-3000ms) in
latency, which would usually go back down to ~0ms and operate
normally.  The interface in question is an Intel em connected to a
Cisco 2950 trunk.  The other two interfaces (em1, sk0) are working
fine.  The LAN interface (em1) pushes *much* more data, as it routes
between 13 internal VLANs.  I've also had another box perform the
same ping test concurrently to confirm this isn't a problem with the
gateway.


This is the same behavior I would see when trying to ping out our internal
em(4) interface when the transmit queue filled up (or it was thought to be
full).  You can confirm that is the case by checking ifconfig (look for
OACTIVE).


I'm not seeing OACTIVE on em0, vlan1 (vlandev em0) or carp1 (carpdev vlan1).



But, does that interface ever fail completely and require an interface
restart, or just spike?


I found the problem.  It ended up being excessive (read: BAD) queueing
on my part.  In order to further optimize VoIP/VPN traffic for some
remote offices, I recently removed the borrow option from some parent
CBQ queues.  This was causing much more packet buffering than you would
normally see.  It makes me wonder though... how does one evaluate the
buffers on an interface used by altq?


Thanks,

--
Jason Dixon
DixonGroup Consulting
http://www.dixongroup.net



Re: ping: sendto: No buffer space available

2006-07-14 Thread Kian Mohageri
On 7/14/06, Jason Dixon <[EMAIL PROTECTED]> wrote:
>
> We have an OpenBSD 3.8 firewall that has been in production for the
> last six months.  Until the last week or two, everything has been
> great.  Recently while diagnosing a problem with the bonded T1 pair,
> I noticed the following error while pinging the gateway:
>
> ping: sendto: No buffer space available
>
> This always coincided with a very high spike (1000-3000ms) in
> latency, which would usually go back down to ~0ms and operate
> normally.  The interface in question is an Intel em connected to a
> Cisco 2950 trunk.  The other two interfaces (em1, sk0) are working
> fine.  The LAN interface (em1) pushes *much* more data, as it routes
> between 13 internal VLANs.  I've also had another box perform the
> same ping test concurrently to confirm this isn't a problem with the
> gateway.




This is the same behavior I would see when trying to ping out our internal
em(4) interface when the transmit queue filled up (or it was thought to be
full).  You can confirm that is the case by checking ifconfig (look for
OACTIVE).
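The OACTIVE check is easy to script. A sketch; the flags line below is a pasted example (the hex value is illustrative, not from the thread), so on a live box substitute real `ifconfig em0` output:

```shell
#!/bin/sh
# Sample ifconfig flags line; the hex value is illustrative only.
flags='em0: flags=8b43<UP,BROADCAST,RUNNING,OACTIVE,SIMPLEX,MULTICAST> mtu 1500'

case "$flags" in
*OACTIVE*) msg="em0: OACTIVE set, transmit queue looks wedged" ;;
*)         msg="em0: OACTIVE not set" ;;
esac
echo "$msg"
```

Run from cron or a watchdog loop, this catches the wedged-transmit-queue state the moment it appears instead of waiting for a user to notice the ping errors.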

But, does that interface ever fail completely and require an interface
restart, or just spike?

Kian



ping: sendto: No buffer space available

2006-07-14 Thread Jason Dixon
We have an OpenBSD 3.8 firewall that has been in production for the  
last six months.  Until the last week or two, everything has been  
great.  Recently while diagnosing a problem with the bonded T1 pair,  
I noticed the following error while pinging the gateway:


ping: sendto: No buffer space available

This always coincided with a very high spike (1000-3000ms) in  
latency, which would usually go back down to ~0ms and operate  
normally.  The interface in question is an Intel em connected to a  
Cisco 2950 trunk.  The other two interfaces (em1, sk0) are working  
fine.  The LAN interface (em1) pushes *much* more data, as it routes  
between 13 internal VLANs.  I've also had another box perform the  
same ping test concurrently to confirm this isn't a problem with the  
gateway.


I've seen other posts in the misc@ archives for this error, but the  
only solutions were patches to OpenBSD 3.7 or specific non-em drivers  
(xl, fxp).  Does anyone have an idea what might be causing this?



OpenBSD 3.8 (GENERIC) #138: Sat Sep 10 15:41:37 MDT 2005
[EMAIL PROTECTED]:/usr/src/sys/arch/i386/compile/GENERIC
cpu0: Intel(R) Celeron(R) CPU 2.60GHz ("GenuineIntel" 686-class) 2.60  
GHz
cpu0:  
FPU,V86,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36, 
CFLUSH,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,SBF,CNXT-ID

real mem  = 266838016 (260584K)
avail mem = 236597248 (231052K)
using 3282 buffers containing 13443072 bytes (13128K) of memory
mainbus0 (root)
bios0 at mainbus0: AT/286+(8b) BIOS, date 03/09/05, BIOS32 rev. 0 @  
0xfb770

apm0 at bios0: Power Management spec V1.2
apm0: AC on, battery charge unknown
apm0: flags 70102 dobusy 1 doidle 1
pcibios0 at bios0: rev 2.1 @ 0xf/0xdf64
pcibios0: PCI IRQ Routing Table rev 1.0 @ 0xfde80/224 (12 entries)
pcibios0: PCI Exclusive IRQs: 5 9 10 11
pcibios0: no compatible PCI ICU found: ICU vendor 0x8086 product 0x25a1
pcibios0: PCI bus #3 is the last bus
bios0: ROM list: 0xc/0x8000 0xc8000/0x1000 0xc9000/0x1000
cpu0 at mainbus0
pci0 at mainbus0 bus 0: configuration mode 1 (no bios)
pchb0 at pci0 dev 0 function 0 "Intel 82875P Host" rev 0x02
ppb0 at pci0 dev 3 function 0 "Intel 82875P PCI-CSA" rev 0x02
pci1 at ppb0 bus 1
em0 at pci1 dev 1 function 0 "Intel PRO/1000CT (82547GI)" rev 0x00:  
irq 5, address: 00:30:48:84:ac:3c

ppb1 at pci0 dev 28 function 0 "Intel 6300ESB PCIX" rev 0x02
pci2 at ppb1 bus 2
skc0 at pci2 dev 1 function 0 "Schneider & Koch SK-9821 v2.0" rev  
0x20: irq 9

skc0: Marvell Yukon (0x1)
sk0 at skc0 port A: address 00:00:5a:9f:31:b4
eephy0 at sk0 phy 0: Marvell 88E1011 Gigabit PHY, rev. 3
uhci0 at pci0 dev 29 function 0 "Intel 6300ESB USB" rev 0x02: irq 10
usb0 at uhci0: USB revision 1.0
uhub0 at usb0
uhub0: Intel UHCI root hub, rev 1.00/1.00, addr 1
uhub0: 2 ports with 2 removable, self powered
uhci1 at pci0 dev 29 function 1 "Intel 5300ESB USB" rev 0x02: irq 11
usb1 at uhci1: USB revision 1.0
uhub1 at usb1
uhub1: Intel UHCI root hub, rev 1.00/1.00, addr 1
uhub1: 2 ports with 2 removable, self powered
"Intel 6300ESB WDT" rev 0x02 at pci0 dev 29 function 4 not configured
"Intel 6300ESB APIC" rev 0x02 at pci0 dev 29 function 5 not configured
ehci0 at pci0 dev 29 function 7 "Intel 6300ESB USB" rev 0x02: irq 9
usb2 at ehci0: USB revision 2.0
uhub2 at usb2
uhub2: Intel EHCI root hub, rev 2.00/1.00, addr 1
uhub2: 4 ports with 4 removable, self powered
ppb2 at pci0 dev 30 function 0 "Intel 82801BA AGP" rev 0x0a
pci3 at ppb2 bus 3
vga1 at pci3 dev 9 function 0 "ATI Rage XL" rev 0x27
wsdisplay0 at vga1 mux 1: console (80x25, vt100 emulation)
wsdisplay0: screen 1-5 added (80x25, vt100 emulation)
em1 at pci3 dev 10 function 0 "Intel PRO/1000MT (82541GI)" rev 0x00:  
irq 11, address: 00:30:48:84:ac:3d

ichpcib0 at pci0 dev 31 function 0 "Intel 6300ESB LPC" rev 0x02
pciide0 at pci0 dev 31 function 1 "Intel 6300ESB IDE" rev 0x02: DMA,  
channel 0 configured to compatibility, channel 1 configured to  
compatibility

pciide0: channel 0 disabled (no drives)
wd0 at pciide0 channel 1 drive 0: 
wd0: 4-sector PIO, LBA, 488MB, 1000944 sectors
wd0(pciide0:1:0): using PIO mode 4
"Intel 6300ESB SMBus" rev 0x02 at pci0 dev 31 function 3 not configured
isa0 at ichpcib0
isadma0 at isa0
pckbc0 at isa0 port 0x60/5
pckbd0 at pckbc0 (kbd slot)
pckbc0: using irq 1 for kbd slot
wskbd0 at pckbd0: console keyboard, using wsdisplay0
pmsi0 at pckbc0 (aux slot)
pckbc0: using irq 12 for aux slot
wsmouse0 at pmsi0 mux 0
pcppi0 at isa0 port 0x61
midi0 at pcppi0: 
spkr0 at pcppi0
sysbeep0 at pcppi0
lpt0 at isa0 port 0x378/4 irq 7
lm0 at isa0 port 0x290/8: W83627HF
npx0 at isa0 port 0xf0/16: using exception 16
pccom0 at isa0 port 0x3f8/8 irq 4: ns16550a, 16 byte fifo
pccom1 at isa0 port 0x2f8/8 irq 3: ns16550a, 16 byte fifo
fdc0 at isa0 port 0x3f0/6 irq 6 drq 2
biomask ef45 netmask 

ping: sendto: No buffer space available

2006-02-15 Thread Oliver J. Morais
i386, OpenBSD 3.9-beta (GENERIC) #597: Sun Feb  5 21:14:35 MST 2006

Just played around pinging to see the following:

Pinging from box A (10.0.0.13) to box B (10.0.0.5) with
"sudo ping -f -s 1024 10.0.0.5":
everything fine. Fire up another xterm, start the same ping a
second time -> wow.

[...]
ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
....ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
.ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
.ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
.ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
.ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
.ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
..ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
[...]

And so on... As soon as I kill one floodping, the other runs fine
again. Box A has an fxp0, box B an re0, connected via a cheap Gigabit switch.
(Just drop a line if you need the full dmesg.)

Not that I'm too concerned since this isn't a "real world problem" to
me, but hey, who knows ;-)

kind regards,
oliver