ral0: ping: sendto: No buffer space available

2008-10-05 Thread Jurjen Oskam
Hi there,

My ral0 in hostap mode works great, and has for months. (Except for the
Nintendo Wii issue I posted about earlier, and there I suspect the Wii
itself.)

However, just now I ran into a problem. The ral is in a server that
also has an Internet connection; the server does NAT for the wireless
clients. I was uploading some photos from a wireless client to some
website when wireless traffic just stopped. The association was still
there, but nothing from the wireless client got through, not even
traffic to the server itself. I then logged into the machine the ral0
interface is in (using its console). Nothing showed up in dmesg, but
when I tried to ping the wireless client, I got this:

ping: sendto: No buffer space available
ping: wrote 192.168.2.18 64 chars, ret=-1

Other traffic on the same server (using other interfaces) worked fine.
What can I do to investigate this more thoroughly, should it happen
again? It seems to be a very rare event, since it took months to occur
even once. All I could think of was to run some netstat commands and
check the number of states with pfctl.

netstat -m showed:

718 mbufs in use:
    705 mbufs allocated to data
    4 mbufs allocated to packet headers
    9 mbufs allocated to socket names and addresses
87/128/6144 mbuf clusters in use (current/peak/max)
444 Kbytes allocated to network (79% in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines

netstat -an showed:

Active Internet connections (including servers)
Proto Recv-Q Send-Q  Local Address         Foreign Address       (state)
ip         0      0  *.*                   *.*                   1
Active Internet connections (including servers)
Proto Recv-Q Send-Q  Local Address         Foreign Address       (state)
tcp        0      0  192.168.1.1.6161      192.168.1.1.53        TIME_WAIT
tcp        0      0  127.0.0.1.735         127.0.0.1.111         TIME_WAIT
tcp        0      0  192.168.1.1.48623     192.168.1.1.53        TIME_WAIT
tcp        0      0  127.0.0.1.983         127.0.0.1.111         TIME_WAIT
tcp        0      0  192.168.1.1.53        192.168.1.1.5394      TIME_WAIT
tcp        0      0  192.168.1.1.22        192.168.2.18.59972    ESTABLISHED
tcp        0      0  *.25                  *.*                   LISTEN
tcp        0      0  127.0.0.1.3493        127.0.0.1.12189       ESTABLISHED
tcp        0      0  127.0.0.1.12189       127.0.0.1.3493        ESTABLISHED
tcp        0      0  *.3493                *.*                   LISTEN
tcp        0      0  127.0.0.1.8026        *.*                   LISTEN
tcp        0      0  *.8025                *.*                   LISTEN
tcp        0      0  *.22                  *.*                   LISTEN
tcp        0      0  *.37                  *.*                   LISTEN
tcp        0      0  *.13                  *.*                   LISTEN
tcp        0      0  *.113                 *.*                   LISTEN
tcp        0      0  *.21                  *.*                   LISTEN
tcp        0      0  192.168.1.1.53        *.*                   LISTEN
tcp        0      0  82.95.239.113.53      *.*                   LISTEN
tcp        0      0  82.95.239.113.80      *.*                   LISTEN
tcp        0      0  *.653                 *.*                   LISTEN
tcp        0      0  *.2049                *.*                   LISTEN
tcp        0      0  *.800                 *.*                   LISTEN
tcp        0      0  127.0.0.1.111         *.*                   LISTEN
tcp        0      0  *.111                 *.*                   LISTEN
Active Internet connections (including servers)
Proto Recv-Q Send-Q  Local Address         Foreign Address       (state)
udp        0      0  82.95.239.113.46196   88.191.14.223.123
udp        0      0  82.95.239.113.39188   78.47.5.66.123
udp        0      0  82.95.239.113.24467   141.40.103.101.123
udp        0      0  82.95.239.113.21396   212.13.207.101.123
udp        0      0  82.95.239.113.20148   194.88.2.88.123
udp        0      0  *.5060                *.*
udp        0      0  127.0.0.1.512         *.*
udp        0      0  *.69                  *.*
udp        0      0  82.95.239.113.53      *.*
udp        0      0  192.168.1.1.53        *.*
udp        0      0  127.0.0.1.775         127.0.0.1.2049
udp        0      0  192.168.1.1.123       *.*
udp        0      0  192.168.2.1.123       *.*
udp        0      0  82.95.239.113.123     *.*
udp        0      0  127.0.0.1.123         *.*
udp        0      0  *.734                 *.*
udp        0      0  *.2049                *.*
udp        0      0  *.756                 *.*
udp        0      0  127.0.0.1.111         *.*
udp        0      0  *.111                 *.*
udp        0      0  *.514                 *.*
Active Internet connections (including servers)
Proto Recv-Q Send-Q  Local Address         Foreign Address       (state)
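
To investigate more thoroughly the next time this happens, it is probably
worth capturing a few things while the condition is still active. A rough
sketch (ral0 and the pf setup are taken from the report above; nothing
else is assumed):

# state table usage versus the configured hard limit
pfctl -s info | grep "current entries"
pfctl -s memory

# per-interface errors/drops, and whether the driver has marked its
# transmit queue full (look for OACTIVE in the flags line)
netstat -in
ifconfig ral0

# mbuf usage; run it a few times to see whether it keeps climbing
netstat -m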

Re: ping: sendto: No buffer space available when using bittorrent or another p2p

2008-07-22 Thread Joe Warren-Meeks
On Mon, Jul 21, 2008 at 10:53:23AM -0600, Daniel Melameth wrote:
 On Mon, Jul 21, 2008 at 10:39 AM, Joe Warren-Meeks [EMAIL PROTECTED] wrote:
  The default limit for the number of states is quite low. Try adding the
  following to pf.conf and running pfctl -vf /etc/pf.conf
  
  set limit { states 5000, frags 5000, src-nodes 5000 }
  
 
  You can up the values if they are too low.
 
  Use pfctl -s info to view how many entries there are in the state
  table beforehand and compare it afterwards.
 
 FWIW, the default state and src-nodes limit is twice what you have above.

Oops, you are right :-) That was meant to be 5, not 5000. I have mine
set to 500,000, as we have loads of RAM and a load of busy sites.
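
For reference, a larger limit along those lines looks something like this
in the options section of pf.conf (the numbers below are only
illustrative; each state entry costs kernel memory, so size them to the
RAM actually available):

# pf.conf options -- illustrative values, not a recommendation
set limit { states 500000, frags 25000, src-nodes 50000 }

After reloading with pfctl -vf /etc/pf.conf, pfctl -s memory shows the
hard limits that actually took effect, and pfctl -s info shows how close
the state table gets to them.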

 -- joe.

It'll cost you many a shilling.



ping: sendto: No buffer space available when using bittorrent or another p2p

2008-07-21 Thread Amaury De Ganseman
Hi all,

I run OpenBSD 4.3 on my gateway, but when a machine behind the
NAT/gateway uses BitTorrent (or gtk-gnutella) I lose packets.
For example, when I try to ping www.google.com I see "ping:
sendto: No buffer space available" (on my gateway).
It's the same if I use gtk-gnutella. I think it's related to the huge
number of states (about 1500 for BitTorrent).

Any idea what the problem is?

Thanks for any help.


Amaury De Ganseman
infos:
-

dmesg attached

I use user-space pppoe

I use traffic shaping:

#ALTQ CBQ

altq on $ext_if cbq bandwidth 350Kb queue {std}

queue std bandwidth 100% { hprio, lprio}
queue lprio bandwidth 80% priority 0 cbq(default, borrow,red, ecn )
queue hprio bandwidth 20% priority 7 cbq(borrow, red, ecn)
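
With a setup like this it may be worth checking whether the lprio queue
saturates and starts dropping while BitTorrent runs; the per-queue
counters show that directly. A sketch, using nothing beyond the queue
definitions above:

# per-queue statistics: packets and bytes sent, drops, and the current
# queue length for each CBQ queue
pfctl -v -s queue

# the same, repeated every few seconds so spikes are easier to spot
pfctl -vv -s queue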



gateway# netstat -m
37 mbufs in use:
    32 mbufs allocated to data
    1 mbuf allocated to packet headers
    4 mbufs allocated to socket names and addresses
32/306/6144 mbuf clusters in use (current/peak/max)
708 Kbytes allocated to network (10% in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines

I use user-space pppoe

[demime 1.01d removed an attachment of type application/octet-stream which had a name of obsd.dmesg]



Re: ping: sendto: No buffer space available when using bittorrent or another p2p

2008-07-21 Thread Daniel Melameth
Does the issue go away when you make ping part of the high priority queue?
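
For example, something along these lines in pf.conf would move ICMP (and
TCP ACKs) into the high-priority queue. This is only a sketch, reusing
the $ext_if macro and the hprio/lprio queue names from the config quoted
below:

# send ICMP out through the high-priority queue
pass out on $ext_if inet proto icmp queue hprio

# keep bulk TCP in lprio, but let empty ACKs and lowdelay packets use
# hprio so interactive traffic is not starved behind the uploads
pass out on $ext_if proto tcp queue (lprio, hprio)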

On Mon, Jul 21, 2008 at 7:55 AM, Amaury De Ganseman
[EMAIL PROTECTED] wrote:
 Hi all,

 I run OpenBSD 4.3 on my gateway, but when a machine behind the
 NAT/gateway uses BitTorrent (or gtk-gnutella) I lose packets.
 For example, when I try to ping www.google.com I see "ping:
 sendto: No buffer space available" (on my gateway).
 It's the same if I use gtk-gnutella. I think it's related to the huge
 number of states (about 1500 for BitTorrent).

 Any idea what the problem is?

 Thanks for any help.


 Amaury De Ganseman
 infos:
 -

 dmesg attached

 I use user-space pppoe

 I use traffic shaping:

 #ALTQ CBQ

 altq on $ext_if cbq bandwidth 350Kb queue {std}

 queue std bandwidth 100% { hprio, lprio}
 queue lprio bandwidth 80% priority 0 cbq(default, borrow,red, ecn )
 queue hprio bandwidth 20% priority 7 cbq(borrow, red, ecn)



 gateway# netstat -m
 37 mbufs in use:
32 mbufs allocated to data
1 mbuf allocated to packet headers
4 mbufs allocated to socket names and addresses
 32/306/6144 mbuf clusters in use (current/peak/max)
 708 Kbytes allocated to network (10% in use)
 0 requests for memory denied
 0 requests for memory delayed
 0 calls to protocol drain routines

 I use user-space pppoe



Re: ping: sendto: No buffer space available when using bittorrent or another p2p

2008-07-21 Thread Giancarlo Razzolini
Daniel Melameth wrote:
 Does the issue go away when you make ping part of the high priority queue?

 On Mon, Jul 21, 2008 at 7:55 AM, Amaury De Ganseman
 [EMAIL PROTECTED] wrote:
   
 Hi all,

 I run OpenBSD 4.3 on my gateway, but when a machine behind the
 NAT/gateway uses BitTorrent (or gtk-gnutella) I lose packets.
 For example, when I try to ping www.google.com I see "ping:
 sendto: No buffer space available" (on my gateway).
 It's the same if I use gtk-gnutella. I think it's related to the huge
 number of states (about 1500 for BitTorrent).

 Any idea what the problem is?

 Thanks for any help.


 Amaury De Ganseman
 infos:
 -

 dmesg attached

 I use user-space pppoe

 I use traffic shaping:

 #ALTQ CBQ

 altq on $ext_if cbq bandwidth 350Kb queue {std}

 queue std bandwidth 100% { hprio, lprio}
 queue lprio bandwidth 80% priority 0 cbq(default, borrow,red, ecn )
 queue hprio bandwidth 20% priority 7 cbq(borrow, red, ecn)



 gateway# netstat -m
 37 mbufs in use:
32 mbufs allocated to data
1 mbuf allocated to packet headers
4 mbufs allocated to socket names and addresses
 32/306/6144 mbuf clusters in use (current/peak/max)
 708 Kbytes allocated to network (10% in use)
 0 requests for memory denied
 0 requests for memory delayed
 0 calls to protocol drain routines

 I use user-space pppoe
 


   
Take a look at the list archives. I recall someone having the same
issue, and it turned out to be something to do with the network card;
it was a hardware problem. I have run into this a few times myself,
and only rebooting the machine solved the problem.
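
If it is the card wedging, an interface reset may be worth trying before
a full reboot the next time it happens. Purely a sketch, with the
interface name as a placeholder (this briefly drops all traffic on that
interface):

# bounce the suspect interface; far less disruptive than a reboot
ifconfig em0 down
ifconfig em0 up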

My regards,

-- 
Giancarlo Razzolini
http://lock.razzolini.adm.br
Linux User 172199
Red Hat Certified Engineer no:804006389722501
Verify:https://www.redhat.com/certification/rhce/current/
Moleque Sem Conteudo Numero #002
OpenBSD Stable
Ubuntu 8.04 Hardy Heron
4386 2A6F FFD4 4D5F 5842  6EA0 7ABE BBAB 9C0E 6B85



Re: ping: sendto: No buffer space available when using bittorrent or another p2p

2008-07-21 Thread Joe Warren-Meeks
On Mon, Jul 21, 2008 at 03:55:41PM +0200, Amaury De Ganseman wrote:

Hey there,
 
 I run OpenBSD 4.3 on my gateway, but when a machine behind the
 NAT/gateway uses BitTorrent (or gtk-gnutella) I lose packets.
 For example, when I try to ping www.google.com I see "ping:
 sendto: No buffer space available" (on my gateway).
 It's the same if I use gtk-gnutella. I think it's related to the huge
 number of states (about 1500 for BitTorrent).

The default limit for the number of states is quite low. Try adding the
following to pf.conf and running pfctl -vf /etc/pf.conf

set limit { states 5000, frags 5000, src-nodes 5000 } 


You can up the values if they are too low. 

Use pfctl -s info to view how many entries there are in the state
table beforehand and compare it afterwards.
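
Concretely, the before/after comparison could look something like this
(just a sketch; the grep patterns only pick the relevant lines out of the
pfctl output):

# current number of state entries, and the hard limit pf enforces
pfctl -s info | grep "current entries"
pfctl -s memory | grep states

# total states, counted directly from the state listing
pfctl -s states | wc -l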

HTH.

 -- joe.

Denim is old news. Who wants to look like a member of B*witched?



Re: ping: sendto: No buffer space available when using bittorrent or another p2p

2008-07-21 Thread Daniel Melameth
On Mon, Jul 21, 2008 at 10:39 AM, Joe Warren-Meeks [EMAIL PROTECTED] wrote:
 The default limit for the number of states is quite low. Try adding the
 following to pf.conf and running pfctl -vf /etc/pf.conf
 
 set limit { states 5000, frags 5000, src-nodes 5000 }
 

 You can up the values if they are too low.

 Use pfctl -s info to view how many entries there are in the state
 table beforehand and compare it afterwards.

FWIW, the default state and src-nodes limit is twice what you have above.



ping: sendto: No buffer space available

2006-07-14 Thread Jason Dixon
We have an OpenBSD 3.8 firewall that has been in production for the  
last six months.  Until the last week or two, everything has been  
great.  Recently while diagnosing a problem with the bonded T1 pair,  
I noticed the following error while pinging the gateway:


ping: sendto: No buffer space available

This always coincided with a very high spike (1000-3000ms) in  
latency, which would usually go back down to ~0ms and operate  
normally.  The interface in question is an Intel em connected to a  
Cisco 2950 trunk.  The other two interfaces (em1, sk0) are working  
fine.  The LAN interface (em1) pushes *much* more data, as it routes  
between 13 internal VLANs.  I've also had another box perform the  
same ping test concurrently to confirm this isn't a problem with the  
gateway.


I've seen other posts in the misc@ archives for this error, but the
only solutions were patches to OpenBSD 3.7 or applied to specific
non-em drivers (xl, fxp). Does anyone have an idea what might be
causing this?



OpenBSD 3.8 (GENERIC) #138: Sat Sep 10 15:41:37 MDT 2005
[EMAIL PROTECTED]:/usr/src/sys/arch/i386/compile/GENERIC
cpu0: Intel(R) Celeron(R) CPU 2.60GHz (GenuineIntel 686-class) 2.60 GHz
cpu0: FPU,V86,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,SBF,CNXT-ID
real mem  = 266838016 (260584K)
avail mem = 236597248 (231052K)
using 3282 buffers containing 13443072 bytes (13128K) of memory
mainbus0 (root)
bios0 at mainbus0: AT/286+(8b) BIOS, date 03/09/05, BIOS32 rev. 0 @ 0xfb770
apm0 at bios0: Power Management spec V1.2
apm0: AC on, battery charge unknown
apm0: flags 70102 dobusy 1 doidle 1
pcibios0 at bios0: rev 2.1 @ 0xf/0xdf64
pcibios0: PCI IRQ Routing Table rev 1.0 @ 0xfde80/224 (12 entries)
pcibios0: PCI Exclusive IRQs: 5 9 10 11
pcibios0: no compatible PCI ICU found: ICU vendor 0x8086 product 0x25a1
pcibios0: PCI bus #3 is the last bus
bios0: ROM list: 0xc/0x8000 0xc8000/0x1000 0xc9000/0x1000
cpu0 at mainbus0
pci0 at mainbus0 bus 0: configuration mode 1 (no bios)
pchb0 at pci0 dev 0 function 0 Intel 82875P Host rev 0x02
ppb0 at pci0 dev 3 function 0 Intel 82875P PCI-CSA rev 0x02
pci1 at ppb0 bus 1
em0 at pci1 dev 1 function 0 Intel PRO/1000CT (82547GI) rev 0x00: irq 5, address: 00:30:48:84:ac:3c
ppb1 at pci0 dev 28 function 0 Intel 6300ESB PCIX rev 0x02
pci2 at ppb1 bus 2
skc0 at pci2 dev 1 function 0 Schneider & Koch SK-9821 v2.0 rev 0x20: irq 9
skc0: Marvell Yukon (0x1)
sk0 at skc0 port A: address 00:00:5a:9f:31:b4
eephy0 at sk0 phy 0: Marvell 88E1011 Gigabit PHY, rev. 3
uhci0 at pci0 dev 29 function 0 Intel 6300ESB USB rev 0x02: irq 10
usb0 at uhci0: USB revision 1.0
uhub0 at usb0
uhub0: Intel UHCI root hub, rev 1.00/1.00, addr 1
uhub0: 2 ports with 2 removable, self powered
uhci1 at pci0 dev 29 function 1 Intel 6300ESB USB rev 0x02: irq 11
usb1 at uhci1: USB revision 1.0
uhub1 at usb1
uhub1: Intel UHCI root hub, rev 1.00/1.00, addr 1
uhub1: 2 ports with 2 removable, self powered
Intel 6300ESB WDT rev 0x02 at pci0 dev 29 function 4 not configured
Intel 6300ESB APIC rev 0x02 at pci0 dev 29 function 5 not configured
ehci0 at pci0 dev 29 function 7 Intel 6300ESB USB rev 0x02: irq 9
usb2 at ehci0: USB revision 2.0
uhub2 at usb2
uhub2: Intel EHCI root hub, rev 2.00/1.00, addr 1
uhub2: 4 ports with 4 removable, self powered
ppb2 at pci0 dev 30 function 0 Intel 82801BA AGP rev 0x0a
pci3 at ppb2 bus 3
vga1 at pci3 dev 9 function 0 ATI Rage XL rev 0x27
wsdisplay0 at vga1 mux 1: console (80x25, vt100 emulation)
wsdisplay0: screen 1-5 added (80x25, vt100 emulation)
em1 at pci3 dev 10 function 0 Intel PRO/1000MT (82541GI) rev 0x00: irq 11, address: 00:30:48:84:ac:3d
ichpcib0 at pci0 dev 31 function 0 Intel 6300ESB LPC rev 0x02
pciide0 at pci0 dev 31 function 1 Intel 6300ESB IDE rev 0x02: DMA, channel 0 configured to compatibility, channel 1 configured to compatibility
pciide0: channel 0 disabled (no drives)
wd0 at pciide0 channel 1 drive 0: SanDisk SDCFB-512
wd0: 4-sector PIO, LBA, 488MB, 1000944 sectors
wd0(pciide0:1:0): using PIO mode 4
Intel 6300ESB SMBus rev 0x02 at pci0 dev 31 function 3 not configured
isa0 at ichpcib0
isadma0 at isa0
pckbc0 at isa0 port 0x60/5
pckbd0 at pckbc0 (kbd slot)
pckbc0: using irq 1 for kbd slot
wskbd0 at pckbd0: console keyboard, using wsdisplay0
pmsi0 at pckbc0 (aux slot)
pckbc0: using irq 12 for aux slot
wsmouse0 at pmsi0 mux 0
pcppi0 at isa0 port 0x61
midi0 at pcppi0: PC speaker
spkr0 at pcppi0
sysbeep0 at pcppi0
lpt0 at isa0 port 0x378/4 irq 7
lm0 at isa0 port 0x290/8: W83627HF
npx0 at isa0 port 0xf0/16: using exception 16
pccom0 at isa0 port 0x3f8/8 irq 4: ns16550a, 16 byte fifo
pccom1 at isa0 port 0x2f8/8 irq 3: ns16550a, 16 byte fifo
fdc0 at isa0 port 0x3f0/6 irq 6 drq 2
biomask ef45 netmask ef65 ttymask ffe7
pctr: user-level cycle counter enabled
dkcsum: wd0 matches BIOS drive 0x80
root on wd0a
rootdev=0x0 rrootdev=0x300 rawdev=0x302


--
Jason Dixon
DixonGroup Consulting
http://www.dixongroup.net



Re: ping: sendto: No buffer space available

2006-07-14 Thread Kian Mohageri
On 7/14/06, Jason Dixon [EMAIL PROTECTED] wrote:

 We have an OpenBSD 3.8 firewall that has been in production for the
 last six months.  Until the last week or two, everything has been
 great.  Recently while diagnosing a problem with the bonded T1 pair,
 I noticed the following error while pinging the gateway:

 ping: sendto: No buffer space available

 This always coincided with a very high spike (1000-3000ms) in
 latency, which would usually go back down to ~0ms and operate
 normally.  The interface in question is an Intel em connected to a
 Cisco 2950 trunk.  The other two interfaces (em1, sk0) are working
 fine.  The LAN interface (em1) pushes *much* more data, as it routes
 between 13 internal VLANs.  I've also had another box perform the
 same ping test concurrently to confirm this isn't a problem with the
 gateway.




This is the same behavior I would see when trying to ping out our internal
em(4) interface when the transmit queue filled up (or it was thought to be
full).  You can confirm that is the case by checking ifconfig (look for
OACTIVE).
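
The check itself is just the flags line of ifconfig: if OACTIVE is listed
there while the pings are failing, the driver believes its transmit ring
is full. A sketch, using em0 from the original report:

# OACTIVE appears in the flags=<...> list when the tx queue is marked full
ifconfig em0 | head -1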

But, does that interface ever fail completely and require an interface
restart, or just spike?

Kian



Re: ping: sendto: No buffer space available (SOLVED)

2006-07-14 Thread Jason Dixon

On Jul 14, 2006, at 2:32 PM, Kian Mohageri wrote:


On 7/14/06, Jason Dixon [EMAIL PROTECTED] wrote:


We have an OpenBSD 3.8 firewall that has been in production for the
last six months.  Until the last week or two, everything has been
great.  Recently while diagnosing a problem with the bonded T1 pair,
I noticed the following error while pinging the gateway:

ping: sendto: No buffer space available

This always coincided with a very high spike (1000-3000ms) in
latency, which would usually go back down to ~0ms and operate
normally.  The interface in question is an Intel em connected to a
Cisco 2950 trunk.  The other two interfaces (em1, sk0) are working
fine.  The LAN interface (em1) pushes *much* more data, as it routes
between 13 internal VLANs.  I've also had another box perform the
same ping test concurrently to confirm this isn't a problem with the
gateway.


This is the same behavior I would see when trying to ping out our
internal em(4) interface when the transmit queue filled up (or it was
thought to be full).  You can confirm that is the case by checking
ifconfig (look for OACTIVE).


I'm not seeing OACTIVE on em0, vlan1 (vlandev em0) or carp1 (carpdev
vlan1).



But, does that interface ever fail completely and require an interface
restart, or just spike?


I found the problem.  It ended up being excessive (read: BAD)  
queueing on my part.  In order to further optimize VoIP/VPN traffic  
for some remote offices, I recently removed the borrow option from  
some parent CBQ queues.  This was causing much more packet buffering  
than you would normally see.  It makes me wonder though... how does  
one evaluate the buffers on an interface used by altq?
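
The closest thing to that with ALTQ seems to be the per-queue statistics
pfctl reports, which include drops and the instantaneous queue length for
each queue. A sketch:

# packets, bytes, drops and qlength for every ALTQ queue
pfctl -v -s queue

# the same, refreshed every few seconds, so buffering spikes are visible
pfctl -vv -s queue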


Thanks,

--
Jason Dixon
DixonGroup Consulting
http://www.dixongroup.net



ping: sendto: No buffer space available

2006-02-15 Thread Oliver J. Morais
i386, OpenBSD 3.9-beta (GENERIC) #597: Sun Feb  5 21:14:35 MST 2006

Just played around with ping and saw the following:

Pinging from box A (10.0.0.13) to box B (10.0.0.5) with
sudo ping -f -s 1024 10.0.0.5
works fine. Fire up another xterm and start the same ping a
second time - wow.

[...]
ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
.ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
.ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
.ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
.ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
.ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
..ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
ping: sendto: No buffer space available
ping: wrote 10.0.0.5 1032 chars, ret=-1
[...]

And so on... As soon as I kill one flood ping, the other runs fine
again. Box A has an fxp0, box B an re0, connected via a cheap gigabit
switch. (Just drop a line if you need the full dmesg.)

Not that I'm too concerned, since this isn't a real-world problem for
me, but hey, who knows ;-)
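
A crude way to see where the second flood ping backs things up is to
watch the mbuf and interface counters from a third terminal while both
pings run. A sketch, using fxp0 from box A as mentioned above:

# once a second: mbuf cluster usage plus fxp0 packet/error counters
while :; do
    netstat -m | grep 'mbuf clusters'
    netstat -in | grep '^fxp0'
    sleep 1
done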

kind regards,
oliver