Re: how to submit bug report regarding pf queueing?

2016-03-09 Thread Marko Cupać
On Wed, 9 Mar 2016 21:28:10 +0200
Mihai Popescu  wrote:

> > -
> > queue download on $if_int bandwidth 10M max 10M
> > queue ssh  parent download bandwidth 1M
> > queue web  parent download bandwidth 8M
> > queue bulk parent download bandwidth 1M default
> >
> > match to   port ssh        set queue ssh
> > match from port ssh        set queue ssh
> > match to   port { 80 443 } set queue web
> > match from port { 80 443 } set queue web
> > -
>
> Pardon me, but are you assigning both in and out ssh packets to the
> same ssh queue with this?
>

I do. By monitoring systat queues I noticed that assigning a queue to
the request packet that enters the internal interface puts the
corresponding reply traffic into that queue as well, which is logical
if we take into account that by default a request creates state and,
by means of keeping state, the reply belongs to the same state. If I
assigned the queue only to traffic leaving the internal interface,
only connections initiated from the Internet (the ones that create
state there) would be put into that queue, which never happens in my
case: I don't have an ssh server on my LAN to which I grant access
from the Internet. Moreover, that would not be what I am trying to
achieve. I want to improve my experience of connecting to ssh servers
on the Internet, so that typing in a remote ssh session does not lag
when active http transfers eat up all my bandwidth.
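
In other words, with a rule like this on the internal interface (a
minimal sketch using the macros from my ruleset, with proto tcp added
for clarity):

 match in on $if_int proto tcp to any port ssh set queue ssh

the state created by the request remembers the queue, so the reply
packets leaving $if_int toward the LAN are shaped by the ssh queue as
well.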
--
Before enlightenment - chop wood, draw water.
After  enlightenment - chop wood, draw water.

Marko Cupać
https://www.mimar.rs/



Re: how to submit bug report regarding pf queueing?

2016-03-09 Thread Marko Cupać
On Thu, 10 Mar 2016 13:28:11 +1100
Darren Tucker  wrote:

> On Thu, Mar 10, 2016 at 1:38 AM, Marko Cupać wrote:
> [...]
> > queue download on $if_int bandwidth 10M max 10M
>
> What's $if_int set to?
>
> I played with queueing recently and initially used interface group
> names instead of interface names ("queue foo on egress ...") since
> that's how the rest of my rules are written. While the ruleset loads
> fine, it doesn't actually do anything, because queues must be assigned
> to real interface names (quoth pf.conf(5): "The root queue must
> specifically reference an interface").

Thanks for pointing out this important information; perhaps it will
help someone else, but in my case:

# INTERFACE MACROS
if_int= "re0"
if_ext= "pppoe0"

Regards,
--
Before enlightenment - chop wood, draw water.
After  enlightenment - chop wood, draw water.

Marko Cupać
https://www.mimar.rs/



Re: how to submit bug report regarding pf queueing?

2016-03-09 Thread Darren Tucker
On Thu, Mar 10, 2016 at 1:38 AM, Marko Cupać  wrote:
[...]
> queue download on $if_int bandwidth 10M max 10M

What's $if_int set to?

I played with queueing recently and initially used interface group
names instead of interface names ("queue foo on egress ...") since
that's how the rest of my rules are written. While the ruleset loads
fine, it doesn't actually do anything, because queues must be assigned
to real interface names (quoth pf.conf(5): "The root queue must
specifically reference an interface").
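
For example, this parses and loads but silently shapes nothing (a
sketch, with em0 standing in for a real egress interface):

 queue std on egress bandwidth 10M   # interface group: accepted, ignored

while naming the actual interface makes the root queue take effect:

 queue std on em0 bandwidth 10M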

--
Darren Tucker (dtucker at zip.com.au)
GPG key 8FF4FA69 / D9A3 86E9 7EEE AF4B B2D4  37C9 C982 80C7 8FF4 FA69
Good judgement comes with experience. Unfortunately, the experience
usually comes from bad judgement.



Re: how to submit bug report regarding pf queueing?

2016-03-09 Thread Christopher Sean Hilton
On Wed, Mar 09, 2016 at 02:45:36PM -0700, Daniel Melameth wrote:
> On Wed, Mar 9, 2016 at 10:58 AM, Christopher Sean Hilton wrote:
> > I'm using queuing to alleviate bufferbloat and make my son's gaming
> > performance better. I'm on an asymetric cablemodem connection here in
> > the U.S. My download is 100M and my upload is 40M. I use a queue
> > definition similar to this:
> >
> >  queue ext_iface on $ext_if bandwidth 1000M max 1000M qlimit 512
> 
> This will mostly be a no-op.  Your max MUST be at or below your real
> bandwidth (not the interface bandwidth) and your child queues will need
> to reflect this accordingly.
> 

For me that no-op line is a reminder of what I'm working with. It's
also left over from an odd setup I once tested.

> > I'm trying to limit the bufferbloat, so the depth of the queue is very
> > important. I chose values for qlimit that keep the time a packet would
> > spend traversing a queue down in the 0.015s (15ms) range:
> >
> >  40Mbit/s / ( 8 bit/byte * 1500 byte/packet) * 0.015s = 50 packets
> >
> > I used 48 because I'm keen on multiples of 16.
> 
> This will be difficult to get right with pf.  Does the game always use
> 1500-byte packets?  Ultimately you'll want a small queue limit (expect
> to see more dropped packets).
> 

That's just an example. In my case I derived the actual packet size
and queue depth by running "systat queues".

Thanks for the advice.
-- 
Chris

  __o  "All I was trying to do was get home from work."
_`\<,_   -Rosa Parks
___(*)/_(*).___o..___..o...ooO..._
Christopher Sean Hilton[chris/at/vindaloo/dot/com]



Re: how to submit bug report regarding pf queueing?

2016-03-09 Thread Daniel Melameth
On Wed, Mar 9, 2016 at 10:58 AM, Christopher Sean Hilton wrote:
> I'm using queuing to alleviate bufferbloat and make my son's gaming
> performance better. I'm on an asymetric cablemodem connection here in
> the U.S. My download is 100M and my upload is 40M. I use a queue
> definition similar to this:
>
>  queue ext_iface on $ext_if bandwidth 1000M max 1000M qlimit 512

This will mostly be a no-op.  Your max MUST be at or below your real
bandwidth (not the interface bandwidth) and your child queues will need
to reflect this accordingly.

> I'm trying to limit the bufferbloat, so the depth of the queue is very
> important. I chose values for qlimit that keep the time a packet would
> spend traversing a queue down in the 0.015s (15ms) range:
>
>  40Mbit/s / ( 8 bit/byte * 1500 byte/packet) * 0.015s = 50 packets
>
> I used 48 because I'm keen on multiples of 16.

This will be difficult to get right with pf.  Does the game always use
1500-byte packets?  Ultimately you'll want a small queue limit (expect
to see more dropped packets).
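
Putting both points together, for a link measured at 40M up I'd expect
something closer to this (a sketch; the names and the split are
illustrative):

 queue ext_iface on $ext_if bandwidth 40M max 40M qlimit 48
   queue games parent ext_iface bandwidth 10M min 5M
   queue bulk  parent ext_iface bandwidth 30M default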



Re: how to submit bug report regarding pf queueing?

2016-03-09 Thread Mihai Popescu
> -
> queue download on $if_int bandwidth 10M max 10M
> queue ssh  parent download bandwidth 1M
> queue web  parent download bandwidth 8M
> queue bulk parent download bandwidth 1M default
>
> match to   port ssh        set queue ssh
> match from port ssh        set queue ssh
> match to   port { 80 443 } set queue web
> match from port { 80 443 } set queue web
> -

Pardon me, but are you assigning both in and out ssh packets to the
same ssh queue with this?



Re: unbound eats up buffer space

2016-03-09 Thread Christopher Sean Hilton
On Wed, Mar 09, 2016 at 02:04:10PM +0100, Marko Cupać wrote:
> On Tue, 8 Mar 2016 12:24:59 +0100
> Otto Moerbeek  wrote:
>
> > Give unbound more file descriptors; put in login.conf:
> It's already there, by default on 5.8.
>
> > And do not forget to set the class of the user _unbound to unbound:
> It's already set by default on 5.8.
>
>
> On Tue, 8 Mar 2016 07:36:06 -0600
> Brian Conway  wrote:
>
> > Are you using pf queues? I most frequently see that happen when
> > there's no space left in a queue. `pfctl -v -s queue`
> That's probably it. I am going to try to create a separate queue for
> DNS traffic originating from the firewall.

I saw this on one of my machines. Correctly or incorrectly, I deduced
that it was caused by unbound losing the ability to send packets on
its interface after a dhclient-controlled interface state transition.
These transitions happened at DHCP lease renew time. I run isc_bind
behind a cable modem and had the same issue there. Isc_bind listens on
each interface address individually:

 $ netstat -an | grep "\.53 "
 tcp  0  0  169.254.0.1.53  *.*  LISTEN
 tcp  0  0  127.0.0.1.53    *.*  LISTEN
 udp  0  0  169.254.0.1.53  *.*
 udp  0  0  127.0.0.1.53    *.*

Rather than:

 $ netstat -an | grep "\.53 "
 tcp  0  0  *.53  *.*  LISTEN
 udp  0  0  *.53  *.*

For isc_bind at least, when dhclient renewed the IP address, the
listening socket at 169.254.0.1:53 became invalid and the query socket
at 169.254.0.1:53 could no longer send packets.
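
If unbound is binding per-address the same way, pointing it at the
wildcard address instead might sidestep the renew problem (a sketch;
on OpenBSD the stock config is /var/unbound/etc/unbound.conf):

 server:
   interface: 0.0.0.0

so the listening socket no longer depends on any one address.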

YMMV

--
Chris

  __o  "All I was trying to do was get home from work."
_`\<,_   -Rosa Parks
___(*)/_(*).___o..___..o...ooO..._
Christopher Sean Hilton[chris/at/vindaloo/dot/com]




Re: how to submit bug report regarding pf queueing?

2016-03-09 Thread Christopher Sean Hilton
On Wed, Mar 09, 2016 at 03:38:30PM +0100, Marko Cupać wrote:
> Hi,
>

[... snip ...]

I've also been trying to get help with queuing. Perhaps we can help
each other out.

I'm using queuing to alleviate bufferbloat and make my son's gaming
performance better. I'm on an asymetric cablemodem connection here in
the U.S. My download is 100M and my upload is 40M. I use a queue
definition similar to this:

 queue ext_iface on $ext_if bandwidth 1000M max 1000M qlimit 512
   queue download  parent ext_iface bandwidth 120M max 120M qlimit 128 default
   queue ext_extra parent ext_iface bandwidth 880M max 880M qlimit 384

 queue int_iface on $int_if bandwidth 1000M max 1000M qlimit 512
   queue upload   parent int_iface bandwidth  40M max  40M qlimit 48
   queue int_internal parent int_iface bandwidth 960M max 960M qlimit 464

I found several things. First, all queues seem to have an implied
parent queue based on their interface, with a bandwidth equal to the
interface speed. Thus:

 queue download on $ext_if bandwidth 120M default

really meant:

 queue download on $ext_if bandwidth 120M max 1000M default

hence my specification of the interface queue.

I'm trying to limit the bufferbloat, so the depth of the queue is very
important. I chose values for qlimit that keep the time a packet would
spend traversing a queue down in the 0.015s (15ms) range:

 40Mbit/s / ( 8 bit/byte * 1500 byte/packet) * 0.015s = 50 packets

I used 48 because I'm keen on multiples of 16.
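
Checking the arithmetic (15ms worth of full-size 1500-byte packets at
40Mbit/s):

 $ echo '40000000 * 0.015 / (8 * 1500)' | bc -l
 50.00000000000000000000

so a qlimit of 48 keeps a full queue just under 15ms of added delay.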

Have you tried anything like this?

--
Chris

  __o  "All I was trying to do was get home from work."
_`\<,_   -Rosa Parks
___(*)/_(*).___o..___..o...ooO..._
Christopher Sean Hilton[chris/at/vindaloo/dot/com]




Re: how to submit bug report regarding pf queueing?

2016-03-09 Thread Christopher Sean Hilton
On Wed, Mar 09, 2016 at 03:38:30PM +0100, Marko Cupać wrote:
> Hi,
>

[ ...snip... ]

> So, what exactly do I need to do to submit a bug report? Any outputs of
> any commands? Logs? I understand developers won't take my word for it,
> but I simply don't know how to prove it, except by watching the output
> of systat queues and monitoring queue bandwidth in real time.
> 

You can use the sendbug(1) utility to report bugs to the project. As
far as bugs in queueing go, I think it's going to be a hard report to
write well.
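
One approach might be to attach the ruleset together with queue
counters captured while the problem is visible, e.g.:

 # pfctl -v -s queue
 # systat -b queues

plus a dmesg and a note on how the competing transfers were generated.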

-- 
Chris

  __o  "All I was trying to do was get home from work."
_`\<,_   -Rosa Parks
___(*)/_(*).___o..___..o...ooO..._
Christopher Sean Hilton[chris/at/vindaloo/dot/com]



how to submit bug report regarding pf queueing?

2016-03-09 Thread Marko Cupać
Hi,

Over the last few months, in a few separate threads here on misc@, I
have been trying to call attention to the fact that the pf queueing
mechanism does not shape traffic as it should, at least on my APU box.

It took me some time to test hundreds of possible configurations on 5.8,
both amd64 and i386, and I have come to the conclusion that it
definitely can't do even simple shaping, like splitting a parent queue
into 3 child queues, giving each of them the parent's full bandwidth
when the other queues are empty, but throttling them appropriately when
the other queues demand their share:

-
queue download on $if_int bandwidth 10M max 10M
 queue ssh  parent download bandwidth 1M
 queue web  parent download bandwidth 8M
 queue bulk parent download bandwidth 1M default

match to   port ssh        set queue ssh
match from port ssh        set queue ssh
match to   port { 80 443 } set queue web
match from port { 80 443 } set queue web
-

In the above configuration, an ftp transfer (bulk, default) never gets
throttled, so an http transfer (web) never gets its 8M when it kicks in.

If OpenBSD didn't have all these great, elegantly configurable daemons
such as carp/pfsync, openbgpd, openospfd, ipsec and npppd, which have
been serving me for a decade without problems, I wouldn't keep
returning here asking for help, even though I have been repeatedly
ignored and sometimes even insulted for doing it; I would look for
alternatives instead. But I know there aren't (m)any alternatives that
combine all these functionalities in a single, stable, secure, elegant
OS.

So, what exactly do I need to do to submit a bug report? Any outputs of
any commands? Logs? I understand developers won't take my word for it,
but I simply don't know how to prove it, except by watching the output
of systat queues and monitoring queue bandwidth in real time.

Regards,
--
Before enlightenment - chop wood, draw water.
After  enlightenment - chop wood, draw water.

Marko Cupać
https://www.mimar.rs/



Re: bgpd network connected

2016-03-09 Thread Matt Schwartz
It looks like I spoke too soon, because routes are not being added at
all when using rdomains. It doesn't matter whether I use "network inet
connected" or specify "network x.x.x.x/x"; the rib comes up empty for
the rdomain. Bgpctl won't let you inject routes into any routing table
other than the default. Frustrating, because I'm so close to getting
BGP MPLS VPN to work. Of course it still could be me, but I've looked
at this six ways to Sunday and I'm at a loss.
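
For context, the sort of rdomain section I'm working from looks roughly
like this (a sketch; the AS, route targets and interface are
illustrative):

 rdomain 1 {
  descr "customer1"
  rd 65001:1
  import-target rt 65001:1
  export-target rt 65001:1
  depend on mpe0
  network inet connected
 }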

> On Mar 9, 2016 6:00 AM, "Tony Sarendal" wrote:
>
> >
> >
> > 2016-03-08 15:38 GMT+01:00 Matt Schwartz:
> >>
> >> I did not even know it was broken?
> >>
> >> On Mar 8, 2016 1:26 AM, "Tony Sarendal" wrote:
> >> >
> >> > Is there any chance of getting "network inet connected" fixed in 5.9?
> >> >
> >> > Regards Tony
> >>
> >
> >
> > Adding a new vlan interface:
> >
> > beer# cat /etc/bgpd.conf
> > AS 65001
> > network inet connected
> > beer# bgpctl show rib
> > flags: * = Valid, > = Selected, I = via IBGP, A = Announced, S = Stale
> > origin: i = IGP, e = EGP, ? = Incomplete
> >
> > flags destination          gateway          lpref   med aspath origin
> > AI*>  172.29.1.0/24        0.0.0.0            100     0 i
> > beer# ifconfig vlan69 create
> > beer# ifconfig vlan69 1.1.1.1/30 vlandev em0 vlan 69 up
> > beer# bgpctl show rib
> > flags: * = Valid, > = Selected, I = via IBGP, A = Announced, S = Stale
> > origin: i = IGP, e = EGP, ? = Incomplete
> >
> > flags destination          gateway          lpref   med aspath origin
> > AI*>  172.29.1.0/24        0.0.0.0            100     0 i
> > beer# /etc/rc.d/bgpd restart
> > beer# bgpctl show rib
> > flags: * = Valid, > = Selected, I = via IBGP, A = Announced, S = Stale
> > origin: i = IGP, e = EGP, ? = Incomplete
> >
> > flags destination          gateway          lpref   med aspath origin
> > AI*>  1.1.1.0/30           0.0.0.0            100     0 i
> > AI*>  172.29.1.0/24        0.0.0.0            100     0 i
> > beer#
> >
> >
> > Regards Tony



Re: Upgrade to 5.8 broke equal-cost multipath configuration

2016-03-09 Thread Kevin Chadwick
> I did file a bug report, but just in case someone is interested, I
> think the issue is in rtable_match() (rtable.c).
>
> Instead of using the dest address to compute the hash that is used to
> choose the route, that function uses the radix node's dest address
> (which is always 0 in my case, as it represents the default route).
> 

I'm not sure if it is related, but round-robin trunk on two dc Ethernet
cards seemed to break for me upon upgrading to 5.8.

-- 

KISSIS - Keep It Simple So It's Securable



Re: unbound eats up buffer space

2016-03-09 Thread Marko Cupać
On Tue, 8 Mar 2016 12:24:59 +0100
Otto Moerbeek  wrote:

> Give unbound more file descriptors; put in login.conf:
It's already there, by default on 5.8.

> And do not forget to set the class of the user _unbound to unbound:
It's already set by default on 5.8.


On Tue, 8 Mar 2016 07:36:06 -0600
Brian Conway  wrote:

> Are you using pf queues? I most frequently see that happen when
> there's no space left in a queue. `pfctl -v -s queue`
That's probably it. I am going to try to create a separate queue for
DNS traffic originating from the firewall.
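
Something along these lines is what I have in mind (a first sketch;
"upload" stands for whatever root queue the firewall's own traffic
leaves through):

 queue dns parent upload bandwidth 512K min 256K
 match out proto { tcp udp } from self to any port 53 set queue dns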
--
Before enlightenment - chop wood, draw water.
After  enlightenment - chop wood, draw water.

Marko Cupać
https://www.mimar.rs/



Re: bgpd network connected

2016-03-09 Thread Tony Sarendal
2016-03-08 15:38 GMT+01:00 Matt Schwartz :

> I did not even know it was broken?
>
> On Mar 8, 2016 1:26 AM, "Tony Sarendal" wrote:
> >
> > Is there any chance of getting "network inet connected" fixed in 5.9?
> >
> > Regards Tony
>
>

Adding a new vlan interface:

beer# cat /etc/bgpd.conf
AS 65001
network inet connected
beer# bgpctl show rib
flags: * = Valid, > = Selected, I = via IBGP, A = Announced, S = Stale
origin: i = IGP, e = EGP, ? = Incomplete

flags destination          gateway          lpref   med aspath origin
AI*>  172.29.1.0/24        0.0.0.0            100     0 i
beer# ifconfig vlan69 create
beer# ifconfig vlan69 1.1.1.1/30 vlandev em0 vlan 69 up
beer# bgpctl show rib
flags: * = Valid, > = Selected, I = via IBGP, A = Announced, S = Stale
origin: i = IGP, e = EGP, ? = Incomplete

flags destination          gateway          lpref   med aspath origin
AI*>  172.29.1.0/24        0.0.0.0            100     0 i
beer# /etc/rc.d/bgpd restart
beer# bgpctl show rib
flags: * = Valid, > = Selected, I = via IBGP, A = Announced, S = Stale
origin: i = IGP, e = EGP, ? = Incomplete

flags destination          gateway          lpref   med aspath origin
AI*>  1.1.1.0/30           0.0.0.0            100     0 i
AI*>  172.29.1.0/24        0.0.0.0            100     0 i
beer#


Regards Tony