On 28.11.2017 09:19, Stuart Henderson wrote:
Correct that "no state" is not needed (and generally not wanted -
states are more efficient for existing traffic flows, automatically
match ICMP messages that directly relate to the flow, and validate TCP
sequence numbers).
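To make that contrast concrete, a minimal sketch (interface and address are placeholders, not from the ruleset below):

```
# stateful (the default): one state entry is created on the first
# packet and then matches both directions of the flow, picks up
# related ICMP messages, and validates TCP sequence numbers
pass in on em0 proto tcp to 192.0.2.10 port 80

# stateless: every single packet is re-evaluated against the
# ruleset, with none of the checks above - generally not wanted
pass in on em0 proto tcp to 192.0.2.10 port 80 no state
```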
The problem with the rules you've shown is the different names for the
"in" and "out" queues. There's one state table entry for the
connection, not separate ones for in+out.
Use the same queue name instead, something like this:
queue root on $dmzIf bandwidth 12M
queue qWeb on $dmzIf parent root bandwidth 499K min 100K max 1M
..
queue root on $extIf bandwidth 1M
queue qWeb on $extIf parent root bandwidth 300K min 30K
..
match proto tcp to $webserver port 80 set queue (qWeb, qPri) set prio (4,5)
Though, "set prio" won't do much here unless the Ethernet interface
bandwidth (not the queue bandwidth) is maxed out.
Stuart,
here's the detail of what happened to each queue with your suggestion, after
one curl call.
This snippet
queue root on $dmzIf bandwidth 12M
queue qDef on $dmzIf parent root bandwidth 11M default
queue qWeb on $dmzIf parent root bandwidth 1M
queue root on $extIf bandwidth 1M
queue qDef on $extIf parent root bandwidth 700K default
queue qWeb on $extIf parent root bandwidth 300K
match proto tcp to $webserver port 80 set queue qWeb
pass in log on $extIf proto tcp to $webserver port $webports
pass out log on $dmzIf proto tcp to $webserver port $webports
Results in
# pfctl -vf /etc/pf.conf
extIf = "em0"
dmzIf = "vether1"
webserver = "192.168.7.2"
webports = "80"
queue root on vether1 bandwidth 12M
queue qDef parent root bandwidth 11M default
queue qWeb parent root bandwidth 1M
queue root on em0 bandwidth 1M
queue qDef parent root bandwidth 700K default
queue qWeb parent root bandwidth 300K
match inet proto tcp from any to 192.168.7.2 port = 80 set ( queue qWeb )
pass in log on em0 inet proto tcp from any to 192.168.7.2 port = 80 flags S/SA
pass out log on vether1 inet proto tcp from any to 192.168.7.2 port = 80 flags S/SA
# pfctl -vqs queue
queue root on vether1 bandwidth 12M
[ pkts: 0 bytes: 0 dropped pkts: 0 bytes: 0 ]
[ qlength: 0/ 50 ]
queue qDef parent root bandwidth 11M default
[ pkts: 17 bytes: 5432 dropped pkts: 0 bytes: 0 ]
[ qlength: 0/ 50 ]
queue qWeb parent root bandwidth 1M
[ pkts: 0 bytes: 0 dropped pkts: 0 bytes: 0 ]
[ qlength: 0/ 50 ]
queue root on em0 bandwidth 1M
[ pkts: 0 bytes: 0 dropped pkts: 0 bytes: 0 ]
[ qlength: 0/ 50 ]
queue qDef parent root bandwidth 700K default
[ pkts: 0 bytes: 0 dropped pkts: 0 bytes: 0 ]
[ qlength: 0/ 50 ]
queue qWeb parent root bandwidth 300K
[ pkts: 20 bytes: 22968 dropped pkts: 0 bytes: 0 ]
[ qlength: 0/ 50 ]
Then this
queue root on $dmzIf bandwidth 12M
queue qDef on $dmzIf parent root bandwidth 11M default
queue qWeb on $dmzIf parent root bandwidth 1M
queue root on $extIf bandwidth 1M
queue qDef on $extIf parent root bandwidth 700K default
queue qWeb on $extIf parent root bandwidth 300K
match proto tcp to $webserver port 80 set queue qWeb
match proto tcp from $webserver port 80 set queue qWeb
pass in log on $extIf proto tcp to $webserver port $webports
pass out log on $dmzIf proto tcp to $webserver port $webports
Results in
# pfctl -vf /etc/pf.conf
extIf = "em0"
dmzIf = "vether1"
webserver = "192.168.7.2"
webports = "80"
queue root on vether1 bandwidth 12M
queue qDef parent root bandwidth 11M default
queue qWeb parent root bandwidth 1M
queue root on em0 bandwidth 1M
queue qDef parent root bandwidth 700K default
queue qWeb parent root bandwidth 300K
match inet proto tcp from any to 192.168.7.2 port = 80 set ( queue qWeb )
match inet proto tcp from 192.168.7.2 port = 80 to any set ( queue qWeb )
pass in log on em0 inet proto tcp from any to 192.168.7.2 port = 80 flags S/SA
pass out log on vether1 inet proto tcp from any to 192.168.7.2 port = 80 flags S/SA
# pfctl -vqs queue
queue root on vether1 bandwidth 12M
[ pkts: 0 bytes: 0 dropped pkts: 0 bytes: 0 ]
[ qlength: 0/ 50 ]
queue qDef parent root bandwidth 11M default
[ pkts: 24 bytes: 5834 dropped pkts: 0 bytes: 0 ]
[ qlength: 0/ 50 ]
queue qWeb parent root bandwidth 1M
[ pkts: 0 bytes: 0 dropped pkts: 0 bytes: 0 ]
[ qlength: 0/ 50 ]
queue root on em0 bandwidth 1M
[ pkts: 0 bytes: 0 dropped pkts: 0 bytes: 0 ]
[ qlength: 0/ 50 ]
queue qDef parent root bandwidth 700K default
[ pkts: 1 bytes: 60 dropped pkts: 0 bytes: 0 ]
[ qlength: 0/ 50 ]
queue qWeb parent root bandwidth 300K
[ pkts: 20 bytes: 22968 dropped pkts: 0 bytes: 0 ]
[ qlength: 0/ 50 ]
So the only thing I seem to be able to control is the response.
Now, while one can argue there is no need to queue incoming traffic, since it
has already consumed the bandwidth, I would ask: what about source quenches
for large upload requests?
Wouldn't those be triggered by queueing the requests?
--
Mit freundlichen Grüßen/Best regards
Mario Theodoridis