On 28.11.2017 09:19, Stuart Henderson wrote:
Correct that "no state" is not needed (and generally not wanted - states are more efficient for existing traffic flows, automatically match ICMP messages that directly relate to the flow, and validate TCP sequence numbers).

The problem with the rules you've shown is the different names for the "in" and "out" queues. There's one state table entry for the connection, not separate ones for in+out.

Use the same queue name instead, something like this:

queue root on $dmzIf bandwidth 12M
    queue qWeb on $dmzIf parent root bandwidth 499K min 100K max 1M
..

queue root on $extIf bandwidth 1M
    queue qWeb on $extIf parent root bandwidth 300K min 30K
..

match proto tcp to $webserver port 80 set queue (qWeb, qPri) set prio (4,5)

Though, "set prio" won't do much here unless the Ethernet interface bandwidth (not the queue bandwidth) is maxed out.

Thanks for your responses, Stuart.

I tried that, but managed to only get one direction to work.

I must really be missing something here.
In my desperation I tried the example from "The Book of PF", 3rd edition:
Chapter 7. Traffic Shaping with Queues and Priorities
  Always-On Priority and Queues for Traffic Shaping
    The DMZ Network, Now with Traffic Shaping

Which looks like this:

queue ext on $ext_if bandwidth 2M
        queue ext_main parent ext bandwidth 500K default
        queue ext_web parent ext bandwidth 500K
        queue ext_udp parent ext bandwidth 400K
        queue ext_mail parent ext bandwidth 600K

queue dmz on $dmz_if bandwidth 100M
        queue ext_dmz parent dmz bandwidth 2M
                queue ext_dmz_web parent ext_dmz bandwidth 800K default
                queue ext_dmz_udp parent ext_dmz bandwidth 200K
                queue ext_dmz_mail parent ext_dmz bandwidth 1M
        queue dmz_main parent dmz bandwidth 25M
        queue dmz_web parent dmz bandwidth 25M
        queue dmz_udp parent dmz bandwidth 20M
        queue dmz_mail parent dmz bandwidth 20M

and the extract for the web traffic, without the internal net, is:

pass in on $ext_if proto tcp to $webserver port $webports set queue ext_web
pass out on $dmz_if proto tcp to $webserver port $webports \
    set queue ext_dmz_web

I had to add NAT to make my test environment work.
I also made dmz_main the default instead of ext_dmz_web.

Here's my pf.conf

# pfctl -vf /etc/pf.conf
ext_if = "em0"
dmz_if = "vether1"
webserver = "192.168.7.2"
webports = "80"
queue ext on em0 bandwidth 2M
queue ext_main parent ext bandwidth 500K default
queue ext_web parent ext bandwidth 500K
queue ext_udp parent ext bandwidth 400K
queue ext_mail parent ext bandwidth 600K
queue dmz on vether1 bandwidth 100M
queue ext_dmz parent dmz bandwidth 2M
queue ext_dmz_web parent ext_dmz bandwidth 800K
queue ext_dmz_udp parent ext_dmz bandwidth 200K
queue ext_dmz_mail parent ext_dmz bandwidth 1M
queue dmz_main parent dmz bandwidth 25M default
queue dmz_web parent dmz bandwidth 25M
queue dmz_udp parent dmz bandwidth 20M
queue dmz_mail parent dmz bandwidth 20M
match out log on vether1 inet from 10.0.0.0/24 to any nat-to (vether1) round-robin
pass in log on em0 inet proto tcp from any to 192.168.7.2 port = 80 flags S/SA set ( queue ext_web )
pass out log on vether1 inet proto tcp from any to 192.168.7.2 port = 80 flags S/SA set ( queue ext_dmz_web )

Initial rule and queue counters

# pfctl -vvqs rules
@0 match out log on vether1 inet from 10.0.0.0/24 to any nat-to (vether1:2) round-robin
  [ Evaluations: 490       Packets: 0         Bytes: 0           States: 0     ]
  [ Inserted: uid 0 pid 75321 State Creations: 0     ]
@1 pass in log on em0 inet proto tcp from any to 192.168.7.2 port = 80 flags S/SA set ( queue ext_web )
  [ Evaluations: 490       Packets: 0         Bytes: 0           States: 0     ]
  [ Inserted: uid 0 pid 75321 State Creations: 0     ]
@2 pass out log on vether1 inet proto tcp from any to 192.168.7.2 port = 80 flags S/SA set ( queue ext_dmz_web )
  [ Evaluations: 262       Packets: 0         Bytes: 0           States: 0     ]
  [ Inserted: uid 0 pid 75321 State Creations: 0     ]

# pfctl -vqs queue
queue ext on em0 bandwidth 2M
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue ext_main parent ext bandwidth 500K default
  [ pkts:          1  bytes:         60  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue ext_web parent ext bandwidth 500K
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue ext_udp parent ext bandwidth 400K
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue ext_mail parent ext bandwidth 600K
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue dmz on vether1 bandwidth 100M
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue ext_dmz parent dmz bandwidth 2M
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue ext_dmz_web parent ext_dmz bandwidth 800K
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue ext_dmz_udp parent ext_dmz bandwidth 200K
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue ext_dmz_mail parent ext_dmz bandwidth 1M
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue dmz_main parent dmz bandwidth 25M default
  [ pkts:         20  bytes:       6077  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue dmz_web parent dmz bandwidth 25M
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue dmz_udp parent dmz bandwidth 20M
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue dmz_mail parent dmz bandwidth 20M
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]


Then I did one curl http:/..... call from outside of $ext_if to a webserver behind $dmz_if.

# tcpdump -e -n -ttt -i pflog0 ip and port 80
tcpdump: WARNING: snaplen raised from 116 to 160
tcpdump: listening on pflog0, link-type PFLOG
Nov 30 21:15:39.431581 rule 1/(match) pass in on em0: 10.0.0.6.54021 > 192.168.7.2.80: S 674821420:674821420(0) win 29200 <mss 1460,sackOK,timestamp 1385678 0,nop,wscale 7> (DF)
Nov 30 21:15:39.431628 rule 0/(match) match out on vether1: 192.168.7.14.56560 > 192.168.7.2.80: S 674821420:674821420(0) win 29200 <mss 1460,sackOK,timestamp 1385678 0,nop,wscale 7> (DF)
Nov 30 21:15:39.431638 rule 2/(match) pass out on vether1: 192.168.7.14.56560 > 192.168.7.2.80: S 674821420:674821420(0) win 29200 <mss 1460,sackOK,timestamp 1385678 0,nop,wscale 7> (DF)
^C
3 packets received by filter
0 packets dropped by kernel


Here are the counters afterwards

# pfctl -vvqs rules
@0 match out log on vether1 inet from 10.0.0.0/24 to any nat-to (vether1:2) round-robin
  [ Evaluations: 912       Packets: 33        Bytes: 23306       States: 1     ]
  [ Inserted: uid 0 pid 75321 State Creations: 0     ]
@1 pass in log on em0 inet proto tcp from any to 192.168.7.2 port = 80 flags S/SA set ( queue ext_web )
  [ Evaluations: 912       Packets: 33        Bytes: 23306       States: 1     ]
  [ Inserted: uid 0 pid 75321 State Creations: 1     ]
@2 pass out log on vether1 inet proto tcp from any to 192.168.7.2 port = 80 flags S/SA set ( queue ext_dmz_web )
  [ Evaluations: 474       Packets: 33        Bytes: 23306       States: 1     ]
  [ Inserted: uid 0 pid 75321 State Creations: 1     ]


The rule counters sort of look like what I'd expect, except for maybe the byte count being the same everywhere.

# pfctl -vqs queue
queue ext on em0 bandwidth 2M
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue ext_main parent ext bandwidth 500K default
  [ pkts:          1  bytes:         60  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue *ext_web* parent ext bandwidth 500K
  [ pkts:         20  bytes:     *22968*  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue ext_udp parent ext bandwidth 400K
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue ext_mail parent ext bandwidth 600K
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue dmz on vether1 bandwidth 100M
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue ext_dmz parent dmz bandwidth 2M
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue ext_dmz_web parent ext_dmz bandwidth 800K
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue ext_dmz_udp parent ext_dmz bandwidth 200K
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue ext_dmz_mail parent ext_dmz bandwidth 1M
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue *dmz_main* parent dmz bandwidth 25M default
  [ pkts:         34  bytes:     *10122*  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue dmz_web parent dmz bandwidth 25M
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue dmz_udp parent dmz bandwidth 20M
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]
queue dmz_mail parent dmz bandwidth 20M
  [ pkts:          0  bytes:          0  dropped pkts:      0 bytes:      0 ]
  [ qlength:   0/ 50 ]


Here I expect the byte counters in ext_dmz_web and ext_web to go up, but only ext_web triggered. So am I reading this wrong, or did the request traffic indeed use the default queue dmz_main instead?

I must be missing something.
Clue stick desperately needed.
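
For reference, my reading of the same-queue-name suggestion, applied to my interfaces, would be something like this (untested sketch; the qWeb name is taken from your example, and the bandwidth numbers are just copied from my config above):

# untested sketch: same queue name (qWeb) defined on both interfaces, as suggested
queue ext on em0 bandwidth 2M
        queue ext_main on em0 parent ext bandwidth 500K default
        queue qWeb on em0 parent ext bandwidth 500K

queue dmz on vether1 bandwidth 100M
        queue dmz_main on vether1 parent dmz bandwidth 25M default
        queue qWeb on vether1 parent dmz bandwidth 800K

match proto tcp to $webserver port 80 set queue qWeb

Is that the idea, with the pass rules then not setting a queue of their own?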

--
Mit freundlichen Grüßen/Best regards

Mario Theodoridis
