Re: unexpected behavior with pf queues (bandwidth allocations seemingly ignored)

2021-07-24 Thread David Dahlberg
Please try first to remove "min". "min" makes it a "real-time service curve" in 
HFSC terminology, which may react … "unexpectedly" when exceeded. And you do 
not want "real-time" properties for file transfers anyway.
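
For readers less familiar with HFSC: each of pf.conf's bandwidth keywords 
selects one of HFSC's three service curves. A minimal annotated sketch 
(pf.conf syntax, using the queue from the rules quoted below; the max value 
here is illustrative, not from the original rules):

# bandwidth -> linkshare curve: the queue's share when siblings compete
# min       -> realtime curve: a hard guarantee, served before all linkshare traffic
# max       -> upperlimit curve: a hard cap the queue can never exceed
queue file1_bak parent rootq bandwidth 10M min 8M max 10M qlimit 1024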

> On 24.07.2021, at 00:21, Scott Lewandowski wrote:
> 
> I am attempting to prioritize traffic from a particular host. I have the 
> following queue definitions, with this match rule:
> 
> queue rootq on $ext_if bandwidth 13M max 13M
> queue file1_bak parent rootq bandwidth 10M min 8M qlimit 1024
> queue std parent rootq bandwidth 3M min 2M default qlimit 1024
> 
> match from 192.168.1.176 set queue file1_bak
> 
> [...]



Re: unexpected behavior with pf queues (bandwidth allocations seemingly ignored)

2021-07-24 Thread Scott Lewandowski
Hi David, thanks for your reply. I had initially tried the rules without a max 
or min specification and did not see the desired behavior. I just reconfirmed 
that using the rules without a min specified exhibits the unexpected behavior:

queue rootq on $ext_if bandwidth 13M max 13M
queue file1_bak parent rootq bandwidth 10M qlimit 1024
queue std parent rootq bandwidth 3M default qlimit 1024

fw0# pfctl -v -sq 
queue rootq on vmx0 bandwidth 13M, max 13M
  [ pkts:  0  bytes:  0  dropped pkts:  0 bytes:  0 ]
  [ qlength:   0/ 50 ]
queue file1_bak parent rootq bandwidth 10M qlimit 1024
  [ pkts:   6719  bytes:    9497288  dropped pkts:  0 bytes:  0 ]
  [ qlength:   0/1024 ]
queue std parent rootq bandwidth 3M default qlimit 1024
  [ pkts:  75265  bytes:  103153102  dropped pkts:  0 bytes:  0 ]
  [ qlength:   9/1024 ]
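
To see how the scheduler actually splits the link while a transfer runs, it 
can help to sample the counters live rather than once. A sketch (assuming 
systat's queues view is available, as on recent releases; otherwise, comparing 
successive pfctl snapshots works too):

# live per-queue rates, refreshed every second
systat queues 1

# or poll pfctl and compare byte counters between samples
while :; do pfctl -v -s queue; sleep 5; done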


> -----Original Message-----
> From: David Dahlberg 
> Sent: Saturday, July 24, 2021 8:47 AM
> To: Scott Lewandowski 
> Cc: misc@openbsd.org
> Subject: Re: unexpected behavior with pf queues (bandwidth allocations
> seemingly ignored)
> 
> Please try first to remove "min". "min" makes it a "real-time service
> curve" in HFSC terminology, which may react … "unexpectedly" when
> exceeded. And you do not want "real-time" properties for file transfers
> anyway.
> 
> > [...]



unexpected behavior with pf queues (bandwidth allocations seemingly ignored)

2021-07-23 Thread Scott Lewandowski
I am attempting to prioritize traffic from a particular host. I have the 
following queue definitions, with this match rule:

queue rootq on $ext_if bandwidth 13M max 13M
queue file1_bak parent rootq bandwidth 10M min 8M qlimit 1024
queue std parent rootq bandwidth 3M min 2M default qlimit 1024

match from 192.168.1.176 set queue file1_bak

However, even when the host at .176 has a steady stream of data to output, it 
is not being prioritized for bandwidth utilization. For example:

fw0# pfctl -v -sq 
queue rootq on vmx0 bandwidth 13M, max 13M
  [ pkts:  0  bytes:  0  dropped pkts:  0 bytes:  0 ]
  [ qlength:   0/ 50 ]
queue file1_bak parent rootq bandwidth 10M, min 8M qlimit 1024
  [ pkts:   1279  bytes:    1825459  dropped pkts:  0 bytes:  0 ]
  [ qlength:   0/1024 ]
queue std parent rootq bandwidth 3M, min 2M default qlimit 1024
  [ pkts:   8994  bytes:   12333179  dropped pkts:  0 bytes:  0 ]
  [ qlength:   2/1024 ]

Even after an extended period of execution, I see similar results. The 
supposedly prioritized host sees upload speeds of 17-200KB/s, whereas other 
hosts see 800KB/s or more.
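
For scale, a naive proportional estimate of what "prioritized" should look 
like here (assuming both queues stay backlogged and all of the host's traffic 
lands in file1_bak):

expected file1_bak share = 10M / (10M + 3M) * 13M, i.e. about 10 Mbit/s, or roughly 1.25 MB/s

so the observed 17-200KB/s is far below even that baseline.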

I do not understand the behavior I am seeing. Why are other hosts being 
allocated so much bandwidth for uploads? 
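
Aside: since queues only shape packets leaving the interface they are attached 
to, an interface- and direction-qualified form of the match rule makes the 
intent explicit (a suggestion, not a confirmed diagnosis):

match out on $ext_if from 192.168.1.176 set queue file1_bak

The bare rule should behave the same for uploads, since only $ext_if has 
queues attached, but the explicit form rules out accidental queue assignments 
in other directions.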

Also of interest is that when I added the queues, a host that reliably used to 
have consistent 27MB/s downloads now sees variable speeds between 13 and 
24MB/s, even when there is no other (meaningful) network activity. I'm not sure 
if this is related to the upload speed issue I am seeing. I realize there is 
outbound control traffic from the downloading host, but I can't imagine that 
should be impacted by the queues when there is no other meaningful network 
traffic. To try to address the download issue, I've experimented with adding a 
control traffic queue and assigning traffic to (std, ctrl), but that hasn't 
helped (in fact, it's hurt).
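
A setup of the kind described might look like this (a sketch: the queue name 
ctrl is from the text above, but the 1M figure and the rule are illustrative, 
not the exact configuration used):

queue ctrl parent rootq bandwidth 1M qlimit 1024
match out on $ext_if proto tcp set queue (std, ctrl)

With the two-queue form, pf assigns low-delay packets -- empty TCP ACKs and 
packets with a lowdelay type-of-service -- to the second queue and everything 
else to the first.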

Based on some past threads I've read on related issues, I've tried adding "max" 
specifications to each queue, but that hasn't helped, and it doesn't seem it 
should be necessary based on the docs. Oddly, if I specify a max of 13 on each 
rule -- with no suffix, which I accidentally did -- I seem to get the desired 
behavior, but in that case pf obviously isn't enforcing the max correctly, and 
I also see download speeds of less than 1KB/s. Adding the intended suffix gives 
the same observable behavior as I saw without the max specifier at all. 
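
A unit note that may bear on that experiment: pf.conf bandwidth values without 
a suffix are plain bits per second, so "max 13" caps a queue at 13 bit/s 
rather than 13 Mbit/s:

queue std parent rootq bandwidth 3M max 13M   # 13 Mbit/s: the intended cap
queue std parent rootq bandwidth 3M max 13    # 13 bits per second: effectively strangles the queue

which would at least be consistent with the sub-1KB/s downloads seen in that 
test.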

I am running up-to-date OpenBSD 6.9 on ESXi 6.7 with vmxnet3 vNICs. The VM has 
2 vCPUs and 1 GB of RAM, and is showing no sign of resource constraints.

Any help or thoughts would be appreciated!