On Fri, May 30, 2014 at 2:15 PM, Adam Thompson <athom...@athompso.net> wrote:
> On 2014-05-30 12:41, Giancarlo Razzolini wrote:
>
>> From my experience, if you have an asymmetric link, where your
>> download rate is bigger than your upload rate, you can see benefits in
>> putting hfsc in front of it. And the most benefit seems to be on the
>> upload side. There are some factors that weigh in, such as router
>> buffers and network congestion outside of your own network. Speaking
>> of which, I recently read the CoDel spec:
>> https://en.wikipedia.org/wiki/CoDel. I don't know if it really helps
>> the bufferbloat problem, but that is another matter entirely; perhaps
>> Henning could explain better whether or not it should be put into pf.
>>
>> Now, when you have a symmetric link with enough bandwidth (10+ MB/s),
>> which, by the way, depending on the technology used, has little or no
>> buffer at all, then prio will generally do the job, even with p2p
>> applications. Just don't forget there is always NAT involved, so you
>> need to prio packets all the way, just as you should with hfsc. I find
>> that using tags is the most effective way to do so.
>>
>> Cheers,
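
A minimal pf.conf sketch of the tag-on-ingress approach Giancarlo
describes (interface names, ports, and the tag name are illustrative;
`set prio` and `nat-to` assume a reasonably recent OpenBSD pf):

```
# Classify once as traffic enters on the LAN side; the tag survives
# NAT, so egress rules can still see it after the address rewrite.
int_if = "em1"
ext_if = "em0"

match in on $int_if proto udp to port 5060 tag VOIP
match out on $ext_if from $int_if:network nat-to ($ext_if)

# Prioritize tagged packets on the way out the external interface.
# pf is last-match, so the generic rule comes first and the more
# specific tagged rule overrides it.
pass out on $ext_if set prio 3
pass out on $ext_if tagged VOIP set prio 6
```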
>
> Provably, it's not just the "most" benefit from limiting uploads, it's
> the "only" benefit. Limiting inbound traffic is pointless.
>
> By the time an inbound packet arrives at the ethernet interface of
> your pfSense box, it's far too late to bother policing it.
>
> The only time QoS actually does anything is when there is resource
> contention. By definition, resource contention does not occur on the
> receiving end - either you have the horsepower to receive and process
> all the packets or you don't; adding extra CPU steps on every received
> packet will not magically allow you to receive more data if your
> system cannot handle the IRQ load or the bandwidth, or doesn't have
> enough mbufs, or is otherwise underpowered.
>
> Where QoS does its magic is when there is too little bandwidth (or
> too few timeslots) to egress a packet *immediately*. If the interface
> is idle, a higher-priority packet will be sent just as fast as the
> lower-priority packet.
>
> There are two ways to influence the behaviour of a downstream device:
> tagging (whether DSCP or 802.1p), and rate-limiting.
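
The first mechanism, tagging, can be done from pf itself; a hedged
example using `set tos` (0xb8 is the TOS byte encoding DSCP EF; the
interface and port are illustrative, and the downstream device is free
to ignore the marking):

```
# Mark outbound VoIP signalling with DSCP EF (TOS byte 0xb8) so a
# DSCP-aware device further downstream *may* prioritize it.
match out on em0 proto udp to port 5060 set tos 0xb8
```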
>
> If you know the next device in stream (a DSL modem, say) can only
> upload at 768Kbit/sec, and you very carefully only ever send it
> 750Kbit/sec of traffic, you remain in control of what packets get
> sent out first. As soon as you start filling its buffer (say, by
> allowing bursts of 10Mbit/sec traffic), the modem is now in control
> of what packets to send first, and you typically have no idea if it's
> obeying your 802.1p or DSCP markings.
>
> HFSC does a good job of rate-limiting (the 2nd case) so that the
> dumber device never has to make any decisions of its own.
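
Under the pre-5.5 ALTQ syntax, the shape-below-link-rate idea from the
DSL example might look roughly like this (interface, numbers, and queue
names are illustrative):

```
# Cap egress at 750Kb on a 768Kb uplink so the queue builds here,
# where pf decides the ordering, instead of in the modem's buffer.
altq on em0 bandwidth 750Kb hfsc queue { voip, bulk }
queue voip bandwidth 15% hfsc (realtime 128Kb)
queue bulk bandwidth 85% hfsc (default)

# Last-match: generic rule first, specific VoIP rule second.
pass out on em0 queue bulk
pass out on em0 proto udp to port 5060 queue voip
```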
>
> In the meantime, please stop applying rate-limiting on inbound
> packets - it's pointless. If you have a resource-constrained LAN or
> DMZ interface (e.g. 1Mbps WiFi or maybe Bluetooth PAN, or maybe you
> have a 100Mbit internet connection but only a 10Mbit LAN?) then the
> way to solve that is to apply QoS policies on the outbound packets as
> they leave the router and enter the slower network.
>
> Generally, QoS classification (i.e. tagging) should happen on
> ingress, and policing (i.e. rate-limiting) should happen on egress.
>
> If you don't agree with this, please 1) demonstrate that it does make
> a difference, and then 2) let's figure out why setting QoS on ingress
> makes a difference, because that violates... well... everything. The
> theoretical basis for this today is pretty solid; I'm prepared to
> believe there are implementation-specific exceptions, but they should
> get rooted out and eliminated.
>
> The only general exception I'm aware of currently is where an
> intermediate traffic plane cannot handle all the ingress traffic
> flowing over it, in which case QoS more or less consists of
> "selectively drop on ingress", not "rate-limit". This is bad
> architecture. Even cheap switches are non-blocking inside the switch
> fabric nowadays. However, this is why Cisco still documents QoS
> rate-limiting *on ingress* for many of their large L2/L3 switching
> platforms... (RED/WRED can be an example of this in some
> architectures.) This exception does not apply to any pf
> implementations that I know of.
>
> The best explanation of this I've seen is in O'Reilly's "Juniper MX
> Series" book, which spends a ridiculous amount of time (4 chapters,
> IIRC) explaining how Juniper MX routers implement queuing theory in
> hardware.
>
> I am aware that this message contains a very shallow treatment of QoS
> theory; there are numerous edge cases where complex policies on
> ingress are warranted. But if you're just building a pf policy,
> setting inbound VoIP traffic to a high priority does NOT magically
> make your upstream provider send you VoIP packets with high priority
> - you don't control their behaviour from your local pf.conf!
>
> -Adam Thompson
>  athom...@athompso.net
>


Just curious: since TCP has flow and congestion control, shouldn't it
be possible to reduce the inbound bandwidth of a TCP connection even
without controlling the previous hop?
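
That idea might be sketched like this: queue traffic as it egresses the
router's *internal* interface toward the LAN, so delayed or dropped
segments make the remote TCP sender's congestion control back off. Note
that from pf's point of view this is still egress shaping, consistent
with the advice above (illustrative ALTQ example; interface and numbers
are made up):

```
# Shape traffic leaving the internal interface toward the LAN.
# Remote TCP senders back off when segments are delayed or dropped,
# approximating an inbound limit without touching the previous hop.
altq on em1 bandwidth 8Mb hfsc queue { std }
queue std bandwidth 100% hfsc (default, upperlimit 8Mb)

pass out on em1 queue std
```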


-- 
---------------------------------------------------------------------------------------------------------------------
() ascii ribbon campaign - against html e-mail
/\
