+1 (And you can only queue on egress anyway ;)

Just to add something to the questions about the differences between prio and queuing..


I thought that prio, being as simple as it is, works lower down the stack and on ingress (i.e. it can cherry-pick high-prio ingress packets to go up the stack first). Queuing is done at egress, which is a looong way away from the ingress..

So whilst the impact may be minimal, if I have a busy firewall (BIG GIANT and all that..) whose CPU is working very hard, I would want prio to prioritize my voice/video packets on ingress and queue on the other side during egress.

Theoretically, the packets dropped due to CPU thrashing would then be limited to the lower-prio packets..?!?
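
For illustration, a minimal pf.conf sketch of that split (5.5-style queueing syntax; the interface name, VoIP ports, and bandwidth figures are my own assumptions, and whether set prio actually helps on the ingress side is exactly the open question above):

    # Classify on ingress: mark voice/video (assumed SIP/RTP ports) high prio
    match in on em0 proto udp to port { 5060, 16384:32768 } set prio 6

    # Shape on egress: give the same traffic its own queue going back out
    queue main  on em0 bandwidth 100M
    queue voice parent main bandwidth 30M
    queue bulk  parent main bandwidth 70M default
    match out on em0 proto udp to port { 5060, 16384:32768 } set queue voice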

Thoughts/abuse/suggestions :)
Cheers, Andy.



On Sat 31 May 2014 00:39:21 BST, Adam Thompson wrote:
On 14-05-30 05:07 PM, sven falempin wrote:
Just curious. Because TCP has flow and congestion control, it should
be possible to reduce the input bandwidth of a TCP connection even
without controlling the previous hop???
Yes, but consider a router with 3 interfaces: WAN, LAN1 and LAN2. Let
us assume WAN is a 100Mbps circuit, LAN1 is a gigabit ethernet
connection, and LAN2 is only 10Mbps - perhaps it's an 802.11b WiFi
card in AP mode, or perhaps it's a circuit to a branch office; it
doesn't matter except that it's noticeably slower.
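
To make the rule sketches further down concrete, assume hypothetical pf.conf macros along these lines (the driver names are my own invention, not anything from the thread):

    wan  = "em0"   # 100Mbps WAN circuit
    lan1 = "em1"   # gigabit ethernet LAN
    lan2 = "em2"   # 10Mbps link (WiFi AP or branch circuit)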

I will ignore NAT for simplicity; AFAIK all the concepts remain valid
regardless.

Now, say you want to reserve some portion of bandwidth for SSH (tcp
port 22, to make things easy).  Perhaps you've decided you want to
allow up to 80Mbps for SSH traffic on the WAN.  (This is a bad policy,
and I'll now explain why.)
We can easily control packets outbound to WAN; this is the common use
case.
Let's say we did the same thing to packets arriving on the WAN
interface, and that's where we cap SSH at 80Mbps.
Note that this does not prevent the entire 100Mbps pipe filling up
with SSH packets - although, as you point out, since SSH runs over
TCP, dropping 20% of the packets will fairly quickly cause that
TCP session to stop saturating the link... but it can still happen
briefly.
What's worse, though, is that although the WAN is slower than LAN1,
implying that we can (generally) always egress packets to LAN1 as
they arrive on the WAN, what do we do with LAN2?  Force-feed 80Mbps
onto a 10Mbps medium somehow?  That's impossible.  What happens there
is that even without any policing (rate-limiting), we'll be dropping
packets.  Or at least we will if we're pushing more than 10Mbps...

If we instead said the policy was "80% of the connection may be used
for SSH traffic", we would attach a rule to packets outbound on each
interface, and each rule would limit SSH traffic to 80% of that
interface's bandwidth.  The actual traffic flows seen by the client
now match our (more flexible) policy.
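
A rough sketch of that policy in the 5.5-style queueing syntax, using the assumed macros above (the queue names, and the use of "max" to make the 80% a hard cap rather than a guarantee, are my assumptions; LAN1 would get the same treatment with its own gigabit numbers):

    queue wan_root on $wan bandwidth 100M
    queue wan_ssh  parent wan_root bandwidth 80M max 80M
    queue wan_std  parent wan_root bandwidth 20M default

    queue lan2_root on $lan2 bandwidth 10M
    queue lan2_ssh  parent lan2_root bandwidth 8M max 8M
    queue lan2_std  parent lan2_root bandwidth 2M default

    match out on $wan  proto tcp to port 22 set queue wan_ssh
    match out on $lan2 proto tcp to port 22 set queue lan2_ssh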

It is true that in a perfectly symmetric situation, assuming 100%
utilization, it doesn't matter where you drop packets or where you
rate-limit flows; nearly the same effect will occur no matter what.

My point, which I realize I've now addressed from three different
angles without a unifying overview, is that there's no point in
limiting on ingress: the packets are already there whether you choose
to forward them or not.
In the case of TCP, dropping packets on ingress will work, but is like
using a sledgehammer to kill a fly - there are much more subtle ways
to do it that don't break everything nearby.
In the case of UDP, dropping packets may be completely pointless,
depending on the protocol, or it may have an effect similar to TCP's.

In either case, applying classification on ingress *for every
interface* and policing on egress *for every interface* will
(generally) give you the flexibility you need without painting
yourself into a corner.
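
One hedged sketch of that pattern, again with the assumed macros and queues from above, using pf's tag/tagged keywords to carry the classification from ingress to egress:

    # Classify on ingress, on every interface
    match in on $wan  proto tcp to port 22 tag SSH
    match in on $lan1 proto tcp to port 22 tag SSH
    match in on $lan2 proto tcp to port 22 tag SSH

    # Police on egress, on every interface that has queues defined
    match out on $wan  tagged SSH set queue wan_ssh
    match out on $lan2 tagged SSH set queue lan2_ssh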

I'm trying to figure out how to formulate my old garden-hose analogy,
but apparently I've forgotten how to make it sound meaningful - stay
tuned.
