Geoffrey S. Mendelson wrote:

>On Mon, Jan 23, 2006 at 04:32:16PM +0200, Shachar Shemesh wrote:
>>No, this is plain wrong. TCP employs a very powerful congestion control.
>>When it notices that the connection exceeded its allocated bandwidth, it
>>will lower the transmission rate, thus eliminating the need to drop
>>further packets. This will also clear the ISP's buffers.
>Allocated bandwidth implies that something or someone has set that
>allocation. Where is this allocation set and this mechanism occur? 
If you employ QoS on your line - you. If not, then the "allocated
bandwidth" is the bandwidth that does not cause buffers to fill and
packets to drop. Large buffers (such as those that exist both on your
end and on the ISP end of an ADSL connection) increase the effective
burst bandwidth, but do not affect the actual bandwidth.

>Is it present in all implementations? I assume you mean that Linux's
>implementation of TCP includes it.
>
No. It's present in the 4.4BSD public-domain implementation, as well as
in Linux's. It is defined by the TCP RFCs. It is fairly safe to assume
that all but the most brain-dead TCP/IP stacks have congestion control.
It's simply something TCP cannot really function without.

> Where does it apply?
>
It applies to each and every TCP connection, individually. When a new
TCP connection is established, TCP employs an algorithm called "slow
start", where it starts with a minimal bandwidth and keeps doubling it
each round trip as packets are successfully acknowledged. When packets
are first lost, TCP switches to a linear mode, where each acknowledged
packet slightly increases the bandwidth TCP allows itself to load on
that particular connection, and each dropped packet decreases it.
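That doubling-then-linear behaviour can be sketched in a few lines of
Python. This is a toy model of my own (window counted in segments, one
event per round trip), not any real stack's code:

```python
def evolve_cwnd(cwnd, ssthresh, event):
    """One round trip of slow start / congestion avoidance (toy AIMD)."""
    if event == "ack":
        if cwnd < ssthresh:
            cwnd *= 2                    # slow start: double per round trip
        else:
            cwnd += 1                    # linear mode: additive increase
    elif event == "loss":
        ssthresh = max(cwnd // 2, 2)     # multiplicative decrease
        cwnd = ssthresh
    return cwnd, ssthresh

cwnd, ssthresh = 1, 64
for event in ["ack", "ack", "ack", "loss", "ack", "ack"]:
    cwnd, ssthresh = evolve_cwnd(cwnd, ssthresh, event)
    print(event, cwnd, ssthresh)
```

The window doubles (1, 2, 4, 8) until the first loss halves it, after
which growth is one segment per round trip - exactly the two phases
described above.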

At the same time, TCP constantly measures the latency of the connection.
This allows it to estimate when an ACK is supposed to arrive, and thus
time its retransmissions accordingly.
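The classic smoothing that drives this looks roughly like the
following. The constants follow the well-known Jacobson/Karels
algorithm; the 200 ms floor is my own illustrative choice, not a value
any particular stack is guaranteed to use:

```python
def update_rto(srtt, rttvar, sample):
    """Fold one RTT sample (seconds) into the smoothed estimate and
    return (srtt, rttvar, rto)."""
    if srtt is None:                          # first measurement
        srtt, rttvar = sample, sample / 2.0
    else:
        rttvar = 0.75 * rttvar + 0.25 * abs(srtt - sample)
        srtt = 0.875 * srtt + 0.125 * sample
    rto = srtt + max(4 * rttvar, 0.2)         # illustrative 200 ms floor
    return srtt, rttvar, rto

# A low-latency path settles on a short retransmission timeout...
srtt = rttvar = None
for sample in [0.05] * 5:
    srtt, rttvar, rto_fast = update_rto(srtt, rttvar, sample)

# ...while artificially inflated latency drags the timeout up with it.
srtt = rttvar = None
for sample in [0.5] * 5:
    srtt, rttvar, rto_slow = update_rto(srtt, rttvar, sample)

print(rto_fast, rto_slow)
```

Note how the timeout tracks whatever latency it measures - which is
exactly why adding latency on purpose makes loss detection slower.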

This means that a policy that drops packets when they exceed whatever
bandwidth YOU decided to allocate for that particular connection will
cause the other end's TCP to understand that it has been sending packets
too fast, and slow down.

>>Increasing latency is bad. Not only does it not solve the problem (TCP
>>has very good mechanisms for handling high latency connections without
>>losing bandwidth), but it also makes the user experience seem bad.
>That's my whole point, IMHO that's what you want to do, but be selective
>about it.
>
You missed the part about "not solving the problem". As TCP measures the
connection's latency, if you manually increase the latency, two things
will happen:
1. The applications will attempt larger send-aheads, keeping more
unacknowledged data in flight, so that the allocated bandwidth (which
they have no reason to believe is lower just because the latency
increased) stays saturated. This means you will find yourself in a loop
of increasing the latency more and more.
2. If a packet DOES get lost, it will result in the TCP taking far too
long to realize that an ACK went missing. It will not realize this until
it expects the ACK to arrive, which is a very long time indeed, thanks
to you. If packets 1, 2, 3 and 4 were sent, and packet 1 was lost, it
will take the sending TCP a long time to realize that, as it expects the
ACK for 1 to arrive very late. In the best case, this will result in
silence where it could have been transmitting. I think the more
realistic case, however, is that it will just go ahead and send 1, 2, 3,
4, 5, 6, 7 and 8 (as it thinks all those packets are really in transit
before an ACK for 1 should be seen). If 1 is really lost, once it
realizes this, it will start retransmitting 1 through 8 all over again.
The receiving end will send an ACK for 8 right after it receives 1 again
(as it already has 2 through 8), but as you delay this ACK too, 2, 3, 4,
5, 6, 7 and 8 will get retransmitted, costing you that very same
precious bandwidth you were trying to save.
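The waste in that scenario can be made concrete with a toy go-back-N
model. This is illustrative only - a stack with selective ACK would
fare better, but plain cumulative-ACK TCP behaves essentially like
this:

```python
def retransmit_cost(sent, lost, ack_seen_up_to):
    """Which segments cross the wire again when `lost` times out while
    the cumulative ACK still stands below it (go-back-N behaviour)."""
    if ack_seen_up_to >= lost:
        return []                            # the ACK made it in time
    return [s for s in sent if s >= lost]    # resend lost and everything after

# Segments 1..8 in flight, segment 1 dropped, its ACK delayed past the RTO:
resent = retransmit_cost(sent=list(range(1, 9)), lost=1, ack_seen_up_to=0)
print(resent)   # [1, 2, 3, 4, 5, 6, 7, 8]
# Only segment 1 was actually lost; 2 through 8 cross the wire twice.
```

Had the ACK not been artificially delayed, `ack_seen_up_to` would have
advanced in time and nothing beyond segment 1 would be resent.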

>Sounds like it could be more easily incoperated in a pptp client instead
>of in the kernel.
>
Well, that's not where Linux's QoS is.

> At that level you still know what each packet is and
>where it is going.
>
But not which connection it belongs to. If you are doing load balancing
between two tunnel connections, the information may not even be
available to you at all, as another tunnel may have handled the relevant
packets.

> Another trick would be to lower the mtu for "bad"
>packets, but that might be a lot more difficult.
Huh? Lowering the MTU causes more overhead. It's fairly easy to achieve
(assuming there are no good and bad packets destined for the same
destination), but I'm afraid the only effect this will have is to
decrease network efficiency. I doubt it will have any effect on network
load.

>>Sure they had reason to resend them. These packets were over the
>>available bandwidth.
>But that decision was made arbitrarily by you, after you had received
>them.
>
Good! That's exactly the effect we are after, right? I want to assign
different bandwidth to different incoming connections.

> You asked for them, they sent them and sent them again.
Like I said in my previous reply, they would have resent them anyway.
Maybe not this particular packet, but as TCP uses dropped packets to
measure the connection's bandwidth, it will lose (due to congestion)
approximately the same percentage of the packets, no matter who drops
them due to bandwidth overflow.

>You are confusing tunneled and non tunneled connections.
>
I don't think I am. No.

> A tunneled connection
>always sends the same type of packets to the same destination.
>
Type has nothing to do with it.

>UDP connections will get dropped packets due to buffer full, TCP should 
>not.
>
The only reason TCP gets fewer dropped packets is that it senses the
available bandwidth for the connection. The way it does that (assuming
ECN is not enabled) is by counting dropped packets. Hence TCP will also
get dropped packets, just not as many.

> In the end if the packets are dropped, then the receiver will wait
>for them and after a time out, ask for them to be retransmitted.
>
No. The sender will estimate when an ACK for them was supposed to
arrive, and resend them of its own accord. A receiver in TCP never knows
that the sender sent something unless it arrived. There is no NACK in
TCP (selective ACK notwithstanding).

>That's the problem, the wait time needs to me a few miliseconds, not a
>TCP timeout. 
The TCP timeout is tuned to each connection's latency. I can assure you
that this is NOT the problem.

Shachar

-- 
Shachar Shemesh
Lingnu Open Source Consulting ltd.
Have you backed up today's work? http://www.lingnu.com/backup.html

