Thankfully my protocol doesn't have to interact with other TCPs in any
way - I'm just pinching the TCP flow control mechanism as I figure I'm
not going to come up with a better one.  Since it has no need to be
bit-compatible with TCP, and since I'm coding for a specific app
rather than the general case, I can take some shortcuts, such as not
having rwnd (the receiver will always want as much data as the sender
can throw at it), and building SACK in rather than having it as an
option.
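For what it's worth, dropping rwnd collapses the usual min(cwnd, rwnd)
sender-window calculation down to cwnd alone.  A minimal sketch of what
I mean (names and structure are mine, not from any real stack):

```python
MSS = 1460  # illustrative segment size in bytes

def usable_window(cwnd: int, bytes_in_flight: int) -> int:
    """Bytes the sender may still put on the wire.

    Standard TCP would clamp to min(cwnd, rwnd); with a receiver that
    always wants as much data as the sender can supply, only the
    congestion window matters.
    """
    return max(0, cwnd - bytes_in_flight)

# e.g. a cwnd of 10 segments with 4 in flight leaves 6 segments of headroom
print(usable_window(10 * MSS, 4 * MSS))  # 8760
```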

Your point on defensive coding is well taken, and probably more so in
a p2p resource-sharing use case than in general.  I'm surprised though
that 'blowing slow start wide open' as you describe actually results
in better performance - surely that means that when you reach the
limit of the connection, you'll lose a ton of packets, and take a long
while to recover?  Or does SACK mitigate this enough not to be a
problem?
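To put rough numbers on that worry, here's a toy model of slow start
(assumptions all mine: cwnd grows by one MSS per ACK received, which is
the per-ACK rather than per-ACKed-byte behaviour the Savage trick
exploits, and I ignore rwnd and loss entirely).  Comparing one
cumulative ACK per segment against ACKing every byte shows how fast an
opened-up slow start overshoots whatever the link can carry:

```python
MSS = 1460  # bytes per segment (illustrative)

def slow_start_cwnd(acks_per_segment: int, rtts: int) -> int:
    """cwnd in bytes after `rtts` round trips, assuming the sender
    grows cwnd by one MSS for every ACK it receives."""
    cwnd = MSS
    for _ in range(rtts):
        segments_sent = cwnd // MSS
        cwnd += segments_sent * acks_per_segment * MSS
    return cwnd

# Normal slow start: one ACK per segment, cwnd doubles each RTT.
print(slow_start_cwnd(1, 4) // MSS)    # 16 segments after 4 RTTs
# ACK every byte of each 1460-byte segment: cwnd explodes in one RTT.
print(slow_start_cwnd(MSS, 1) // MSS)  # 1461 segments after 1 RTT
```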

I find it quite interesting that detecting congestion through packet
loss turns out to be a better-performing solution than using
heuristics to spot congestion before it occurs.  Simplicity wins, I
guess.

Many thanks to you both.  This is a great list.  :o)

Will


On 04/01/2008, Spencer Dawkins <[EMAIL PROTECTED]> wrote:
> What Wesley said, but just to add something...
>
> In my Humble But Sometimes Correct Opinion, the standards-track guys have
> been tilting toward simplicity for about a decade - and the simplest theory
> is "if the packet got to the far end and the ACK got back here, there's no
> congestion, so what are you waiting for?"
>
> A lot of people have dorked with other theories, especially involving how
> quickly a TCP "ramps up" and how quickly it recovers from packet loss, and
> some of these ideas have ended up in Experimental specifications - I
> especially liked Limited Slow-Start
> (ftp://ftp.rfc-editor.org/in-notes/rfc3742.txt) and HighSpeed TCP
> (ftp://ftp.rfc-editor.org/in-notes/rfc3649.txt), for high-speed links
> (especially links that are unlikely to lose packets). But there's a tradeoff
> between implementation complexity and performance (which includes how
> quickly you recover from congestion), and the theory I keep hearing is "but
> you can recover in just a few RTTs in that environment, anyway".
>
> If you have really odd path characteristics - megabit satellite links being
> one example - it's probably worth looking at experiments, but if you're
> working on a terrestrial P2P overlay, you're probably going to do well
> enough using standards-track TCP that you won't be able to justify reading
> the hundreds of master's thesis proposals that have been done in the past 15
> years.
>
> One other minor point, that's quite funny, really. After Stefan Savage et al
> wrote http://www.cs.ucsd.edu/~savage/papers/CCR99.pdf, which basically
> showed how a misbehaving receiver could dork with the sender to achieve
> really high performance at the cost of really BAD behavior when congestion
> is encountered, I started seeing TCPs that tried to detect non-standard
> receiver behavior. So if you expect to interoperate with standard TCPs - and
> I'm remembering Linux being one of the ones that implemented these checks,
> although I could be wrong - you probably don't want to behave oddly, because
> it's just not clear what would happen if you tripped a "misbehaving
> receiver" check.
>
> (if you haven't read Stefan's paper, please do - it's hilarious. I was at
> the conference where he presented tricks like "but if you ACK EVERY BYTE,
> most TCPs increase their congestion windows per ACK, not per ACKed segment,
> so you can basically blow slow-start wide open". The audience applauded, but
> then apparently went home and started coding defensively!)
>
> I hope this helps, too.
>
> Spencer
>
>
> > On Fri, Jan 04, 2008 at 04:37:54PM +0000, Will Morton wrote:
> >>
> >> I have built the protocol based on TCP Vegas, but after reading those
> >> references I clearly need to update it to use SACK and to ack-clock
> >> except on RTO, as you mention.  My implementation of the protocol is
> >> seemingly coming to resemble a microcosm of TCP development. :o)
> >>
> >> One further question, which RFCs 2018/2581 don't seem to address; do
> >> modern TCPs still use a Vegas-like system for detecting congestion in
> >> advance of packet loss, i.e. by measuring difference between expected
> >> and actual throughput and adjusting the window accordingly?  If not,
> >> what other mechanism[s] do they use?
> >
> >
> > A couple years ago the IETF's TCP Maintenance and Minor Extensions working
> > group put together an RFC that lists and describes most of the TCP RFCs
> > that
> > have been written and what their current implementation and recommendation
> > status is: http://www.ietf.org/rfc/rfc4614.txt
> > You might find that helpful.  Note that Vegas is not among the recommended
> > behaviors, and not written up in an RFC.
> >
> > *Standardized* modern TCP congestion control does not include any
> > delay-based
> > component.  Vista's CTCP does have a delay-based component and Linux
> > includes
> > some algorithms with delay-based components, but these are all
> > experimental,
> > and not even passed through the IETF TCPM group yet.  See:
> > http://www.ietf.org/internet-drafts/draft-sridharan-tcpm-ctcp-01.txt
> > http://www.ietf.org/internet-drafts/draft-rhee-tcpm-cubic-00.txt
> > for example specifications of "modern" *experimental* TCP congestion
> > controllers that have delay-based components that detect and react
> > to changes in RTT as congestion signals.
> > _______________________________________________
> > p2p-hackers mailing list
> > [email protected]
> > http://lists.zooko.com/mailman/listinfo/p2p-hackers
> >
>
>