I dunno... maybe the killer feature is that web browsers are really bad when
you use them for downloading files of any kind above a few KB, especially
if you are on a low-bandwidth connection like a dial-up line or you are in
India, or on a high-corruption connection like wireless, where the TCP
connection starts behaving weird...

Having a separate way of dealing with large files, without the user having
to worry too much about putting that file in the proper place in a web
server directory, makes it all the more worthwhile?
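
Something like this is really all it takes over plain HTTP to pick a
dropped download back up where it left off (a rough Python sketch: no
error handling, and the URL and filename are made up):

    import os
    import urllib.request

    def resume_download(url, dest):
        # Ask the server for only the bytes we don't already have.
        offset = os.path.getsize(dest) if os.path.exists(dest) else 0
        req = urllib.request.Request(url, headers={"Range": "bytes=%d-" % offset})
        with urllib.request.urlopen(req) as resp, open(dest, "ab") as out:
            while chunk := resp.read(64 * 1024):
                out.write(chunk)

    # resume_download("http://example.com/pub/big.iso", "big.iso")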

On 1/5/07, Adam Fisk <[EMAIL PROTECTED]> wrote:

That makes sense, Serguei, particularly in terms of the effect of the
algorithm (not sure that was its original intent!).  The nebulous purpose
of it is what really gets me from the protocol design perspective.  If it
were some really killer feature that, say, tripled throughput, it might
justify creating a new protocol, but it's just not.

-Adam


On 1/5/07, Serguei Osokine <[EMAIL PROTECTED]> wrote:
>
> On Friday, January 05, 2007 Adam Fisk wrote:
> > Tit-for-tat is basically providing incentive to keep your client
> > running.
>
>         I would argue that it is actually a way to loosely group the
> similar-bandwidth hosts together in the cloud. Not entirely sure why
> - maybe this simplifies the selection of peers, allowing you to use
> what amounts to a constant number of them instead of having to grab
> more and more when their cumulative uplink bandwidth happens to be
> inadequate.
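>
>         (A toy sketch of that grouping effect in Python, with made-up
> rates: if every peer prefers to upload to whoever uploads fastest to
> it, the two fastest peers pair off first, then the next two, and so
> on - fast with fast, slow with slow.)
>
>     # Made-up uplink rates; under mutual-best matching, the fastest
>     # unmatched peers always prefer each other over anyone slower.
>     rates = {"A": 100, "B": 90, "C": 10, "D": 8}
>     peers = sorted(rates, key=rates.get, reverse=True)
>     pairs = [(peers[i], peers[i + 1]) for i in range(0, len(peers), 2)]
>     print(pairs)  # [('A', 'B'), ('C', 'D')]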
>
>         In any case, if you are downloading, your client is already
> running, isn't it? Unless you're talking about "keep the honest
> client running", as opposed to the hypothetical cheating leecher
> client, which is really made pretty impractical by tit-for-tat.
> (Not sure how serious this problem was to begin with, but that
> is a long discussion.)
>
>         Best wishes -
>         S.Osokine.
>         5 Jan 2007.
>
>
> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]] On Behalf Of Adam Fisk
> Sent: Friday, January 05, 2007 12:47 PM
> To: theory and practice of decentralized computer networks
> Subject: Re: [p2p-hackers] HTTP design flawed due to lack of
> understanding of TCP
>
>
> Hi Peter-  I've come to this view after closer work over the last several
> years with the IETF and more intimate experience with protocol design in
> implementing various IETF protocols, particularly within the SIP family.
> My initial forays into protocol design came from working on many different
> protocols on Gnutella, and some of the Gnutella protocols suffer from the
> same problems as BitTorrent.
>
> In a nutshell, well-architected protocols are designed to do very specific
> things well.  This allows each protocol to evolve independently, with each
> protocol yielding control to others in the stack at the appropriate levels
> of abstraction.  In SIP, this approach is readily apparent and strikingly
> effective, with SIP exclusively establishing sessions, leaving the Session
> Description Protocol (SDP) to describe the session, the MIME specifications
> within SDP to describe the type of media the session will handle, and with
> STUN and ICE handling thorny NAT traversal issues.  Each protocol is
> independent of the others, with these discrete building blocks leading to
> incredible flexibility as the protocols evolve.  It also allows discrete
> open source projects to be extremely focused in the protocols they
> implement.
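>
> To make the layering concrete, here is a stripped-down sketch in Python
> of how the pieces nest (the addresses and identifiers are made up, and
> a real INVITE carries more headers, like Via and Max-Forwards):
>
>     # SDP describes the session: here, a single audio stream.
>     sdp = "\r\n".join([
>         "v=0",
>         "o=alice 2890844526 2890844526 IN IP4 192.0.2.1",
>         "s=call",
>         "c=IN IP4 192.0.2.1",
>         "t=0 0",
>         "m=audio 49170 RTP/AVP 0",
>     ]) + "\r\n"
>
>     # SIP only establishes the session; the MIME Content-Type header
>     # names the payload format, so SDP could be swapped out entirely.
>     invite = "\r\n".join([
>         "INVITE sip:[email protected] SIP/2.0",
>         "From: <sip:[email protected]>",
>         "To: <sip:[email protected]>",
>         "Call-ID: [email protected]",
>         "CSeq: 1 INVITE",
>         "Content-Type: application/sdp",
>         "Content-Length: %d" % len(sdp),
>         "",
>         sdp,
>     ])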
>
> One key to these principles is to re-use protocols effectively.  With
> everyone in the world implementing and understanding MIME, SDP can
> interoperate much more easily if it also uses MIME.  For file transfers,
> HTTP is the universal standard for lots of good reasons.  BitTorrent
> effectively uses a proprietary file transfer protocol, thereby breaking
> interoperability with the rest of the Internet.  While BitTorrent is
> "open" in the sense that anyone can implement it, it's almost worse than
> a closed protocol because it doesn't fit in with any of the other very
> well-designed protocols out there.  It would never have a chance to
> interoperate with, say, SIP or XMPP because it just implements everything
> as it damn well pleases.
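>
> The gap shows up right at the wire level.  Any web client or server on
> the planet understands the first request below; only a BitTorrent client
> understands the second (a Python sketch with a placeholder info hash and
> a made-up peer id):
>
>     import hashlib
>
>     # Plain HTTP: ask any web server for the first 256 KB of a file.
>     http_req = (b"GET /pub/file.iso HTTP/1.1\r\n"
>                 b"Host: example.com\r\n"
>                 b"Range: bytes=0-262143\r\n"
>                 b"\r\n")
>
>     # The BitTorrent peer wire handshake: opaque to everything else.
>     info_hash = hashlib.sha1(b"stand-in for the bencoded info dict").digest()
>     handshake = (bytes([19]) + b"BitTorrent protocol"  # protocol string
>                  + bytes(8)                            # reserved flag bytes
>                  + info_hash                           # 20-byte torrent id
>                  + b"-XX0001-abcdefghijkl")            # 20-byte peer id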
>
> I say the features of BitTorrent don't come anywhere near justifying this
> because the primary reason for breaking HTTP is tit-for-tat support.
> Tit-for-tat is basically providing incentive to keep your client running.
> That's more or less fine, but that piece should not be coupled to file
> transfers.  At a protocol design level, that's just insanity.  It also
> comes at a tremendous cost.  Every web server on the planet is now an
> invalid source for a file!  Excluding the most powerful computers on the
> Internet from the distribution system doesn't seem like a sound design
> decision, particularly for the poorly conceived tit-for-tat justification
> described above.
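>
> And the incentive piece itself is simple enough that it plainly never
> needed its own transfer protocol.  The core of it amounts to something
> like this (a simplified sketch; real clients also mix in an optimistic
> unchoke):
>
>     def rechoke(received, slots=4):
>         """Choose which peers to upload to: the ones that recently
>         uploaded the most to us. received maps peer id -> bytes."""
>         return set(sorted(received, key=received.get, reverse=True)[:slots])
>
>     # Re-run every rechoke interval (say, every ten seconds):
>     unchoked = rechoke({"p1": 9000, "p2": 400, "p3": 7000, "p4": 0})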
>
> I actually have lots of other issues with BitTorrent, but the protocol
> layering issue might be the biggest.
>
> -Adam
>
>
>
>
> On 1/5/07, Peter K Chan <[EMAIL PROTECTED]> wrote:
> Adam,
>         Your assessment of BitTorrent caught my attention.
>
> How is BitTorrent "breaking interoperability with the rest of the
> Internet"?  Why is it that the unique features of BT "don't come
> anywhere near justifying" it?
>
> Peter

_______________________________________________
p2p-hackers mailing list
[email protected]
http://lists.zooko.com/mailman/listinfo/p2p-hackers
