> Regarding Bittorrent: 
> BitTorrent requires a separate instance and a separate port for every 
> file that you are sharing.  When some people will be sharing 
> hundreds of files, this is simply not a workable approach. 

Yes, this is a big drawback in BitTorrent's current implementation.
It could be fixed, but someone needs to step up and do it.  (A single
BitTorrent peer process, listening on one port and managing multiple
torrents, would be suitable to leave running all the time.)

> BitTorrent is optimized towards dealing with a small number of large 
> files under very high (initial) demand.  Our needs are a 
> little different: we are dealing with a large number of much smaller 
> files under moderate demand and without the initial spike in 
> demand. 

The suggestion of doing this on a per-ebuild basis for huge things like
openoffice, mozilla, and java is a good one.

> P2P in general: 
> The content of the network needs to be tightly controlled. We are 
> dealing with source code here and the consequences of someone being 
> able to inject malicious code or malicious files into the network 
> would be unthinkable. Therefore ALL files on the network MUST be 
> vetted by a central server, and ONLY files allowed by the central 
> server would be allowed on the network. All files transfered over 
> the network MUST be checked to make sure that they match the 
> checksums of the authorized files. 

This is not a problem for BitTorrent, MNet, Freenet, and several others:
blocks of data are identified (and verified) by their SHA-1 hash in
those systems.  BitTorrent is only a p2p content distribution agent; it
doesn't allow storage within the network, and its only purpose is to
help ease the load on the hosts serving the data.  (i.e., using
BitTorrent won't guarantee you faster download speeds, but it is kind
to the bandwidth budgets of the mirrors.)  A rogue BitTorrent peer can
-not- inject bad data into your download; it can only waste your
bandwidth.
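To make that concrete, here is a minimal sketch (Python, with tiny
made-up pieces; real torrents use piece sizes like 256 KiB) of the
check a BitTorrent client performs: each piece received from a peer is
hashed and compared against the SHA-1 digests carried in the trusted
.torrent metainfo, so tampered data is discarded instead of written to
disk.

```python
import hashlib

# Made-up file contents and a toy piece size, for illustration only.
PIECE_SIZE = 4
data = b"gentoo-distfile-contents"
pieces = [data[i:i + PIECE_SIZE] for i in range(0, len(data), PIECE_SIZE)]

# In a real client these digests come from the trusted .torrent file,
# not from the peers themselves.
expected = [hashlib.sha1(p).digest() for p in pieces]

def verify_piece(index: int, payload: bytes) -> bool:
    """Accept a piece from a peer only if its SHA-1 matches the metainfo."""
    return hashlib.sha1(payload).digest() == expected[index]
```

A genuine piece passes the check while any altered payload fails the
comparison, which is why a rogue peer can waste your bandwidth but not
corrupt your download.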

BitTorrent 3.2.1 only supports a single central tracker; it does not
allow coordination between multiple redundant trackers.  If multiple
trackers are run for a single file, the p2p network fragments into
sub-networks, which makes it less effective overall because two peers
with the same file aren't exposed to the entire audience of peers
wanting parts of that file.

For Gentoo to use BitTorrent, this means that a single tracker would
be needed for the files hosted as torrents, and that a few of the
Gentoo mirrors should run BitTorrent peers seeding those files, so
that there are always some good, complete data sources other than
end-user systems.
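The single-tracker-per-torrent constraint is visible in the metainfo
format itself: a .torrent file is a bencoded dictionary carrying
exactly one announce URL plus the info dictionary whose SHA-1 hash
identifies the torrent.  A rough sketch with a minimal bencoder; the
tracker URL and file details here are invented, and the "pieces" and
"length" fields are left as placeholders.

```python
import hashlib

def bencode(obj) -> bytes:
    """Minimal bencoder: ints, strings, lists, and dicts (sorted keys)."""
    if isinstance(obj, int):
        return b"i%de" % obj
    if isinstance(obj, bytes):
        return b"%d:%s" % (len(obj), obj)
    if isinstance(obj, str):
        return bencode(obj.encode())
    if isinstance(obj, list):
        return b"l" + b"".join(bencode(x) for x in obj) + b"e"
    if isinstance(obj, dict):
        # dictionary keys must be byte strings, emitted in sorted order
        items = sorted((k.encode() if isinstance(k, str) else k, v)
                       for k, v in obj.items())
        return b"d" + b"".join(bencode(k) + bencode(v) for k, v in items) + b"e"
    raise TypeError("cannot bencode %r" % type(obj))

# Hypothetical metainfo for one distfile; note there is exactly one
# "announce" URL: the single central tracker discussed above.
meta = {
    "announce": "http://tracker.gentoo.example/announce",
    "info": {
        "name": "openoffice-1.0.2.tar.gz",
        "piece length": 262144,
        "pieces": b"",   # concatenated SHA-1 piece digests go here
        "length": 0,     # real file size goes here
    },
}
torrent = bencode(meta)
info_hash = hashlib.sha1(bencode(meta["info"])).hexdigest()
```

The info_hash derived at the end is what peers and the tracker use to
agree on which torrent they are talking about.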

> As far as I know there are no p2p networks that allow this much 
> central control, the trend in p2p networks is away from central 
> control because of liability issues.

Not true.  p2p simply means peer-to-peer.  BitTorrent does not have
that goal; it is meant solely for a practical application: hosting
large content using less centralized bandwidth.  It uses a central
tracker only to kick things off for each peer.
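For reference, that kick-off step is just an HTTP GET against the
tracker's announce URL.  A rough sketch of building the request; all
values here are invented, and in a real client the info_hash is the
SHA-1 of the torrent's bencoded info dictionary.

```python
import hashlib
from urllib.parse import urlencode

# Hypothetical values; a real client reads these from the .torrent file.
announce = "http://tracker.gentoo.example/announce"
info_hash = hashlib.sha1(b"bencoded info dictionary").digest()  # 20 raw bytes

params = {
    "info_hash": info_hash,              # which torrent we want peers for
    "peer_id": b"-XX0001-0123456789ab",  # 20-byte id for this client
    "port": 6881,                        # where we accept peer connections
    "uploaded": 0,
    "downloaded": 0,
    "left": 104857600,                   # bytes we still need
}
url = announce + "?" + urlencode(params)
# The tracker's reply is a bencoded dict listing other peers' IP:port
# pairs; after that, all actual data transfer happens peer-to-peer.
```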

You're applying that meaning to p2p in general when you really mean a
specific subset of p2p networks with that goal (MNet, Freenet,
Gnutella, Kazaa, et al.).

> I am in the process of writing a p2p system specifically for gentoo 
> that will have the necessary controls in place to make it safe for 
> distributing source code.  It will be optional and controlled by 
> the FETCHCOMMAND= setting in make.conf 
> 
> I also want to create a network that is 100% legal content so that 
> when the RIAA or MPAA goes on a rampage we will be unaffected. 
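For context, FETCHCOMMAND in /etc/make.conf is a shell command template
that Portage runs to fetch each distfile, so swapping in an alternative
fetcher is a one-line change.  A sketch; the p2p fetcher name below is
purely hypothetical:

```shell
# /etc/make.conf -- Portage substitutes ${URI} and ${DISTDIR} when fetching

# stock wget-based fetcher:
FETCHCOMMAND="/usr/bin/wget -t 5 --passive-ftp \${URI} -P \${DISTDIR}"

# hypothetical p2p-aware replacement ("gentoo-p2p-fetch" is an assumed name):
#FETCHCOMMAND="/usr/local/bin/gentoo-p2p-fetch \${URI} \${DISTDIR}"
```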

BitTorrent is not a network; it's an application.  It has no
association with the content people use it on.  If someone uses it to
host something they don't have the rights to, that is no different
from posting it on a web server.  BitTorrent cannot be held liable for
that any more than MS IIS, Apache, ftpd, ftp, Mozilla, IE, etc. could
be.


--
[EMAIL PROTECTED] mailing list
