Lucas Nussbaum wrote:

> I need to transfer one large file (> 10 GB) over a fast network
> (gigabit), to a set of 40-50 nodes. This transfer is part of a parallel
> installation process, so it happens "almost" at the same time on all the
> nodes, causing a massive bottleneck.
> 
> I currently use rsync to transfer that file, which doesn't scale well,
> obviously. So I'm looking for something better, P2P-based.
> 
> My main problem with BitTorrent is that I don't want to leave a tracker
> and a seeder running on the server. Ideally, the first client would
> start the server part. And the server part would exit automatically
> after being idle for, say, 5 minutes. On the client side, each client
> would continue to seed until a timeout expires (say, 2 minutes) or
> there are no peers left to seed to.
> 
> Ideally, the interface would be as simple as rsync's, that is:
>    client$ rsync server:file destfile
> 
> Does anyone know of something I could use to do this?
> 
> Or is there a simple, high-performance BitTorrent client or library I
> could use and script around?

I second the suggestion of multicast, but if that doesn't work, you 
should probably start with libtorrent. You might be able to use PEX or 
DHT instead of a tracker, and the remaining changes (auto-starting the 
seeder and the idle timeouts) should be pretty easy to script around.
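
As a rough, untested sketch of the client side with the rasterbar 
libtorrent Python bindings: it joins the DHT instead of hitting a 
tracker, downloads, then keeps seeding until its upload has been idle 
for 2 minutes. The torrent file name, ports, and timeouts are 
placeholders, not anything standard.

    import time
    import libtorrent as lt

    ses = lt.session()
    ses.listen_on(6881, 6891)  # any free port in this range
    ses.add_dht_router("router.bittorrent.com", 6881)
    ses.start_dht()

    # "bigfile.torrent" is a placeholder; you'd generate the metadata
    # once on the server (e.g. with lt.create_torrent) and copy it out.
    info = lt.torrent_info("bigfile.torrent")
    h = ses.add_torrent({"ti": info, "save_path": "."})

    # Download phase: block until we hold the complete file.
    while not h.is_seed():
        time.sleep(1)

    # Seed phase: exit once nobody has fetched anything from us
    # for 2 minutes.
    idle_since = time.time()
    while time.time() - idle_since < 120:
        if h.status().upload_payload_rate > 0:
            idle_since = time.time()
        time.sleep(1)

The server side would be the same loop minus the download phase, with 
the 5-minute idle timeout instead.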

Wes Felter - [EMAIL PROTECTED]
_______________________________________________
p2p-hackers mailing list
[email protected]
http://lists.zooko.com/mailman/listinfo/p2p-hackers
