----- Original Message ----- 
From: "Nick Tarleton" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Sunday, August 03, 2003 5:35 PM
Subject: Re: [Tech] freenet not suited for sharing large data


> On Sunday 03 August 2003 11:20 am, Gabriel K wrote:
> > Compare freenet to a p2p file sharing network like DirectConnect. The
> > main difference is that you don't have to upload the data you want to
> > share to the network when you use DirectConnect. You just enter the
> > network, and everyone can download your share pretty much at once. Now,
> > DirectConnect (or kazaa or whatever) is not anonymous, true. But having
> > to upload a 60GB share before it becomes available to everyone is not
> > the price a user wants to pay for anonymity. Furthermore, each node has
> > to have HD space to store someone else's share in freenet.
> But with a regular network, you have to upload it EVERY TIME somebody
> else downloads it.
> Well, other people with the file can contribute, but your net upload is
> still greater.
> There is the request-serving in Freenet to eat upload bandwidth, but you
> can cap that.

I'm not sure I follow you here.
Sure, in a regular network ONE node has to upload EVERY time a node that
doesn't have the file requests it.
And in an anonymous network, the least required is to proxy via one node, so
that means two nodes need to upload that file.
Freenet has caching of files, but so what? The files still need to be
uploaded by some node.
Please explain the difference you see here. How is the net upload greater?
If you have something like BitTorrent, for instance, it is pretty much
optimal with respect to net upload, right?
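To make the net-upload comparison concrete, here is a toy accounting sketch.
The numbers and the simple "each hop re-uploads the file once" model are my
own illustration, not anything measured from Freenet or DirectConnect:

```python
# Toy model: total bytes uploaded somewhere in the network per completed
# download. The source uploads the file once, and each anonymizing proxy
# on the path re-uploads it once more.

def net_upload_per_download(file_size_mb, proxy_hops):
    """Total network-wide upload (in MB) for one download."""
    return file_size_mb * (1 + proxy_hops)

# Direct transfer (DirectConnect-style): only the source uploads.
print(net_upload_per_download(60_000, 0))  # 60 GB crosses the wire once

# One anonymizing proxy in between: the file crosses the wire twice.
print(net_upload_per_download(60_000, 1))  # double the raw file size
```

So per single download the difference is a small constant factor, which is
the point of my question above: where does the extra net upload come from?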

> > What I'm saying is that a protocol for sharing data mutually
> > anonymously should still let a user share their data without having to
> > upload it first. Also, a transfer between the source and the receiver
> > should not use too many proxies in between. One might be enough to
> > provide anonymity. And of course, it should be encrypted so that only
> > the receiver can decrypt it.

> You can achieve this by inserting at HTL 0. It won't make it very
> retrievable though. One proxy could be compromised, and making it
> anonymous even if the proxy is compromised, if possible, would require a
> total rewrite of the Freenet protocol.

I have only read how freenet works at a theoretical level, so HTL 0 means
nothing to me, sorry :)
I'm guessing it's some kind of TTL for the request? So your request dies
after 1 hop?
Well, I'm talking about a protocol that can perform a search on all nodes,
but STILL allow the data transfer to use only one or a few proxies.
As for the number of proxies (considering they could be compromised), it's
simply trading security vs. bandwidth overhead. I'm guessing something like
2 or 3 proxies is optimal for large data.
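A quick sketch of the trade-off I mean. The model is purely illustrative
(my assumption, not anything from Freenet): each proxy is independently
compromised with probability p, the path is only deanonymized if every
proxy on it is compromised, and each proxy costs one extra file-sized
upload:

```python
# Security vs. bandwidth as a function of the number of proxies.

def path_compromise_probability(p, proxies):
    """Chance the whole path is deanonymized, assuming independent
    compromise of each proxy with probability p."""
    return p ** proxies

def bandwidth_overhead(proxies):
    """Extra file-sized uploads caused by the proxies on the path."""
    return proxies

for k in (1, 2, 3):
    risk = path_compromise_probability(0.1, k)
    print(f"{k} proxies: compromise risk {risk:.3f}, "
          f"+{bandwidth_overhead(k)} extra copies on the wire")
```

With those made-up numbers the risk drops by 10x per added proxy while the
bandwidth cost only grows linearly, which is why a small constant like 2 or
3 proxies looks like a reasonable sweet spot for large data.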

> > I think most (all?) protocols for sharing data mutually anonymously
> > are not optimised for large data. They seem to put the same weight on
> > messages for queries and such as on the data itself. I say this is
> > wrong. Users want low-overhead transfers. I think the weight of control
> > messages (queries, answers and such) should be low compared to the data
> > itself. So it should be OK for the overhead of finding the data to be
> > large (even 500% more than the shortest/least hops/bandwidth is OK), if
> > that optimises the data transfer! Think big! :) The amount of data
> > transferred is so much bigger than the amount of control messages!
> > Optimise data transfers!
> But what about small data like freesites? Latency is so much more
> important than bandwidth there.

Yes, that is true, but if you have used DirectConnect (maybe on a 10Mbit
line), most files downloaded are big. Or at least they are big taken
together: files from the same source would take the same path, so they can
be seen as one big file.
And I think that everything over 3 MB could be considered big compared to
the size of control messages.
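To put a rough number on that 3 MB claim (the control-traffic figure is my
guess at an order of magnitude, not something measured):

```python
# How much does control traffic matter next to a "big" file?

FILE_SIZE = 3 * 1024 * 1024   # 3 MB payload, the proposed "big" threshold
CONTROL_TRAFFIC = 8 * 1024    # assumed: ~8 KB of queries/answers per file

overhead = CONTROL_TRAFFIC / FILE_SIZE
print(f"control overhead: {overhead:.2%}")  # well under 1% of the payload
```

Even multiplying that assumed control traffic several-fold to pay for a
more thorough search leaves the overhead far below the cost of routing the
payload itself through extra hops.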

> Large-file bandwidth is actually pretty decent. I've downloaded a 650MB
> movie in just a few hours using a fresh transient node on a cable modem.

Well, if I were in a network where practically no one is downloading
except me, and each node has the same BW as me, then sure, I could have 20
proxies in between without losing speed. Latency would be greater, but once
the transfer has started it would be pretty much as fast as downloading
directly from the source.

Now, if each node is trying to max out its BW, then the number of proxies
has a highly noticeable effect on the BW.
Let's say every transferred file passes through every node. That would
pretty much mean that on average each node has 1/N of its BW left for its
own downloads, and the rest is used for proxying, assuming a lot of
activity in the net.
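The 1/N figure follows from trivial accounting. A sketch of the worst case
described above (node count and bandwidth are illustrative assumptions):

```python
# Worst case: N nodes, each with bandwidth B, and every transfer relayed
# through every node. Each node then splits its bandwidth between its own
# download and relaying the other N-1 nodes' transfers.

def own_download_fraction(n_nodes):
    """Fraction of a node's bandwidth left for its own download when it
    also relays one transfer for each of the other nodes."""
    return 1 / n_nodes

# e.g. with 10 busy nodes, only ~10% of your line serves your own download
print(f"{own_download_fraction(10):.0%}")
```

This is why capping the proxy count matters once the network is saturated:
the relaying cost scales with how many transfers cross each node, not with
your own demand.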

I think it's good to assume lots of activity in the network.

/Gabriel

_______________________________________________
Tech mailing list
[EMAIL PROTECTED]
http://hawk.freenetproject.org:8080/cgi-bin/mailman/listinfo/tech
