On Tuesday 06 January 2009 14:02, Michael Rogers wrote:
> Matthew Toseland wrote:
> > Only because they are over-reliant on both ubernodes and queueing, and
> > are rubbish for anything that isn't reasonably popular (e.g. look at
> > BitTorrent - for less popular files it is usually hard to find a seed).
> > They are therefore more able to deal with low uptime nodes.
> 
> BitTorrent and Gnutella are currently more useful for rare content than
> Freenet, because they have more users. To attract more users you have to
> avoid pissing them off, and I believe a good way to do that is to behave
> like a normal app even though you'd prefer to run 24/7.

I'll defer to nextgens' greater experience here, see his mail.
> 
> > For Freenet, low uptime is a big deal. Churn is a big deal.
> 
> I understand that, but retaining users is also a big deal. If you manage
> to attract a million users and 1% of them form a reliable, stable
> network, with the rest essentially acting as clients, then you're better
> off than you would be with a thousand users all of whom run stable
> nodes. Apart from anything else you'll have more content.

No you're not. You just end up with a million clients and 10,000 overloaded 
nodes trying to serve all the clients very slowly. Much as happens with Tor.
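A back-of-envelope check on that 1% scenario (just the arithmetic implied by 
the numbers above, nothing more):

```python
# Hypothetical numbers from the paragraph above: a million users,
# of whom 1% run reliable, stable nodes.
users = 1_000_000
stable = users // 100        # the 1% forming the "reliable" core
clients = users - stable     # everyone else acting as a client
print(clients / stable)      # each stable node must serve ~99 clients
```
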
> 
> That kind of user behaviour might not suit Freenet's design particularly
> well, but unfortunately that's the way P2P users behave. You can either
> design for reality or keep trying to force the users to behave
> differently, which will drive a lot of them away.

Who is forcing them? If they want to use Freenet, it will run when the 
computer is online, except when they shut it down. Nobody is forcing them to 
run Freenet.

Designing for reality is more of a long-term goal. I agree 100% that we should 
make the network more tolerant of low uptimes. I have proposed various means 
to do this. But it's a huge problem. It will not be solved quickly.

To make a low-uptime darknet work reasonably well, IMHO we would need passive 
requests, long-term requests, probably Bloom filters, and maybe rendezvous 
tunnels (to enable Bloom filters to be used more widely). This is a lot of 
work and complexity, and everyone, especially vive, agrees it's not something 
we want to do immediately. There have not even been any basic simulations of 
passive requests yet, and IMHO they may have either a positive or a negative 
effect on routing, depending on various parameters, especially given skewed 
request distributions (vive's results show that Freenet does better with 
skewed distributions). On the one hand, any form of passive or persistent 
request will make popular data easier to find, and will lower latency, by 
building a web of subscribers. On the other hand, much of the point of 
passive requests is to cut polling overhead: right now, if you request the 
same data repeatedly, you will eventually find it even if it's not where it's 
supposed to be, whereas passive requests would quench that polling and route 
only to where the data should be, dynamically updating that location as the 
network changes.
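Roughly, the bookkeeping a node would need for passive requests might look 
like the sketch below. All names here are invented for illustration; as noted 
above, the real design hasn't even been simulated yet:

```python
# Hypothetical sketch of passive-request state on a single node.
# A peer registers a standing interest in a key instead of polling;
# when the data finally arrives (e.g. the holder comes back online),
# the node pushes it to every subscriber and drops the entries.

class PassiveRequestTable:
    def __init__(self):
        # key -> set of peers waiting for that key
        self.subscribers = {}

    def subscribe(self, key, peer):
        """Record a standing interest, quenching repeated polling."""
        self.subscribers.setdefault(key, set()).add(peer)

    def on_data_arrived(self, key, data):
        """Deliver to all waiting peers and clear the subscriptions."""
        waiting = self.subscribers.pop(key, set())
        return [(peer, data) for peer in waiting]


table = PassiveRequestTable()
table.subscribe("CHK@abc", "peer-1")
table.subscribe("CHK@abc", "peer-2")
deliveries = table.on_data_arrived("CHK@abc", b"block")
# Both waiting peers are served by one arrival, and the key's
# subscription state is gone - no further polling traffic needed.
```
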

But right now, low uptime is BAD. If we tell users to keep their nodes 
running as much as possible, with a service, all they have to do is not 
switch their computer off. They don't even need to log in; they just have to 
push the power button. Arguably the prevalence of laptops means that the case 
where we have multiple nodes running under different users will be relatively 
rare, but putting our hope in individual even-lower-uptime laptops is hardly 
solving the problem.
> 
> > Having potentially 
> > a different set of darknet peers for every user on the same computer is 
> > insane (as well as broadcasting to the world who is logged in).
> 
> Why is it insane for two users of the same computer to have different
> sets of friends? 

Performance? The fact that they probably would like to peer with each other, 
but can't, short of us doing a lot more work, and that there is a massive 
amount of unnecessary duplication? The fact that they would have to have two 
completely separate installs - we could not put Freenet into Program Files, 
because we need each user to have their own jars and separate updating in 
order to avoid privilege escalation caused by world-writable executables.

> I see your point about broadcasting who's logged in, 
> but that's an unavoidable aspect of darknets-over-public-networks: they
> reveal (a subset of) the social network to eavesdroppers.
> 
> > - Data reachability: The node with the data we want may simply be offline
> > when we are online, and then we'll never be able to find the data, at
> > least not unless somebody else requests it while we are offline, and
> > moves it close enough to us that our next request works.
> 
> The BitTorrent/Gnutella solution to this problem is massive replication:
> it doesn't matter if there are 1,000 offline users with the data you
> want as long as there are a few online users. The best way to ensure
> that is to attract a lot of users.

Freenet also uses massive replication. Freenet is a cache. That's the big 
difference from BitTorrent. An unpopular file on BitTorrent will have zero 
seeds. An unpopular file on Freenet will probably be findable, even if the 
node has to rerequest a few (thousand!) times. Freenet caches stuff even if 
nobody is actively inserting it.
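That cache-on-path behaviour can be sketched roughly as below. This is a toy 
model with invented names, not Freenet's actual store logic:

```python
# Toy model of caching along the request path: every node that routes
# a successful reply keeps a copy, so requested data gets replicated
# even though nobody re-inserts it. Not real Freenet code.

def fetch(key, path, store_of):
    """path: ordered list of nodes the request traverses.
    store_of: maps each node name to its dict of cached key/data."""
    for i, node in enumerate(path):
        if key in store_of[node]:
            data = store_of[node][key]
            # On the way back, every upstream node caches a copy.
            for upstream in path[:i]:
                store_of[upstream][key] = data
            return data
    return None  # unreachable right now; the holder may be offline


stores = {"A": {}, "B": {}, "C": {"K": b"iso-block"}}
fetch("K", ["A", "B", "C"], stores)
# Now A and B also hold copies, so a repeat request succeeds at the
# first hop - this is why rerequesting eventually finds the data.
```
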
> 
> > - Download times: Because Freenet is relatively high overhead, routing
> > requests for many hops rather than contacting the source directly, and
> > because it is designed for security and tends to avoid bursts and
> > ubernodes, transfer rates are relatively low, and the proportion of
> > incoming bandwidth that is used to satisfy local requests is also
> > relatively low (less than 100%). This means that to fetch a big file (for
> > example an ISO) can take days, even if it isn't exceptionally unpopular.
> > If the node is only online a small fraction of the time, days become
> > weeks...
> 
> Fortunately P2P users mysteriously become capable of keeping their nodes
> online 24/7 when they have a download running. ;-)

IMHO most users on a mature network will always have a download running. But 
they will still switch off their computers, especially if they are laptops. 
And on multi-user systems, they will switch to other users.
> 
> Cheers,
> Michael