* Jusa Saari <jargonautti at hotmail.com> [2007-06-07 23:23:48]:

> On Wed, 06 Jun 2007 19:11:27 +0100, Matthew Toseland wrote:
> 
> > Recent probe data suggests a theory:
> > 
> > Parts of the network are "rabbit holes" or "dungeons", i.e. sub-networks
> > which are only weakly connected to the larger network. These cover a small
> > chunk of the keyspace, say 0.36-0.41 (roughly, in the trace I had). A
> > request for 0.5 got stuck down the rabbit hole. The gateway node at 0.436
> > was backed off; if the request had been able to reach that node, it would
> > have been able to get much closer to where it should be. So the request
> > bounced around in the dungeon, and eventually DNFed.
> > 
> > What we need is some backtracking. At the moment backtracking only occurs
> > when we actually run out of nodes in a pocket, i.e. when it is really
> > small. We track the best location seen so far on the request (not
> > including dead ends, i.e. RNFs), and whenever this improves we reset the
> > HTL back to the maximum (10); otherwise it is decremented.
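[Aside: a rough Java sketch of that HTL rule as I read it. Everything
below is illustrative only; the class, the method names and the mutable
best-location holder are mine, not actual Fred code.

final class HtlBacktrackSketch {

    static final int MAX_HTL = 10;

    // Circular distance on the [0,1) keyspace ring.
    static double distance(double a, double b) {
        double d = Math.abs(a - b);
        return Math.min(d, 1.0 - d);
    }

    // Called each time the request arrives at a node: if this node's
    // location is closer to the target than anything seen so far (dead
    // ends / RNFs excluded by the caller), HTL is reset to the maximum;
    // otherwise it is decremented as usual.
    static int nextHtl(int htl, double target, double here, double[] bestSoFar) {
        if (distance(here, target) < distance(bestSoFar[0], target)) {
            bestSoFar[0] = here;   // new best location seen on this request
            return MAX_HTL;        // routing budget restored
        }
        return htl - 1;            // normal decrement
    }

    public static void main(String[] args) {
        double[] best = { 0.41 };  // closest edge of the 0.36-0.41 pocket
        System.out.println(nextHtl(3, 0.5, 0.38, best));   // 2: no progress
        System.out.println(nextHtl(2, 0.5, 0.436, best));  // 10: gateway reached
    }
}

End of aside.]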
> 
> No, what you need is an opennet. Having a "sparse" network with plenty of
> "leaves" only connected to the rest of the network through a single node is
> a natural feature of a darknet. The leaves are made of people who
> happened to be in #freenet-refs at the same time, exchanged refs, and
> left; the connecting nodes are those who are running automated ref
> exchange scripts and therefore get connected to the new leaves as they
> are formed. Simply backing away from the leaves is going to overload the
> single connecting node, since all the traffic to and from the leaves is
> going to go through it; and of course, if that node happens to go offline
> for any reason, the network will splinter.
> 
> A darknet is never going to work well; the network topology forces the
> majority of requests through a small number of well-connected nodes. They
> are going to get overloaded, and even if they don't, the network will be
> splintered if they are taken offline, which is easy since they have to be
> semi-public to get well connected in the first place. This makes the
> "dark" Freenet easy to take down.
> 
> So, the darknet is fragile, won't ever perform well, and is a pain in the
> ass to get into and stay in. No amount of tweaking can overcome these
> fundamental flaws inherent in the darknet approach. Only an
> enabled-by-default opennet can solve them.
> 
> Oh well, it's not my problem, since I'm not running Freenet anymore; the
> hassle of having to get new refs from #freenet-refs every few days as the
> old ones went offline was simply too much. I guess I'll check back every
> couple of months to see whether the sanity of openness has returned or
> Freenet is still in the dark.
> 

You miss the point here. No one has ever argued that a darknet built
like the current one can work... The routing algorithm relies on
small-world properties; if the darknet isn't a small world, it won't
work, full stop.
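To make that concrete, here is a toy Java sketch of the greedy step the
routing relies on. It is not Fred's router and the names are mine; it
only shows the textbook rule, which converges quickly just when link
lengths follow a small-world distribution, exactly what a pocket like
the 0.36-0.41 one breaks.

import java.util.Arrays;
import java.util.List;

final class GreedyStepSketch {

    // Circular distance on the [0,1) keyspace ring.
    static double distance(double a, double b) {
        double d = Math.abs(a - b);
        return Math.min(d, 1.0 - d);
    }

    // Greedy routing: forward to the neighbour whose location is closest
    // to the target key.
    static double nextHop(double target, List<Double> neighbours) {
        double best = neighbours.get(0);
        for (double n : neighbours)
            if (distance(n, target) < distance(best, target))
                best = n;
        return best;
    }

    public static void main(String[] args) {
        // A "dungeon": every reachable neighbour sits in the same pocket,
        // so the greedy step barely makes progress towards 0.5.
        System.out.println(nextHop(0.5, Arrays.asList(0.37, 0.39, 0.405)));
    }
}

On a genuine small-world topology that same step keeps cutting the
distance to the key every few hops; in a pocket it stalls, which is why
the topology, not the router, is the real problem.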

The darknet approach could work if people were really playing the game
and building it like a social network, because such networks are small
worlds... The problem is that the network has to reach a critical mass
beforehand, so that connecting to real-life friends becomes possible.

Implementing a workaround (opennet, backtracking, ...) is only a way of
temporarily fixing the topology at the expense of both liberty (it has
to be the default behaviour, as you pointed out) and safety (everyone
knows that the opennet approach has design caveats).

"They that can give up essential liberty to obtain a little temporary
safety deserve neither liberty nor safety"
-- Benjamin Franklin

NextGen$