On Thursday 07 June 2007 21:23, Jusa Saari wrote:
> On Wed, 06 Jun 2007 19:11:27 +0100, Matthew Toseland wrote:
> > Recent probe data suggests a theory:
> >
> > Parts of the network are "rabbit holes" or "dungeons", i.e. sub-networks
> > which are only weakly connected to the larger network. These cover a
> > small chunk of the keyspace, say 0.36-0.41 (roughly, in the trace I had).
> > A request for 0.5 got stuck down the rabbit hole. The gateway node at
> > 0.436 was backed off; if the request had been able to reach that node, it
> > would have been able to get much closer to where it should be. So the
> > request bounced around in the dungeon, and eventually DNFed.
> >
> > What we need is some backtracking. At the moment backtracking only occurs
> > when we actually run out of nodes in a pocket i.e. when it is really
> > small. We track the best location seen so far on the request (not
> > including dead ends i.e. RNFs), and whenever this improves we reset the
> > HTL back to the maximum (10); otherwise it is decremented.
>
> No, what you need is an opennet. Having a "sparse" network with plenty of
> "leaves" only connected to the rest of a network through a single node is
> a natural feature of a darknet. The leaves are made of people who
> happened to be in #freenet-refs at the same time, exchanged refs, and
> left; the connecting nodes are those who are running automated ref
> exchange scripts and therefore get connected to the new leaves as they
> are formed. Simply backing away from the leaves is going to overload the
> single connecting node since all the traffic to and from the leaves is
> going to go through it; and of course if that node happens to go offline
> for any reason the network will splinter.
>
> A darknet is never going to work well; the network topology forces the
> majority of requests through a small number of well-connected nodes. They
> are going to get overloaded, and even if they won't, the network will be
> splintered if they are taken offline, which is easy since they have to be
> semi-public to get well connected in the first place. This makes the
> "dark" Freenet easy to take down.

We are not talking about darknet vs opennet. If you want an opennet, go use 
Tor hidden services, or I2P, or any of the numerous opennet P2P networks out 
there. In the long term, if there is no darknet, THERE IS NO FREENET. It 
doesn't matter if we have an opennet that outperforms BitTorrent over Tor 
(which incidentally isn't that hard!): it will be blocked using the systems 
that are already in place in most western countries, let alone China et al.

Now, I might concede that many of the problems of the present network are 
caused by the pseudo-opennet growth of the present "darknet". But we have 
excellent reasons to believe that if it were in fact a true darknet, it would 
be a small world network and therefore navigable.
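For what it's worth, "small world and therefore navigable" can be illustrated with a toy simulation. This is entirely a sketch, not Freenet's actual routing code: it builds a ring of nodes where each has its two ring neighbours plus one random symmetric shortcut (a simplification; a real small-world model would bias shortcuts by distance), then routes greedily by keyspace distance. All names here (`build_small_world`, `greedy_route`) are hypothetical.

```python
import random

def build_small_world(n: int, seed: int = 1):
    """Toy darknet: ring of n nodes, each linked to its two ring
    neighbours plus one random long-range shortcut."""
    rng = random.Random(seed)
    links = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    for i in range(n):
        j = rng.randrange(n)
        if j != i:
            links[i].add(j)
            links[j].add(i)  # links are symmetric, like darknet connections
    return links

def ring_dist(a: int, b: int, n: int) -> int:
    """Distance between node indices on the circular keyspace."""
    d = abs(a - b) % n
    return min(d, n - d)

def greedy_route(links, n, src, dst):
    """Always forward to the neighbour closest to the destination.
    Returns the hop count, or None if greedy routing gets stuck."""
    cur, hops = src, 0
    while cur != dst:
        nxt = min(links[cur], key=lambda v: ring_dist(v, dst, n))
        if ring_dist(nxt, dst, n) >= ring_dist(cur, dst, n):
            return None  # local minimum: no neighbour is closer
        cur, hops = nxt, hops + 1
    return hops
```

On such a network greedy routing always reaches the destination (a ring neighbour is always strictly closer), and the shortcuts typically cut the path well below the half-ring worst case.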

That's not to say that we shouldn't implement opennet. Opennet is a great way 
to get people onto the network. But it is not a long term solution. And it 
will very likely suffer from all sorts of serious problems, just as 0.5 did. 
Backtracking will help on a true darknet, because it is likely to be an 
imperfect network, with sub-networks constantly joining up. It will help on a 
hybrid network, because not all of the darknet sub-networks will be strongly 
connected to the opennet. And it will probably help even on a pure opennet, 
because real world stresses mean that the topology on a pure opennet will not 
be perfect.
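The per-request bookkeeping proposed at the top of the thread (track the best location seen so far, reset HTL to the maximum on improvement, decrement otherwise) might be sketched as follows. This is a hypothetical illustration, not the actual node code; `update_htl` and `circ_dist` are made-up names, and "closer" means distance on the circular [0, 1) keyspace.

```python
MAX_HTL = 10  # the maximum hops-to-live mentioned above

def circ_dist(a: float, b: float) -> float:
    """Distance between two locations on the circular [0, 1) keyspace."""
    d = abs(a - b)
    return min(d, 1.0 - d)

def update_htl(htl: int, best_so_far: float, candidate: float, target: float):
    """Proposed rule: if the request reaches a node closer to the target
    than any node seen so far, reset HTL to the maximum; otherwise
    decrement it.  Returns (new_htl, new_best)."""
    if circ_dist(candidate, target) < circ_dist(best_so_far, target):
        return MAX_HTL, candidate
    return htl - 1, best_so_far
```

Using the trace numbers from the quote: a request for 0.5 stuck around 0.41 that finally reaches the gateway at 0.436 has improved its best distance (0.064 vs 0.09), so its HTL resets to 10 instead of dying in the dungeon.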

It is much easier to debug many of these routing and load management problems 
on a darknet, even a pseudo-opennet darknet, because we have much slower 
connection churn than on an opennet. Once opennet is implemented, while some 
problems will be reduced, many other problems will spring up in their place: 
problems which we never managed to adequately solve on the 0.5 opennet. For 
example, you can get onto (and up to speed on) the 0.7 pseudo-darknet a good 
deal faster than onto the 0.5 opennet. But we will implement opennet - after 
we have at least tried to sort out some of the current problems, fixed all 
the easy stuff, and demonstrated experimentally and observationally that 
opennet is the best way forward.

At least, that's the plan as I understand it.
>
> So, the darknet is fragile, won't ever perform well, and is a pain in the
> ass to get into and stay in. No amount of tweaking can get over these
> fundamental inherent flaws in a darknet approach. Only an
> enabled-by-default opennet can solve them.

Opennet will never be enabled by default. We will ask the user whether they 
want opennet.