>
> Also, the reality may be even worse: Even without attackers, churn in the
> network causes "natural" degeneration; we randomise periodically to try to
> correct this.
>
> Note that we are probably immune to their third attack due to using a
> commit/reveal on swapping. However the first two are real enough and can't
> be beaten by commit/reveal.
>
> We could implement this tomorrow. What exactly do we need to determine
> before doing so? We could validate the implementation, and compare it to
> current swapping, in a many-nodes-one-VM simulator (we can simulate approx
> 100 to 500 "real" nodes on one high end system).
>

That's interesting. What do you consider high end? If it will actually
help, I'll rent some cloud hardware. I'm not sure about supporting 10,000
nodes, but maybe 2 or 3 thousand from me.

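In case it helps anyone reading along: the commit/reveal on swapping mentioned above is just a standard hash commitment - each side commits to its value before either reveals, so neither can pick its "random" input after seeing the other's. A minimal sketch; the function names, the SHA-256 choice, and the encoding are mine, not what Freenet actually sends on the wire:

```python
import hashlib
import os

def commit(location: float, nonce: bytes) -> bytes:
    """Commit to a location without revealing it: H(location || nonce)."""
    return hashlib.sha256(repr(location).encode() + nonce).digest()

def verify(commitment: bytes, location: float, nonce: bytes) -> bool:
    """Check a revealed (location, nonce) pair against the commitment."""
    return commit(location, nonce) == commitment

# Both nodes exchange commitments first, then reveal; a node that lies
# about its committed value fails verification.
nonce_a, nonce_b = os.urandom(16), os.urandom(16)
loc_a, loc_b = 0.21, 0.77
c_a, c_b = commit(loc_a, nonce_a), commit(loc_b, nonce_b)
assert verify(c_a, loc_a, nonce_a) and verify(c_b, loc_b, nonce_b)
assert not verify(c_a, 0.99, nonce_a)  # lying about the location fails
```

This stops an attacker from choosing his contribution after seeing ours, which is why it blocks their third attack but not the first two - those don't depend on rigging the randomness.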

>
> The old probe requests were intended to find the "successor" of a
> particular location/node, and often saw huge gaps. However, the gaps were
> often so large that I assumed it was a bug in the implementation, or a
> particularly perverse topology glitch combined with overload... if the
> latter, that's a serious problem in itself...
>
> The paper talks about not having peers-of-peers. We do have locations for
> peers-of-peers for FOAF routing purposes. I wonder if this is important...
>
> Average hops is good, but their max HTL is nearly 200. This is kind of
> worrying! Fortunately we can afford some failure thanks to key-level
> redundancy...
>
> The new algorithm has impressive performance on big networks: swapping
> doesn't scale, and LMC works a lot better with the limited number of
> swaps. Having said that, they only show 10k nodes in this paper - the
> slides from the related talk (
> http://www.icsi.berkeley.edu/icsi/events/2013/03/strufe-dark-social-networking)
> suggest it works well on larger networks too; clearly swapping is running
> into scalability problems even at 10k nodes, at least with 6000|V| swaps.
>


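(Aside, for anyone who hasn't read the swapping code: the baseline these results compare against is, as I understand it, the Metropolis-style location swap from Sandberg's work - two nodes swap locations if that shrinks the product of distances to their neighbours, and otherwise swap with probability before/after. A toy sketch, all names mine:)

```python
import random

def circ_dist(a: float, b: float) -> float:
    """Distance between two locations on the unit circle [0, 1)."""
    d = abs(a - b)
    return min(d, 1.0 - d)

def should_swap(loc_u, neighbours_u, loc_v, neighbours_v, rng=random.random):
    """Metropolis-style swap decision: accept if the product of distances
    to neighbours shrinks, otherwise accept with probability before/after."""
    def product(loc, neighbours):
        p = 1.0
        for n in neighbours:
            p *= circ_dist(loc, n)
        return p

    before = product(loc_u, neighbours_u) * product(loc_v, neighbours_v)
    after = product(loc_v, neighbours_u) * product(loc_u, neighbours_v)
    if after <= before:
        return True          # swap strictly improves (or ties): accept
    return rng() < before / after   # otherwise accept probabilistically
```

The point of the comparison in the paper is that each accepted swap needs a round trip and the chain converges slowly, which is presumably why it falls over at 10k nodes.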
> >
> > On Thu, May 2, 2013 at 6:42 PM, Matthew Toseland
> > <[email protected]> wrote:
> >
> > > On Tuesday 05 Mar 2013 15:57:58 Matthew Toseland wrote:
> > > > A paper accepted by INFOCOM 2013 claims to:
> > > > 1) Show that our view of Kleinberg is unrealistic because we do not
> > > > have the sideways lattice links - the closest connection to the
> > > > target isn't always closer to the target than we've been so far.
> > > > 2) Prove that we can't have polylog routing, and show that perverse
> > > > cases where the best route will take us further away are increasingly
> > > > common as networks grow, and
> > > > 3) Propose a new algorithm that is provably polylog.
> > > >
> > > > http://www.p2p.tu-darmstadt.de/publications/details/?no_cache=1&tx_bibtex_pi1[pub_id]=TUD-CS-2013-0036
> > > > ("A Contribution to Analyzing and Enhancing Darknet Routing", Roos &
> > > > Strufe)
> > > >
> > > > Performance simulations show it is similar to ours or better
> > > > depending on the properties of the network, via a parameter C, "the
> > > > maximal distance to the closest neighbor over all nodes", i.e. if our
> > > > closest links are close enough, our current algorithm should work,
> > > > but if not, their version is significantly faster.
> > > >
> > > > The new algorithm changes backtracking and keeps a list of marked
> > > > nodes - all nodes visited AFAICS, or at least nodes where it
> > > > backtracked. Obviously this has security implications, but on darknet
> > > > this is less important than on opennet, and we really should have a
> > > > separate tunnel setup system for the very paranoid (e.g. based on
> > > > Pisces).
> > > >
> > > > Having said that:
> > > > - All the topologies simulated showed acceptable HTL for a 100,000
> > > > node network, even using our current simplistic algorithm. Granted
> > > > that's somewhat idealized...
> > > > - We can use redundancy to some degree to avoid perverse parts of
> > > > the keyspace breaking important functionality. I've suspected that
> > > > "perverse parts of the keyspace" exist for some time, and that seems
> > > > to be the paper's main concern; and it does not solve the problem
> > > > either, it merely limits it.
> > > > - Right now hardly anyone uses darknet, and Michael Grube still
> > > > hasn't finished dealing with Pitch Black.
> > > >
> > > > His homepage also has some rather vague ideas under "open theses" on
> > > > censorship attacks on opennet via flooding; I suspect this was
> > > > written by his supervisor, who may not have investigated the matter
> > > > fully. IMHO flooding opennet is a rather expensive attack; it's
> > > > probably much easier to MAST your target and take them offline
> > > > IRL! :)
> > > > http://www.p2p.tu-darmstadt.de/people/stefanie-roos/
> > > >
> > > And they provide a new swapping algorithm too, which is simpler to
> > > implement, only uses local information, and is faster and far more
> > > attack resistant:
> > >
> > > http://www.ukp.tu-darmstadt.de/fileadmin/user_upload/Group_P2P/share/publications/Attack_Resistant_Network_Embeddings_for_Darknets.pdf
> > >
> > > Now if only we really had a 10,000 node darknet to run these algorithms
> > > on! Making it easy to build a darknet has to be a high priority.
> > >
> >
>
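PS: for concreteness, my reading of the marked-nodes change is that it amounts to greedy distance routing that never revisits a marked node and backs up along the path when stuck. A toy sketch only - the names are mine and this is not their exact algorithm:

```python
def circ_dist(a: float, b: float) -> float:
    """Distance between two locations on the unit circle [0, 1)."""
    d = abs(a - b)
    return min(d, 1.0 - d)

def route(graph, locations, start, target, max_hops=200):
    """Greedy routing with marked-node backtracking: greedily step to the
    unmarked neighbour closest to the target; at a dead end, pop back along
    the path (the dead-end node stays marked, so we never retry it)."""
    marked = {start}
    path = [start]
    hops = 0
    while path and hops < max_hops:
        current = path[-1]
        if current == target:
            return path
        candidates = [n for n in graph[current] if n not in marked]
        if not candidates:
            path.pop()  # dead end: backtrack one hop
        else:
            nxt = min(candidates,
                      key=lambda n: circ_dist(locations[n], locations[target]))
            marked.add(nxt)
            path.append(nxt)
        hops += 1
    return None  # gave up
```

The security concern above is visible here: `marked` is exactly the set of visited nodes, so whoever carries it learns part of the request's trajectory.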
_______________________________________________
Devl mailing list
[email protected]
https://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl
