vive wrote:
> That doesn't follow just because it's probabilistic. It depends on the
> simulations mostly not using the deterministic component of the
> Kleinberg model, the nearest-neighbor connections, which is unreasonable
> to assume for Freenet. So greedy routing isn't able to strictly
> approach its target and has to deal with dead ends.

Exactly, even before randomizing the positions there are dead ends.
That's all I was trying to say - even a perfect Kleinberg graph isn't
perfect. ;-)
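
For the record, here's roughly the toy setup I have in mind when I say
that - a quick Python sketch of my own, not vive's simulator, with
illustrative names and parameters. It builds a ring whose links are
drawn purely with probability proportional to 1/d (no deterministic
nearest-neighbour links) and routes greedily with an HTL limit. The
exact numbers won't match vive's, but the qualitative point should show
up: a few routes die at dead ends even with ideal locations.

import random

def ring_distance(a, b, n):
    """Shortest distance between two positions on a ring of n locations."""
    d = abs(a - b)
    return min(d, n - d)

def build_kleinberg_ring(n, links_per_node, seed=None):
    """Purely probabilistic 1-D Kleinberg-style ring: every node draws
    links_per_node undirected links, picking each endpoint with probability
    proportional to 1/d. No guaranteed nearest-neighbour links, which is
    exactly why greedy routing can hit dead ends."""
    rng = random.Random(seed)
    neighbours = {i: set() for i in range(n)}
    for u in range(n):
        candidates = [v for v in range(n) if v != u]
        weights = [1.0 / ring_distance(u, v, n) for v in candidates]
        for v in rng.choices(candidates, weights=weights, k=links_per_node):
            neighbours[u].add(v)
            neighbours[v].add(u)
    return neighbours

def greedy_route(neighbours, locations, n, source, target, htl=100):
    """Strict greedy routing: always forward to the neighbour whose location
    is closest to the target's; give up at a dead end or when HTL runs out.
    Returns (success, hops)."""
    current, hops = source, 0
    while hops < htl:
        if current == target:
            return True, hops
        best = min(neighbours[current],
                   key=lambda v: ring_distance(locations[v], locations[target], n))
        if (ring_distance(locations[best], locations[target], n)
                >= ring_distance(locations[current], locations[target], n)):
            return False, hops   # dead end: no neighbour is closer than we are
        current, hops = best, hops + 1
    return current == target, hops

if __name__ == "__main__":
    n, htl = 1000, 100
    g = build_kleinberg_ring(n, links_per_node=5, seed=1)   # avg degree ~10
    locations = list(range(n))    # "ideal model": location == ring position
    rng = random.Random(2)
    results = [greedy_route(g, locations, n, rng.randrange(n), rng.randrange(n), htl)
               for _ in range(10000)]
    lengths = [hops for ok, hops in results if ok]
    print("success rate = %.3f, avg length = %.2f"
          % (sum(ok for ok, _ in results) / len(results),
             sum(lengths) / max(1, len(lengths))))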

> Let's not get stuck on definitions. :-) My point still holds: you don't
> get back to the same energy level in a reasonable amount of time. A good
> approximation is what can be hoped for with the swapping algorithm.

Agreed, we will always be dealing with an approximation. The question
is, if we use the current locations to calculate the connection
probabilities when replacing a node, will the resulting graph be a
better or worse approximation?

Here's a thought experiment: take a Kleinberg network and shuffle the
locations. Then replace the nodes one by one with new nodes, using the
current locations to calculate the connection probabilities. When all
the nodes have been replaced, don't you have a Kleinberg network again?
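
In pseudo-Python, the thought experiment looks something like this.
It's only a sketch under my own conventions (replace_node and
churn_everything are made-up names, links are drawn with probability
proportional to 1/d, and real churn would of course also let the
surviving neighbours re-draw the links they lose):

import random

def ring_distance(a, b, n):
    d = abs(a - b)
    return min(d, n - d)

def replace_node(neighbours, locations, u, links_per_node, rng):
    """Churn: the node in slot u leaves and a newcomer takes over the slot,
    keeping the slot's *current* location and drawing fresh links with
    probability proportional to 1/d over the current locations."""
    n = len(locations)
    for v in list(neighbours[u]):      # drop the departing node's edges
        neighbours[v].discard(u)
    neighbours[u] = set()
    candidates = [v for v in range(n) if v != u]
    weights = [1.0 / ring_distance(locations[u], locations[v], n)
               for v in candidates]
    for v in rng.choices(candidates, weights=weights, k=links_per_node):
        neighbours[u].add(v)
        neighbours[v].add(u)

def churn_everything(neighbours, locations, links_per_node, seed=None):
    """Shuffle the locations, then replace every node once. Afterwards every
    surviving link was drawn with probability proportional to 1/d over the
    current locations, so the result is (modulo links dropped when their
    other endpoint churned later) another draw from the model."""
    rng = random.Random(seed)
    rng.shuffle(locations)
    order = list(range(len(locations)))
    rng.shuffle(order)
    for u in order:
        replace_node(neighbours, locations, u, links_per_node, rng)

If my reasoning is right, greedy routing against the shuffled locations
after churn_everything should succeed about as often as in the static
case - that's exactly the claim I'd like to test.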

> You're right, we can use the inversely proportional distribution to add
> the edges. But let's say we have shuffled the positions randomly. Do you
> think this will help us get a navigable network when we have added
> enough nodes? I claim this is not the same thing.

I see your point: if we use the shuffled locations it's equivalent to
choosing the neighbours uniformly at random, which will make the network
less navigable using the old locations. But at the same time, we're
adding links that will be navigable using the new locations. Eventually
the old network will be entirely replaced and the old locations will
become irrelevant - the network will become navigable again through
churn, even without swapping.

> To clarify what I mean, I performed some experiments on this in the
> "ideal model", no position randomizing. Edges were selected as in the
> Kleinberg model, evaluation was over 100000 routes with HTL=100.
> 
> 1) Starting with 1000 nodes, average degree=10
>    Result: N=1000,succrate=0.996,avglen=8.261
> 
> 2) Starting with 1000 nodes, average degree=10. Then we add 1000 nodes to
>    random places on the circle (growing it to 2000 positions) with avgdeg=10
>    Result: N=2000,succrate=0.557,avglen=13.455
> 
> 3) Starting with 2000 nodes, average degree=10
>    Result: N=2000,succrate=0.993,avglen=10.103 

Those are interesting results, but a single influx of 1000 nodes is not
the same as continuously replacing nodes. There are disproportionately
many connections between the older nodes, because their connection
probabilities were calculated when the network was smaller. However, if
you replace nodes one at a time, so that the size of the network stays
constant, the new node's connection probabilities will be identical to
those of the node it replaces. So it seems to me that you won't deviate
from the model; you'll just move from one instantiation of the model to
another.
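
To make that concrete, the distribution a joining node samples its
links from is a pure function of the current set of locations and the
ring size. A trivial snippet, same 1/d convention and made-up name as
in the sketches above:

def link_distribution(locations, u):
    """Normalised probability that a node sitting at locations[u] picks each
    other slot as a contact, under the 1/d rule. Nothing here depends on
    when the node in slot u joined, so a replacement node samples from
    exactly the same distribution as the node it replaced."""
    n = len(locations)
    def ring_distance(a, b):
        d = abs(a - b)
        return min(d, n - d)
    weights = {v: 1.0 / ring_distance(locations[u], locations[v])
               for v in range(n) if v != u}
    total = sum(weights.values())
    return {v: w / total for v, w in weights.items()}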

> The idea is that applying the model when considering links that have
> to be in the /whole/ network is a different thing from joining nodes later on
> and adding them /as/ the Kleinberg model.

But when you construct a Kleinberg network you *don't* consider the
links in the whole network. That's why I keep talking about
independence. It's not like preferential attachment, where the location
of new links depends on the location of existing links; each link is
created independently, so it doesn't matter what order you create them in.

I agree that the Kleinberg model doesn't apply to growing graphs, but
churn isn't the same as growth.

Cheers,
Michael
