Ken Corson wrote:

Martin Stone Davis wrote:

Michael Wiktowy wrote:

Correct me if I am wrong here but ...

All of those people who have been posting (or expecting) these
wonderfully symmetric single peaked bell-shaped request distribution
curves may be misrating the health of their node by thinking that this
is a good thing and that the lack of such a distribution is a bad thing.


My only question is - why did this happen at all ? I still don't understand
whether specialization should be a single point or multiple points. I've
asked the question before, and toad indicated a single point is the
goal with NGR. No one else had any comments. I suppose that as a node
'grows' into the network, it may be that the node's specialization should
converge about a single point (and range around that point). But no one has
explained it sufficiently. Iakin seems to have a good handle on this, perhaps
he can answer it ?

Well, as I *am* Iakin and Ian and Toad and Zab and also that guy who was spamming devl a few months ago, I'll answer it. No wait, I'm not Iakin. I'll let him answer it.



Latest plans for fixing it: iTTL, correcting estimate(), unobtanium, and Ian's backoff.


Glad to see someone is keeping track of the various efforts!

:)



But in principle, yes, lack of such a distribution is not, in and of itself, a bad thing. However, if we somehow knew that the network was filled with an enormous number of nodes and lots of content, then such a lack of sharp specialization would be cause for concern. Specialization is supposed to make it possible to easily retrieve any existent key, even in a large network.


I'll extend this point by saying, if specialization were the only
criterion for routing, then the route chosen should almost always be
the shortest one (fewest hops) to the content (but not necessarily the
fastest). If each node has 3 likely/decent choices for routing a specific
query, then the number of possible routes would be something like 3^20.
This is weak math but it holds for the purpose of demonstrating the number
of alternate routes that still have a high probability of succeeding.
  3^20 = 3,486,784,401 !
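The back-of-the-envelope count above is easy to check directly. A minimal sketch (the branching factor of 3 and the depth of 20 hops are just the figures from this post, not measured network values):

```python
# Rough count of alternate routes: if every node along a query's path
# has about `choices` decent next hops, the number of distinct routes
# of length `hops` grows exponentially.
def possible_routes(choices: int, hops: int) -> int:
    return choices ** hops

print(possible_routes(3, 20))  # 3^20 = 3486784401
```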

As HTL approaches zero, nodes should take great care to see that they
route these queries to their absolute best (specialized) chosen route.
With higher HTLs, it is less important.
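One way to express that idea in code: a hypothetical next-hop picker that exploits the single best (most specialized) route when HTL is nearly exhausted, but tolerates exploration among the top candidates while HTL is still high. This is only a sketch of the heuristic described above, not anything in the Freenet source; the candidate scores are assumed to come from some routing estimator.

```python
import random

def pick_route(candidates, htl, max_htl=20):
    """Pick a next hop from (node, estimated_success) pairs.

    Hypothetical sketch: as HTL approaches zero, route only to the
    best-looking (most specialized) peer; at higher HTLs, sample
    among a wider top-k slice, so exploration is allowed early on.
    """
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    if htl <= 2:
        return ranked[0]  # near the end of the route: best choice only
    # Earlier in the route: the window of acceptable peers scales with HTL.
    k = max(1, round(len(ranked) * htl / max_htl))
    return random.choice(ranked[:k])
```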

Did you read my post about STL in "DNDoDNF and STL (was Re: DNFs and the laws of physics)", posted about 7 minutes before yours? :) It explains how STL would accomplish that.


In fact, premix routing says
to take a completely random route for the first several hops. In something
like a DHT, the query should always be getting closer to its target at
every hop. It is not clear that this is a requirement in Freenet.
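The contrast between the two strategies can be sketched in a few lines. This is hypothetical illustration only (peers and keys are modeled as plain numbers with absolute-difference distance, which is not how Freenet addresses keys): random forwarding for the first few "premix" hops, then greedy DHT-style routing toward the key.

```python
import random

def next_hop(peers, key, hops_so_far, premix_hops=3):
    """Sketch of premix-style routing, not Freenet's actual code.

    For the first `premix_hops` hops, forward to a peer chosen
    uniformly at random (the mixing phase). After that, route
    greedily toward the key, as a DHT would at every hop.
    """
    if hops_so_far < premix_hops:
        return random.choice(peers)
    return min(peers, key=lambda p: abs(p - key))
```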

-Martin



_______________________________________________ Devl mailing list [EMAIL PROTECTED] http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl