I'm concerned that adding node latency/throughput to the routing algorithm
will make it less successful.  The whole point of the algorithm is to
take me to the node that gives me the best chance of finding the data.
That's why we think we can have a million nodes and still find
items even with an HTL (hops-to-live) of 5 or so.

Suppose I'm at a college with 100 nodes out of a million in the world,
and all 100 of those nodes have much better communication performance
with each other than with any node outside the college network.
That means that all 5 of my hops are likely to go to college nodes, if
we have an algorithm that gives preference in this way.  If my data's
not at the college, I won't find it.
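To make the trap concrete, here's a toy scoring sketch (my own
illustration, not actual Freenet code -- the candidate tuples, the
combined score, and the weight value are all invented).  Once latency
carries enough weight, a fast-but-wrong local peer beats a
slow-but-right remote peer at every hop:

```python
# Each candidate next hop: (name, key_distance, latency_ms).
# A college peer has a fast link but is far from the requested key;
# a remote peer is close to the key but behind the slow gateway.
candidates = [
    ("college-peer", 400, 1.0),   # fast link, poor key closeness
    ("remote-peer",  10, 50.0),   # slow link, excellent key closeness
]

def best(candidates, latency_weight):
    """Pick the candidate with the lowest combined score.

    latency_weight = 0 is pure key-closeness routing (the current
    algorithm); larger values bias toward fast local links.
    """
    return min(candidates, key=lambda c: c[1] + latency_weight * c[2])[0]

print(best(candidates, 0))    # -> "remote-peer"  (closeness wins)
print(best(candidates, 10))   # -> "college-peer" (410 beats 10 + 500)
```

With a latency weight of 10, every hop makes the same locally-optimal
choice, so all 5 hops stay inside the college and the request never
reaches the data.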

You need an algorithm which will (A) find the data in the college
network if it's there (so as to avoid load on gateways), (B) find the
data anyway out in the world if it's not in the local network, and (C)
not require HTL to be much more than it is now.

One possibility would be to use a strict key-closeness criterion for
small HTL (the same algorithm we have now), while weighting performance
issues more highly for larger HTL.  For example, you could start with
a somewhat larger HTL, maybe 7 or so, and weight the algorithm so
that the first 2 hops stay within the college network; if they fail,
send the remaining 5 hops out into the world.
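A minimal sketch of that two-phase idea (again my own illustration,
not Freenet code -- the threshold, weight, and candidate format are
assumptions): while HTL is above a threshold, latency influences the
choice so the request sweeps the fast local cluster first; once HTL
drops to the threshold, routing reverts to strict key-closeness, which
lets the remaining hops escape into the wider network.

```python
STRICT_HTL = 5         # at or below this, route purely by key closeness
LATENCY_WEIGHT = 10.0  # bias toward fast links during the local phase

def choose_next(candidates, htl):
    """candidates: list of (node, key_distance, latency_ms); lower score wins."""
    if htl > STRICT_HTL:
        # Local phase (first ~2 hops of an HTL-7 request):
        # latency counts, so nearby fast peers are tried first.
        score = lambda c: c[1] + LATENCY_WEIGHT * c[2]
    else:
        # Global phase: strict key closeness, same as the current algorithm.
        score = lambda c: c[1]
    return min(candidates, key=score)[0]

candidates = [("college-peer", 400, 1.0), ("remote-peer", 10, 50.0)]
print(choose_next(candidates, 7))  # -> "college-peer" (local phase)
print(choose_next(candidates, 5))  # -> "remote-peer"  (strict phase)
```

The appeal of keying the switch off HTL is that it needs no knowledge
of network topology: the same rule satisfies (A) locally and falls
back to (B) globally, at the cost of the slightly larger starting HTL
in (C).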

Hal

_______________________________________________
Freenet-dev mailing list
Freenet-dev at lists.sourceforge.net
http://lists.sourceforge.net/mailman/listinfo/freenet-dev