Edward J. Huff wrote:
On Fri, 2003-11-21 at 02:57, Tracy R Reed wrote:
On Fri, Nov 21, 2003 at 01:11:39AM -0500, Edward J. Huff spake thusly:
Can anyone tell me why big routing tables won't help routing?

Can you tell us why they will help it? jrand0m was expounding earlier
today on how smaller routing tables should help routing.

Smaller tables are a problem when most of the nodes in them
are backed off.  With a big table, you are more likely to
have overlaps so that you can use a different "server."
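
A quick way to put a number on that (purely back-of-the-envelope, and
assuming, unrealistically, that each peer is backed off independently
with some probability p): the chance that every entry in the table is
backed off falls exponentially with table size. A Python sketch:

    # Probability that all RT entries are backed off, assuming each peer
    # is independently backed off with probability p (an idealization).
    def p_all_backed_off(table_size, p):
        return p ** table_size

    for n in (2, 5, 20, 50, 300):
        print(n, p_all_backed_off(n, 0.5))
    # n=2 -> 0.25, n=5 -> ~0.03, n=20 -> ~1e-6, n=50 -> ~9e-16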

I really, REALLY hope that Ian chooses to weigh in on this thread. I actually asked his opinion on this exact issue, when I got to meet him in person a few weeks ago. Rather than repeat his response to me, I'll let him express it himself.

My inputs on the choice of RT-size follow...

a node's specialization changes over time. The larger our table
is, the fewer observations we get per route in a given period
of time. Thus it takes longer to "zero in" on what turn out to
be constantly moving targets. Increasing the size of the RT towards
infinity produces meaningless estimators.
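
To make the estimator point concrete (a made-up workload, and assuming
requests are spread evenly over the table, which they are not):
observations per route scale as requests divided by table size, so a
bigger table means each estimator converges more slowly on a moving
target.

    # Observations per RT entry per hour, for an assumed request rate.
    requests_per_hour = 600
    for table_size in (20, 50, 300):
        print(table_size, requests_per_hour / table_size, "obs/route/hour")
    # 20 -> 30.0, 50 -> 12.0, 300 -> 2.0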

at the other extreme, discounting the nature and design of
freenet, the minimal size of a routing table is 2. That is
the smallest size at which we can still make a routing decision.
As a practical matter, this would probably break NGR, big
time!! I'm only speaking to theory right now.

with perfect and static routing paths, 2^25 would make
33 million nodes reachable. But routes would be oh so
predictable. Stated another way, paths would have little to
no redundancy: this is "badness". The current 50^25 is
a 43-digit number, which seems like massive overkill to me.
The ultimate size of freenet, in the next century, is not
likely to exceed a few billion (well under 3^25). Okay, so
shoot me, I'm being a little optimistic here :) And I
probably should be using the intended value of 20 as well!
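
For reference, the arithmetic behind those figures (the exponent is the
hop count: 25 in the numbers above, 20 if we use the intended value;
this also ignores backoff and loop detection):

    # Path count with table size n and h hops, assuming perfect, static routing.
    for n in (2, 3, 5, 50):
        for h in (25, 20):
            print(n, "^", h, "=", n ** h)
    # 2^25  = 33,554,432     (~33 million)
    # 3^25  ~ 8.5e11         (hundreds of billions)
    # 5^25  ~ 3.0e17
    # 50^25 ~ 3.0e42         (43 digits)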

So it looks like a number as small as 5 would provide a
-huge- degree of path redundancy and full reachability.
There certainly are security issues to be confronted:
on the one hand, increasing the number of peers you interact
with means more opportunities for someone to get some
measurements on you. On the other hand, it means they will
see a smaller percentage of "your" events.
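
Spelling out that tradeoff with simple 1/n arithmetic (assuming your
traffic is spread evenly across peers, which is an idealization): more
peers means more potential observers, but each one sees a smaller slice.

    # Observers vs. share of "your" events each one sees, if spread evenly.
    for n_peers in (5, 20, 50, 300):
        print(n_peers, "observers, each sees roughly", 100.0 / n_peers, "% of events")
    # 5 -> 20%, 20 -> 5%, 50 -> 2%, 300 -> ~0.33%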

Finally, let me address this fully. The current design has
n outgoing routes (size of RT) and m incoming routes. I
am fascinated by the concept of symmetric routing -
resulting in the symmetric difference between the set
of inbounds and the set of outbounds being the empty set.
It's just a fancy way of saying all peers are in both sets.
In this case, the number of outbound routes is (n-1),
as the request arrives via one of those paths. So for n=3,
we would still only have 2^25 paths. I believe it offers
some opportunities to increase the trust level between
nodes, as compared to the m:n disjoint-sets approach. I
expect it also increases the risks between a pair of nodes.
It begins to turn the communications between a pair of
peers into a chess match regarding this trust. I am not
pushing this model, but simply presenting it as an
alternative topology that may be worth some analysis...
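
A small sketch of the fan-out difference I have in mind (the numbers
are illustrative, not a proposal): with disjoint in/out sets every one
of the n outbound peers is a candidate, while in the symmetric
arrangement the arrival link is excluded, leaving n-1.

    # Effective fan-out per hop and resulting path count, illustrative only.
    def paths(fan_out, hops):
        return fan_out ** hops

    n, h = 3, 25
    print("disjoint sets :", paths(n, h))      # 3^25 ~ 8.5e11
    print("symmetric     :", paths(n - 1, h))  # 2^25 = 33,554,432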

Ken


(current) opinion that, say, 300 nodes in the RT and 1200 open connections could help routing a lot: with more choices, ...

It probably would help in the short run. I also expect that it would hurt routing a lot in the long run, unless you had *significantly* bigger bandwidth than most other nodes.
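
To show why the bandwidth caveat bites (my numbers, not measurements;
something like 128 kbit/s upstream was typical for DSL at the time):
the upstream gets divided across open connections, so 1200 connections
leave almost nothing per peer.

    # Upstream bandwidth per connection if it is split evenly (an assumption).
    upstream_kbit = 128   # assumed typical DSL upstream
    for conns in (20, 100, 1200):
        print(conns, "connections ->", round(upstream_kbit / conns, 2), "kbit/s each")
    # 20 -> 6.4, 100 -> 1.28, 1200 -> ~0.11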

Of course. We have to guess what might work, and try it, hoping
to shake bugs out in the process.

That's what makes it so fun and worthwhile for us!!! Simulation vs. stochastic analysis. But we probably should get a little more formal in laying out an outline for project efforts and a timeline for expectations.

