On 2010/10/30, at 10:59 AM, Matthew Toseland wrote:
I've tested the current head over the weekend and it now appears to be behaving as intended.
Is the simulator under "freenet/node/simulator"? I did not see a simulator for such opennet ideas. Anyway... if you just look at the output of freenet/model/SmallWorldLinkModel, you can see that it forms the ideal link pattern for a small-world network. If the rest of the supporting code successfully prefers nodes in this pattern (which is both its intent and what experimental observation now shows), there is every reason to believe that it will form a vastly superior small-world network on the live network. As for LRU... my contribution also allows you to tune the network's clustering coefficient! Can you even speculate what LRU does in this respect?
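For readers following along: the "ideal link pattern" for a small-world network is the Kleinberg-style distribution where link probability falls off as 1/d with keyspace distance. Here is a minimal, hypothetical sketch (not the actual SmallWorldLinkModel code; D_MIN is an assumed parameter) of drawing link distances with that density:

```java
import java.util.Random;

/**
 * Hypothetical sketch of a small-world link model: sample link distances d
 * over the circular keyspace with density proportional to 1/d. A log-uniform
 * draw on [D_MIN, D_MAX] has exactly that density.
 * (Illustrative only; not Freenet's SmallWorldLinkModel.)
 */
public class SmallWorldSketch {
    static final double D_MIN = 1.0 / 10000; // assumed minimum link distance
    static final double D_MAX = 0.5;         // half the circle = maximum distance

    /** Sample a link distance d with p(d) proportional to 1/d on [D_MIN, D_MAX]. */
    static double sampleLinkDistance(Random rng) {
        double u = rng.nextDouble();
        // d = D_MIN * (D_MAX/D_MIN)^u is log-uniform, i.e. density ~ 1/d.
        return D_MIN * Math.pow(D_MAX / D_MIN, u);
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        int shortLinks = 0, n = 100000;
        for (int i = 0; i < n; i++) {
            if (sampleLinkDistance(rng) < 0.01) shortLinks++;
        }
        // Under 1/d, a large share of links are short, yet long links remain common
        // enough that every distance scale is covered.
        System.out.println("fraction of links shorter than 0.01: "
                + (double) shortLinks / n);
    }
}
```

The point of the 1/d shape is that a node has peers at every distance scale, which is what makes greedy routing converge in few hops.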
Perhaps a picture will help illustrate what I generally see:

[image]

How can you say this follows a small-world link pattern? If you want a security reason to merge my fix, I'll give you one! I have reason to believe that *many* nodes have peer patterns just like this (just a clump). In that case, all an attacker has to do is get *two* opennet connections to your node (one at +epsilon, and one at -epsilon), and he can monitor 99% of all traffic coming from your node. What's more, because the incoming requests are so specialized, he can be nearly 100% sure that the traffic originated from your node. On the other hand, if the node gets anywhere close to the target link pattern (blue dots), the most keyspace he could monitor with two connections would be about 33% (and it would have to be the keyspace far from the target, and he could *not* be sure it was coming from your node).

[image]

You will notice that my patch is not strict. There are still several un-preferred opennet peers, and the peers for the "preferred slots" fall some distance from the center (one plus sign per slot; the cut-off line is about half-way between the dots).
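The clump-versus-spread attack argument can be checked with a toy greedy-routing model (an assumed model, not Freenet's actual routing code; the peer counts, epsilon, and locations are made up, so the exact percentages will not match the 99%/33% estimates above, but the qualitative gap shows clearly):

```java
import java.util.Random;

/**
 * Toy illustration: greedy routing forwards a key to the peer closest to it
 * in circular keyspace. We measure what fraction of random keys two attacker
 * peers at +/-epsilon from the node's location would capture, for a "clumped"
 * honest peer set versus a spread-out (small-world-like) one.
 * (Hypothetical sketch; not Freenet's routing implementation.)
 */
public class TwoPeerAttackSketch {
    static double circDist(double a, double b) {
        double d = Math.abs(a - b);
        return Math.min(d, 1.0 - d);
    }

    /** Fraction of random keys whose closest peer is an attacker peer. */
    static double captureFraction(double[] honest, double[] attackers, Random rng) {
        int captured = 0, trials = 200000;
        for (int t = 0; t < trials; t++) {
            double key = rng.nextDouble();
            double bestHonest = Double.MAX_VALUE, bestAttacker = Double.MAX_VALUE;
            for (double p : honest) bestHonest = Math.min(bestHonest, circDist(p, key));
            for (double p : attackers) bestAttacker = Math.min(bestAttacker, circDist(p, key));
            if (bestAttacker < bestHonest) captured++;
        }
        return (double) captured / trials;
    }

    public static void main(String[] args) {
        Random rng = new Random(1);
        double loc = 0.5, eps = 0.02;
        double[] attackers = { loc - eps, loc + eps };

        // Clumped: all honest peers within 0.005 of the node's own location.
        double[] clumped = new double[20];
        for (int i = 0; i < 20; i++) clumped[i] = loc + (rng.nextDouble() - 0.5) * 0.01;

        // Spread: honest peers at roughly log-spaced distances on both sides.
        double[] spread = new double[20];
        for (int i = 0; i < 20; i++) {
            double d = 0.001 * Math.pow(500, i / 19.0); // 0.001 .. 0.5
            spread[i] = (loc + (i % 2 == 0 ? d : -d) + 1.0) % 1.0;
        }

        System.out.printf("clumped capture: %.2f%n", captureFraction(clumped, attackers, rng));
        System.out.printf("spread  capture: %.2f%n", captureFraction(spread, attackers, rng));
    }
}
```

With a clump, the two attacker peers bracket nearly the whole keyspace; with spread peers, they only win the narrow bands around +/-epsilon.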
I'm thinking that it is the major problem. With my patch the incoming distribution resembles a steep bell-curve.
No doubt. IIRC, it was around the time of the htl increase and implementation of turtle requests, no?
I'm sorry, but that doesn't make any sense... a high-htl request (even if answered early, at hop 4) should register a success/failure based on the incoming htl. My experiment shows an obvious and marked improvement in chk success rate (across the board), but this might be expected, because it judges which peers to let fill the slots (and therefore hang on longer) based on a measurement of chk success rate. And this is with only one node running the fix!
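The "judges which peers to let fill the slots" part could be as simple as a per-peer success tracker. A minimal sketch, assuming an exponentially weighted average (the class name, prior, and smoothing factor are all my own invention, not the patch's actual code):

```java
/**
 * Hypothetical per-peer chk success tracker: an exponentially weighted
 * moving average of request outcomes, which a node could use to rank
 * opennet peers for the "preferred slots". (Assumed design, not the
 * actual patch; ALPHA and the 0.5 prior are made-up parameters.)
 */
public class PeerSuccessTracker {
    private double rate = 0.5;                // neutral prior
    private static final double ALPHA = 0.05; // smoothing factor (assumed)

    /** Record one request outcome for this peer. */
    public void report(boolean success) {
        rate = (1 - ALPHA) * rate + ALPHA * (success ? 1.0 : 0.0);
    }

    /** Smoothed chk success rate in [0, 1]. */
    public double successRate() {
        return rate;
    }

    public static void main(String[] args) {
        PeerSuccessTracker t = new PeerSuccessTracker();
        for (int i = 0; i < 50; i++) t.report(i % 4 != 0); // 75% successes
        System.out.println("smoothed success rate: " + t.successRate());
    }
}
```

A peer whose smoothed rate stays high keeps its slot (and so "hangs on longer"); a consistently failing peer drifts toward zero and gets displaced.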
But what is bad data persistence? It may simply be that the data cannot be found! If backoff is an indication of bad load management, I'd say it has been doing rather well recently!
That is good to know! I'm sure you can understand my eagerness; IMO this may be the last major functional holdout.

--
Robert Hailey
_______________________________________________
Devl mailing list
[email protected]
http://freenetproject.org/cgi-bin/mailman/listinfo/devl


