On Friday 07 June 2002 16:35, David wrote:
> > http://freenetproject.org/cgi-bin/twiki/view/Pub/ProposalPrefilterRequestsUnderOverload
> >
> > Thoughts?
> >
> > --gj
>
> Well, maybe not informed responses, but I have a few questions.
> You write:
>
> > Freenet routing works because every hop should on average
> > bring the request to a node where it has a higher probability
> > of being answered than the node where it came from.
> >
> > When the network is overloaded or poorly adapted the success
> > probability doesn't increase with htl. In this case path
> > lengths get excessively long and the nodes in the network
> > spend much of their effort routing requests that have little
> > chance of success.
>
> I'm sorry if I've missed posts specific to this, but is this
> definitely true and an identified characteristic of the way
> Freenet routes? Or is this a hypothesis on the way the routing
> works under heavy loads?

I guess my language is a little loose here.
There is no guarantee that every single hop will bring a request to a
node that has a higher probability of answering it than the one it
came from. But statistically speaking, more often than not this will
be the case, at least when the network is well adapted and not
overloaded. The problem is that under heavy load requests can't be
routed to the nodes where pure Freenet routing wants to send them
(because the "right" nodes refuse requests when they are too busy).
The goal of my proposed change is to get an overloaded Freenet to act
more like "pure Freenet" by increasing the chances that nodes will
answer the requests which are closest to their specialization.

> Basically, the idea as I understood it is sound - but I do have
> one other question. Your first statement was that freenet routing
> works "because every hop should on average bring the request to
> a node where it has a higher probability of being answered than
> the node where it came from". If this is the default behavior of
> freenet, why is it necessary to pass extra metadata related to the
> probability of a request's success to other nodes? If the first
> statement is accurate, wouldn't that then be redundant data?

This is the way Freenet should work, but it doesn't work this way
when it gets overloaded.

p.s. Metadata has a specific meaning in the context of Freenet. Use a
different word.
http://freenetproject.org/cgi-bin/twiki/view/Pub/MetadataSpec

> I'm also concerned a bit about the possibilities for rogue nodes.
> I get the feeling that the more complex metadata that nodes shuffle
> around between one another, the more opportunities there are for
> rogue nodes to do undesirable things. For example, what would be
> the result of a rogue node always returning a Ps(k)=1, regardless
> of the request?

If the node receiving the request isn't overloaded then it will
answer the request (but it would have done that anyway). If it is
overloaded then it will QR (QueryReject) the request unless it has
the data, i.e. unless its own Ps(k) == 1, because it will check that
the request's Ps(k) is lower than its own before handling the
request. The beauty of this proposal is that you extort a little
information about the network from the requesting node in return for
answering its request.

> The issue of rogue nodes was only addressed with:
>
> > The impact of evil nodes returning bad stats could be minimized
> > by requiring several lower QR Ps(k) values before abandoning the
> > search
>
> ....which I must admit I don't understand.

The requesting node comes up with a list of node references from its
routing table, ordered by their ability to answer requests for keys
similar to the search key. It tries routing the request to each of
these nodes one at a time, starting with the best and continuing down
the list until it finds the data or uses up all the hops in the HTL.
What I proposed in 3. was to use the estimated success probability to
have the requesting node stop trying node references prematurely,
once the Ps(k) returned by the QR messages is lower than the node's
own Ps(k). The attack I was talking about would be for a hostile node
to always QR with Ps(k) == 0, to try to get the requesting node to
stop trying candidate nodes prematurely. The solution I proposed was
to require several lower QR Ps(k) values before stopping, so that
routing to one evil node won't stop the request.

Parts 1 and 2 are pretty reasonable. I included 3. to see what people
think, but I am not convinced that it would work yet. I also haven't
completely thought through the zero trust / anonymity implications.
A rough sketch of what I mean follows.
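To make the mechanics concrete, here is a rough Java sketch of both
halves: the overload prefilter on the receiving side (parts 1 and 2)
and the early-stop logic on the requesting side (part 3). To be
clear, none of this is Fred code -- Reply, NodeRef, ps(),
LOWER_PS_LIMIT and the rest are names I've made up for illustration,
and the threshold of 3 is pulled out of the air.

    import java.util.List;

    // Illustration only: nothing here comes from Fred.
    class PrefilterSketch {

        // Minimal reply type: data found, QueryRejected carrying a
        // Ps(k), or nothing found.
        static class Reply {
            final boolean data, qr;
            final double ps;
            private Reply(boolean data, boolean qr, double ps) {
                this.data = data; this.qr = qr; this.ps = ps;
            }
            static Reply accept()                { return new Reply(true,  false, 1.0); }
            static Reply queryRejected(double p) { return new Reply(false, true,  p);   }
            static Reply notFound()              { return new Reply(false, false, 0.0); }
        }

        interface NodeRef {
            // Send the request along with the requester's own Ps(k).
            Reply send(String key, double requesterPs);
        }

        // This node's estimated probability of answering a request
        // for key (from datastore / routing table stats).
        double ps(String key) { return 0.0; /* placeholder */ }

        boolean overloaded() { return true; /* placeholder */ }

        // Receiving side (parts 1 and 2): when overloaded, only
        // handle requests we are more likely to answer than the
        // requester was, and attach our own Ps(k) to the QR so the
        // requester learns something in return for having asked.
        Reply handle(String key, double requesterPs) {
            if (!overloaded() || ps(key) > requesterPs)
                return Reply.accept();
            return Reply.queryRejected(ps(key));
        }

        // Give up only after this many QRs report a lower Ps(k).
        static final int LOWER_PS_LIMIT = 3;

        // Requesting side (part 3): try candidates best-first,
        // stopping prematurely once several QRs say the rest of the
        // network is less likely to answer than we are ourselves.
        Reply route(String key, List<NodeRef> candidates, int htl) {
            double myPs = ps(key);
            int lowerPsSeen = 0;
            for (NodeRef n : candidates) {
                if (htl-- <= 0) break;              // hops exhausted
                Reply r = n.send(key, myPs);
                if (r.data) return r;               // found the data
                if (r.qr && r.ps < myPs && ++lowerPsSeen >= LOWER_PS_LIMIT)
                    break;                          // stop early
            }
            return Reply.notFound();
        }
    }

The lowerPsSeen counter is the "several lower QR Ps(k) values"
requirement: one evil node lying with Ps(k) == 0 only bumps the
counter once, so it can't stop the search by itself.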
--
gj

Freesite (0.4)
freenet:SSK@npfV5XQijFkF6sXZvuO0o~kG4wEPAgM/homepage//
