On 2014/03/31, at 1:50 PM, Matthew Toseland wrote:
> On 31/03/14 17:54, Robert Hailey wrote:
>> I've only been skimming this thread, so excuse me if this is a bit
>> off-base.
>>
>> (1) I have noticed a reference to "not routing inserts to new nodes"
>> (wrt MAST). I have recently required such a decision (as to whether a
>> node is "new"), so I think a core idea of node veterancy is a good
>> idea... and if it does not affect the routing of "GET" requests
>> (which lets them earn a veteran status), I don't think that modifying
>> the routing for INSERTs would have any negative effect. It's
>> basically just saying that INSERTs must go into the "known good"
>> network.
> Requests and inserts need to go down the same routes if data is to be
> findable quickly.

Perhaps, but inserts must stick to healthy, long-lived & *widely-reachable* nodes if the data is ever to be found, a non-trivial time later, by a node with a different address. I think *that* is more important than inserts being findable in the dungeon reachable from their original path... unless we have changed the expected use case to a personal/one-machine caching store, rather than publication or data transfer. :-)

>> (3) refusing (or heavily metering?) new connections from any FOAF. If
>> done well, this might help MAST, but I see it primarily as a network
>> acceleration... because I theorize that (apart from data being
>> inserted into transient nodes), the primary cause of request failure
>> would be "over-connectedness": the current pattern generates a huge
>> number of three-node triangles (A-B-C-A), which kill requests as they
>> approach their destination (via HTL exhaustion).
> I disagree. That's exactly what we want for routing to work: lots of
> short links, a smaller number of long links. We use the long links
> while we are far away, but when we are close to the target we need to
> be able to follow short links. Of course if we're getting loops we
> need to deal with that.
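(An aside: the short-vs-long-links argument can be sketched as a toy greedy-routing simulation. This is illustrative Python only; every name and parameter here is invented for the sketch and none of it is fred's actual code.)

```python
import random

def ring_dist(a, b, n):
    """Distance between two node ids around a ring of n locations."""
    d = abs(a - b)
    return min(d, n - d)

def build_ring(n, long_links=2, seed=0):
    """Toy small-world network: every node keeps its two ring neighbours
    (the short links) plus a few long links sampled with probability
    proportional to 1/distance, roughly Kleinberg-style. Node ids double
    as keyspace locations."""
    rng = random.Random(seed)
    peers = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    for i in range(n):
        cand = [j for j in range(n) if j != i]
        wts = [1.0 / ring_dist(i, j, n) for j in cand]
        for j in rng.choices(cand, weights=wts, k=long_links):
            peers[i].add(j)
            peers[j].add(i)
    return peers

def greedy_route(peers, src, dst, htl, n):
    """Greedy routing with per-request loop rejection and an HTL budget.
    Returns the hop count on success, or None on failure."""
    visited = {src}
    node, hops = src, 0
    while node != dst and htl > 0:
        unvisited = [p for p in peers[node] if p not in visited]
        if not unvisited:
            return None  # dead end: every peer was already visited
        node = min(unvisited, key=lambda p: ring_dist(p, dst, n))
        visited.add(node)
        htl -= 1
        hops += 1
    return hops if node == dst else None
```

With the short ring links present, a request can always step at least one position closer to the target, so it completes within ring_dist(src, dst) hops, and the long links only shorten that; remove the short links and greedy progress near the target is no longer guaranteed, which is exactly the quoted point.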
> We will still reject loops; the change we made after the traceback
> paper was that we don't remember requests after they have finished
> locally, so it is possible to traverse the same dead-end pocket twice.
> This should only happen on poorly connected darknets, so I doubt it's
> a serious problem on the current network.
>
> Perhaps you think we have too many short links?

That's quite an oversimplification. What we have is the "busy nodes" seeking out the best roads, as you just said:

> Nodes that do a lot of filesharing tend to have too many long links,

...because they are constantly making requests across the keyspace, whereas the "mostly idle" nodes tend to only relay requests around their own keyspace. Unless my experience is unique, this causes the "mostly idle" nodes to (eventually) become unable to reach keys outside of their local keyspace. Hence the old adage, "run a spider if you ever want to use freenet".

> I won't take this seriously unless you can prove it with some sort of
> data.

I presume you mean something beyond "it's slow" or "it only works for popular data"? :-) Surely there have been a lot of changes since my last measurements of this, so I'll see if I can recreate them.

> ... hence the proposal not to path fold at high HTL.

I don't follow... a high HTL means the request is still close to its originator, so... isn't that another way to try to achieve my goal ("don't path fold with your neighbor"), but fuzzier and a bit arbitrary? ...and only working for one side of the request. Anyway, attacking the nodes that are working does not address any issue, IMO.

>> (4) protecting (that is, not replacing) the "nearest veteran node" on
>> either side of our location. This is effectively what the simulators
>> do, as a ring topology is somewhat ideal. As I see it, this would
>> assure that the network has at least one valid and findable route to
>> every part of the address space.
> This is the opposite of what you just said.
> :) We need the short links precisely so we have a ring IIRC.

The ideas are not opposites. I think that you are viewing it from the presumption of a healthy network: that there are no large dungeons or unconnected-but-similarly-addressed network segments. In fact, I think the two ideas work together to form the malignancy... requests can find a hole large enough to die in. Nonetheless, you may be right about my suggestion; such a small tweak to peer handling does not address the problem at large.

--
Robert Hailey

_______________________________________________
Devl mailing list
Devl@freenetproject.org
https://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl