fish <fish at bovine.artificial-stupidity.net> writes:

> On 21 Nov 2002, Edgar Friendly wrote:
>
> > I'm still worried about this idea of "propagation" being a dangerous
> > one.  I understand that requests succeed more often for content that
> > has been requested a lot, but there's got to be something we can do to
> > make normal insertion effective enough for keys not to need to
> > propagate before being requestable by the majority of users.  Hmm,
> > maybe we could raise MaxHTL...
>
> this is a tough one to call.  On the one hand, I really, really, really
> want to agree with you on this one - this is a problem which is biting me
> in the ass with the audio streaming stuff.

The problem is that the probability that the first person to request a
key will retrieve it is too low.  The insertion path hits one series of
computers, the request path hits another series of computers, and
neither is long enough to reach the other.  If insertions were allowed
infinite HTL, they could spread to all the nodes (ignoring problems with
finding a linear routing path that intersects all the nodes), and if
requests were allowed infinite HTL, a single request could check the
whole network (again, in theory; in practice, this is limited by the
connectivity of the network).
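To make the path-intersection point concrete, here is a toy Python sketch.  It treats the insert path and the request path as random samples of nodes, which is a worst-case assumption (real Freenet routing is key-biased, so paths correlate far better than this), but it shows why longer paths raise the chance that the two paths share a node.  All sizes and trial counts are made-up illustration values, not measurements of the real network.

```python
import random

def intersect_prob(n_nodes, htl, trials=10000):
    """Estimate the chance that an insert path and a request path,
    each visiting `htl` distinct nodes out of `n_nodes`, share at
    least one node.  Toy model: paths are uniform random samples,
    ignoring Freenet's key-based routing locality."""
    hits = 0
    for _ in range(trials):
        insert_path = set(random.sample(range(n_nodes), htl))
        request_path = set(random.sample(range(n_nodes), htl))
        if insert_path & request_path:
            hits += 1
    return hits / trials

random.seed(0)
# Even in this pessimistic model, longer paths meet far more often:
print(intersect_prob(10000, 25))
print(intersect_prob(10000, 200))
```

Analytically this is roughly 1 - (1 - htl/n_nodes)**htl, so the success chance climbs quickly once the path length approaches the square root of the network size.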
We could wait to see if the distribution servlet makes routing better by
getting better seeds out there for new nodes, but that's probably not
going to change much in the long term.  (I do think it'll help a lot
during periods where Freenet experiences extreme growth.)  Since we're
not going to change routing methods, I don't see anything more likely to
increase the chances of a request finding inserted data than increasing
the max HTL allowed (maybe inserters inserting at full HTL, but I assume
most inserters do that anyway).

> however, just upping the maxhtl every time the network gets larger is
> kinda like lengthening the noose around your neck - it'll help you for
> now, but you're still gonna hang :-p.
>
> - fish

This isn't a solution for the network growing.  Even if the network were
10 orders of magnitude bigger than it is now, I'd still only recommend
the same increase in HTL.  The reason we can increase the HTL is that
average hop time has gone down.  HTL is a way to limit the amount of
network resources each request takes up, so if each unit of HTL takes up
fewer resources, we can increase the amount of HTL that's allowed.

I definitely agree that we need to weigh the increased load on the
network against the possible gains of this proposal, and maybe I'm
proposing it a bit prematurely, but when async IO comes to fred-land and
greatly decreases the amount of work the node is doing (and hopefully
speeds up routing a bit more), upping the maxHTL becomes an even better
proposition.  Besides, because Freenet's architecture makes this kind of
growth logarithmic, we only have to increase the maxHTL by a constant
every time the network doubles in size.  So we'll eventually win the HTL
vs. network size race.
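The "constant per doubling" argument above can be sketched in a few lines of Python.  Every number here (baseline size, baseline HTL, step per doubling) is a hypothetical placeholder chosen for illustration, not a real Freenet parameter; the point is only that an HTL growing with log2 of the network size stays small even for enormous networks.

```python
import math

def recommended_max_htl(network_size, base_size=10000, base_htl=25, step=5):
    """Toy illustration: if routing cost scales logarithmically with
    network size, maxHTL only needs to rise by a fixed `step` each
    time the network doubles past `base_size`.  All constants are
    made-up placeholders, not actual Freenet defaults."""
    if network_size <= base_size:
        return base_htl
    doublings = math.log2(network_size / base_size)
    return base_htl + step * math.ceil(doublings)

for n in (10000, 20000, 80000, 10000000):
    print(n, recommended_max_htl(n))
```

Note that going from 10 thousand to 10 million nodes (a 1000x increase, about ten doublings) only adds ten steps to the limit, which is the sense in which HTL can win the race against network size.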
Thelema
--
E-mail: thelema314 at swbell.net          Raabu and Piisu
GPG 1024D/36352AAB fpr:756D F615 B4F3 BFFC 02C7 84B7 D8D7 6ECE 3635 2AAB
_______________________________________________
devl mailing list
devl at freenetproject.org
http://hawk.freenetproject.org/cgi-bin/mailman/listinfo/devl
