On Fri, Nov 21, 2003 at 03:20:58PM -0500, Ed Tomlinson wrote:
> On November 21, 2003 01:45 pm, Toad wrote:
> > On Fri, Nov 21, 2003 at 12:01:27PM -0500, Ed Tomlinson wrote:
> > > On November 21, 2003 06:41 am, Ian Clarke wrote:
> > > > It seems that we aren't seeing the hoped-for specialization in NGR.
> > > > Rather than futzing about with all sorts of ideas while trying to test
> > > > them in the chaotic real network, we need a simple simulator to help us
> > > > test these ideas.
> > >
> > > One idea to test on that simulator is whether we are specializing by key
> > > or for fast connections. It seems to me that, by far, the largest chunks
> > > of the NG estimator are from transmission time... When I did the first
> > > stabs at NG I was only basing it on search times and, if memory serves,
> > > it was getting much better numbers.
> > >
> > > Maybe we should be normalizing to a size that does not give a number
> > > much greater than the search time?
> >
> > Why?
>
> The search time is usually a number averaging around 15,000 ms.
> The time to receive a normalized file is over 150,000 ms. This
> means the 15,000 (or 2,000 or 30,000) gets lost in the noise and NGR
> just looks for the fastest node.
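(A rough worked sketch of the magnitudes quoted above; the class name and
figures are purely illustrative, not the real NGR estimator code:)

    public class MagnitudeSketch {
        public static void main(String[] args) {
            long searchTimeMs   = 15000;   // "averaging around 15,000 ms"
            long transferTimeMs = 150000;  // "over 150,000 ms" for a normalized file

            long combined = searchTimeMs + transferTimeMs;
            // Search time is under 10% of the combined figure, so even large
            // differences in search time barely move the estimate -- routing
            // ends up preferring fast links over key specialization.
            System.out.println("search fraction = " + (double) searchTimeMs / combined);
        }
    }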
Those are not the only factors involved. The transfer time is multiplied by
the probability of a transfer occurring.

> > Don't you think we get more accurate numbers by taking into account file
> > size? This allows NGR to balance specialization, transfer speed, and
> > search time for a given request...
>
> The numbers are more accurate but do not take us towards our goal.

How so?

> Ed

-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.
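For reference, a minimal sketch of the kind of estimate under discussion,
with the transfer term weighted by the probability of a transfer occurring
and normalized by file size. The class name, field names, and figures are
illustrative assumptions, not the actual NGR estimator:

    public class EstimateSketch {
        public static void main(String[] args) {
            double searchTimeMs       = 15000.0; // expected time to locate the data
            double pTransfer          = 0.3;     // assumed probability a transfer occurs at all
            double msPerKBTransferred = 150.0;   // assumed per-kilobyte transfer cost
            double requestSizeKB      = 1000.0;  // size of the requested file

            // Expected request time = search time + P(transfer) * (per-size cost * size).
            double estimateMs = searchTimeMs + pTransfer * msPerKBTransferred * requestSizeKB;
            System.out.println("expected request time = " + estimateMs + " ms");
            // Weighting the transfer term by a probability well below 1 shrinks it,
            // so search time is less swamped than the raw 15,000 ms vs 150,000 ms
            // comparison suggests.
        }
    }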
