On Tue, Jul 23, 2002 at 05:49:26PM +0100, Matthew Toseland wrote:
> Hmmmm. If we're going to do selective caching (which we haven't yet
> done, and which is pointless until we fix the datastore bugs), we
> probably should involve the success probability of the key in the
> calculation; this is the number of successful requests over the total
> number of inbound requests in a given keyspace segment (there are 256
> segments), and is calculated for overload triage now...
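
[Editor's note: purely to illustrate the calculation Matthew describes above, here is a
minimal Java sketch of a per-segment success-probability tracker. The class, method
names, and the choice of mapping a key to a segment by its most significant byte are
assumptions for illustration, not Fred's actual code.]

    public class SegmentStats {
        private static final int SEGMENTS = 256;
        private final long[] inbound = new long[SEGMENTS];
        private final long[] successes = new long[SEGMENTS];

        // Assumption: map a routing key to one of the 256 keyspace segments
        // by its most significant byte.
        private int segmentOf(byte[] routingKey) {
            return routingKey[0] & 0xFF;
        }

        // Called once per inbound request once its outcome is known.
        public synchronized void reportRequest(byte[] routingKey, boolean succeeded) {
            int s = segmentOf(routingKey);
            inbound[s]++;
            if (succeeded) successes[s]++;
        }

        // Success probability for the segment containing this key:
        // successful requests over total inbound requests, as quoted above.
        public synchronized double successProbability(byte[] routingKey) {
            int s = segmentOf(routingKey);
            if (inbound[s] == 0) return 0.0; // no data for this segment yet
            return (double) successes[s] / (double) inbound[s];
        }
    }
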
Probabilistic selective caching should simply be based on the number of
steps since the data was found / the source was reset. This avoids all
this silly arbitrary behavior yet achieves much the same effect.

-- 
Oskar Sandberg
[EMAIL PROTECTED]
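
[Editor's note: a minimal sketch of the idea Oskar proposes, assuming a hop-count-based
caching probability. The exponential decay, the per-hop constant, and all names are my
assumptions; Oskar specifies only that the decision should depend on the number of steps
since the data was found or the source was reset.]

    import java.util.Random;

    public class ProbabilisticCache {
        private final Random random = new Random();
        // Assumed decay constant: each hop away from the point where the data
        // was found (or the source was reset) halves the caching probability.
        // The real constant would need tuning.
        private final double decayPerHop = 0.5;

        /**
         * @param hopsSinceReset steps the reply has travelled since the data
         *                       was found or the source field was reset
         * @return true if this node should store the data in its datastore
         */
        public boolean shouldCache(int hopsSinceReset) {
            double p = Math.pow(decayPerHop, hopsSinceReset);
            return random.nextDouble() < p;
        }
    }

Under this sketch the node where the data is found always caches (p = 1), and nodes
further along the return path cache with geometrically decreasing probability, so no
per-segment bookkeeping is needed.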
