On Tue, 16 Sep 2003, Toad wrote:

> Probabilistic caching does not directly depend on specialization, but it
> is intended to amplify it. If the datastore is full, we need to decide
> what to keep, because we will have to delete *something*. Probabilistic
> caching is based on the number of hops since the DataSource (which is
> randomly reset, but is nominally the node from whose store the data
> came), so that the data is cached closer to the source.

So, the probability that something will be cached is not calculated with 
respect to the node's specialization, correct?  The node doesn't take 
stock of what keys it already has?

So, the amplification of specialization is something that should occur 
only incidentally, i.e. if a node can find keys in a certain area of the 
keyspace well, it will route requests for them more often, and thus will 
have a higher likelihood of eventually caching the data, thus becoming 
even better at routing requests for it, and so on.
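The hop-based caching decision described above can be sketched roughly as
follows. The exponential decay form, the `decay` constant, and the function
name are illustrative assumptions for this sketch, not Freenet's actual
implementation or constants:

```python
import random

def should_cache(hops_since_source: int, base_probability: float = 1.0,
                 decay: float = 0.8) -> bool:
    """Decide whether to cache a passing data item.

    The caching probability decays with each hop since the DataSource,
    so data is most likely to be cached near the node it nominally
    originated from. The exponential decay and the constant 0.8 are
    assumptions made for illustration only.
    """
    p = base_probability * (decay ** hops_since_source)
    return random.random() < p
```

Under these assumed parameters, a node zero hops from the DataSource always
caches the item, while a node five hops away caches it with probability
0.8^5, about one third, so copies concentrate near the source without any
node ever inspecting which keys it already holds.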

-todd
_______________________________________________
Devl mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
