On 30/11/05, Gordan Bobic <gordan at bobich.net> wrote:
> Matthew Toseland wrote:
> > Umm, please read the presentation on 0.7. Specializations are simply
> > fixed numbers in 0.7.  The problem with probabilistic caching according
> > to specialization is that we need to deal with both very small networks
> > and very large networks.  How do we sort this out?
>
> It's quite simple - on smaller networks, the specialisation of the node
> will be wider. You use the mean and standard deviation of the current
> store distribution. If the standard deviation is large, you make it more
> likely to cache things further away.
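
For concreteness, here is a rough sketch of what the quoted proposal might
look like. Everything in it is hypothetical illustration, not Freenet's
actual code, and it ignores the circular wraparound of 0.7 key locations:

import java.util.List;

public class SpecialisationCache {

    // Probability of caching a key at keyLocation, given the locations
    // already in the store. Wide store distribution (large stdDev, as on
    // a small network) -> distant keys are more likely to be cached;
    // narrow distribution -> the node is pickier.
    static double cacheProbability(double keyLocation, List<Double> storedLocations) {
        int n = storedLocations.size();
        if (n < 2) {
            return 1.0; // too little data to specialise; cache everything
        }
        // Mean of the current store distribution.
        double mean = 0.0;
        for (double loc : storedLocations) {
            mean += loc;
        }
        mean /= n;
        // Standard deviation of the current store distribution.
        double variance = 0.0;
        for (double loc : storedLocations) {
            double d = loc - mean;
            variance += d * d;
        }
        double stdDev = Math.sqrt(variance / n);
        double distance = Math.abs(keyLocation - mean);
        if (stdDev == 0.0) {
            return distance == 0.0 ? 1.0 : 0.0; // degenerate store
        }
        // Gaussian falloff centred on the mean of the store.
        return Math.exp(-(distance * distance) / (2 * stdDev * stdDev));
    }

    public static void main(String[] args) {
        List<Double> store = List.of(0.40, 0.42, 0.45, 0.43, 0.41);
        System.out.printf("p(cache 0.44) = %.3f%n", cacheProbability(0.44, store));
        System.out.printf("p(cache 0.90) = %.3f%n", cacheProbability(0.90, store));
    }
}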

You are proposing a fix to a problem before we have even determined
whether a problem exists.  I am not currently aware of any evidence
that simple LRU provides inadequate specialization, or that we need to
enforce specialization in this way.

In other words: if it's not broken, don't fix it (words every software
engineer should live by).

Ian.
