On Sat, May 26, 2001 at 12:27:55PM -0500, Brandon wrote:
>
> I've been thinking on the caching problem recently and it seems to me that
> it is indeed disadvantageous to cache aggressively, because files cached
> on the edge of the network have a decreased probability of being found by
> an arbitrary node. The closer a file is to the epicenter, the greater its
> global probability of discovery.
>
> So rather than caching all the way back the search path, it would be most
> advantageous to cache in an expanding ring around the epicenter. This can
> be done by having a rapidly decaying probability of caching.
I agree entirely; in fact, here is part of my reply to that guy's post:
----
Many thanks for your insights, your document is rather interesting. I
tend to agree with your assessment, I think that over-zealous caching is
having a negative effect. One experiment I hope to try is caching
probabilistically (as you suggest), but with a decreasing probability
the further the DataReply gets from the origin of the data. This means
that the node where the request originated, which is extremely unlikely
to be a "specialist" in the data being requested, should therefore be
unlikely to cache the data.
---
Of course, we will need to simulate this behavior before implementing
it; time to dust off Serapis...
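As a rough sketch of the idea, the caching decision could look something
like the following. The exponential decay law and all names here are my
own assumptions; the posts above only say that the probability of caching
should fall off as the DataReply travels away from the data's origin:

```python
import random

def should_cache(hops_from_source, decay=0.5):
    """Decide probabilistically whether this node caches a passing DataReply.

    hops_from_source: hops the reply has travelled from the node that
    held the data (the "epicenter" above).
    decay: hypothetical per-hop decay factor (not specified in the posts).

    The caching probability is decay ** hops_from_source, so nodes near
    the data source almost always cache, while the node where the request
    originated (many hops away) rarely does.
    """
    return random.random() < decay ** hops_from_source
```

With decay = 0.5, the node holding the data and its immediate neighbours
cache almost every reply, while a requester five hops out caches only
about 3% of the time, giving the expanding-ring behaviour Brandon
describes.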
Ian.