> Let me get this clear: you're proposing to have nodes pretend they don't have data? This seems to have the downside of non-probabilistic caching (ending up throwing out more data from the network)
The downside of non-probabilistic caching was not that the nodes always put the data into their cache; that only caused equally unimportant data to be dropped. Probabilistic caching, on the other hand, will drop the data even if there are still gigabytes of free space left in the data store. The real problem with non-probabilistic caching was that it worked too well: data may not look popular from a single node's view, because the requests were distributed among too many nodes, and because of that nodes dropped the wrong data items.
> as well as the downside of probabilistic caching (successful requests take longer to find data).
Most of the time they will only take one hop more.
> It does have the upside that, unlike probabilistic caching, it won't make requests fail even when their path goes over a node that has seen the data recently, but I'm still not seeing the worth of this proposal.
One cannot decide beforehand whether caching data is good or bad. If the node that you route to is overloaded, or if it considers the data to be popular anyway, then dropping the data was a bad decision. On the other hand, if our node caches the data, this might cause the data to be dropped on the node that is in the "shadow" of ours, which is bad if the network prefers routing to that node.
The proposal postpones the decision. The node always caches the data and propagates new requests for that data as unimportant. The response tells it whether caching was a good decision; if not, the node behaves as if it had not cached the data.
-- Thomas Leske
