On Fri, Sep 12, 2003 at 03:26:29PM +0200, Thomas Leske wrote:
> The intention of probabilistic caching is to keep data available
> in the network longer and to improve routing. It works by giving
> data source nodes a better clue about the global popularity of the
> data, because reloads increase its local popularity.
> 
> I suggest another solution that has much the same effect, but with
> the following advantages:
> 
> - Local popularity on the data source nodes
>   comes even closer to global popularity.
> - Data source nodes have a way to lower their load.
>   Bandwidth may be saved. (This item competes with the first one, though.)
> - If the data source node goes down, it is more likely that the
>   data is still available, and no references to the vanished node
>   will be propagated.
> 
> It works quite similarly to probabilistic caching. The nodes always
> cache the data, but behave almost the same way as if they did not.

I don't see how that could possibly benefit anyone. And it makes
flooding attacks as easy as they were before pcaching, i.e. very easy.
> 
> Assume node A gets a data request for data that it has in its cache,
> but is not the data source of. It propagates the request to
> node B (which is usually the data source), but marks the request
> as unimportant. If node A gets back a DNF (or RNF), then it will
> return the data from its cache. Otherwise it will simply
> propagate the data reply (which may declare another data source).
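>
> Roughly, in Java (a sketch only; DataRequest, Reply, routeToSource()
> and the other names are invented for illustration, not actual
> Freenet code):
>
>     // Node A: request for data that is cached here, but whose data
>     // source is another node (B).
>     Reply handleCachedButNotSource(DataRequest req, byte[] cachedData) {
>         req.setUnimportant(true);          // mark before forwarding
>         Reply reply = routeToSource(req);  // forward towards node B
>         if (reply.isDataNotFound() || reply.isRouteNotFound()) {
>             // B (or the chain behind it) declined; serve from cache.
>             return Reply.data(cachedData);
>         }
>         // Pass the reply through unchanged; it may declare another
>         // node as the data source.
>         return reply;
>     }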
> 
> Unimportant requests are treated like normal requests, with
> these exceptions (a sketch of the combined logic follows the list):
>  - A DNF in the response does not always mean that the data was not found.
>    The last node in the chain might have been too lazy to answer it.
>    DNFs propagate back to the node that first declared the request
>    as unimportant.
>  - If a node gets an unimportant request for a key that it is the data
>    source of, and the data is available, then it will return either a DNF
>    or the data. It is more likely to return the data if its load is low
>    or the data is in danger of being dropped (i.e. it is unpopular
>    compared to the data that the node is not the data source of).
>  - If a node gets an unimportant request for a key that it is the data
>    source of, and the data is not available, then it will propagate it
>    as a normal request.
>  - If a node gets an unimportant request for a key that it is not the
>    data source of, has the data, and does not propagate a data reply,
>    then it will reset the data source to itself and return the data.
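>
> Combining the exceptions above, the per-node handling might look
> roughly like this (again a sketch; store, self(), maybePropagate()
> and the rest stand for assumed node infrastructure, not real
> Freenet code):
>
>     Reply handleUnimportant(DataRequest req) {
>         if (isDataSourceOf(req.getKey())) {
>             byte[] data = store.get(req.getKey());
>             if (data != null) {
>                 // Lazily return a DNF unless load is low or the
>                 // data is close to being dropped from the store.
>                 if (shouldReturnData(req.getKey()))
>                     return Reply.data(data);
>                 return Reply.dataNotFound();
>             }
>             // Data source, but data missing: recover it by routing
>             // a normal request.
>             req.setUnimportant(false);
>             return routeNormally(req);
>         }
>         byte[] cached = store.get(req.getKey());
>         Reply reply = maybePropagate(req); // null if not propagated
>         if (reply != null)
>             return reply;          // pass on the data reply or DNF
>         if (cached != null) {
>             // We have the data and propagated no reply: adopt the
>             // data source role and answer from the cache.
>             store.setDataSource(req.getKey(), self());
>             return Reply.data(cached);
>         }
>         return Reply.dataNotFound();
>     }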
> 
> One change for normal requests should be made:
>  - If a node gets a normal request for a key that it is not the data
>    source of, has the data, could not contact the data source, and does
>    not propagate a data reply, then it will reset the data source to
>    itself in the data reply, but not permanently in its data store
>    (sketched below).
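>
> As a sketch (same invented helpers as above; dataWithSource() is an
> assumed constructor for a reply that names a data source):
>
>     Reply handleNormalFallback(DataRequest req, byte[] cachedData) {
>         Reply reply = tryContactDataSource(req); // null on failure
>         if (reply == null && cachedData != null) {
>             // Claim to be the data source in the reply only; the
>             // data source entry in the local store stays unchanged.
>             return Reply.dataWithSource(cachedData, self());
>         }
>         return reply;
>     }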
> 
> The probability that a node resets the data source to itself must not
> increase with the number of reloads of the data. Propagating a data
> reply does not increase the popularity of the data on the node.
> If a node gets a request for a key that it knows the data source of,
> it must always propagate the request to it (even for htl=1).
> Inserts are treated the same way. Nothing changes for inserts
> that do not cause a collision, though.
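>
> For example, the data source's choice between returning the data and
> a lazy DNF could depend only on the current load and on how close the
> data is to eviction, never on the key's request count (loadThreshold
> and evictionRisk() are invented knobs):
>
>     boolean shouldReturnData(Key key) {
>         // Deliberately independent of how often the key has been
>         // requested, so the probability of the data source being
>         // replaced does not grow with the number of reloads.
>         return currentLoad() < loadThreshold
>             || evictionRisk(key) > 0.5;
>     }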
> 
> --
>  Thomas Leske
> 

-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.

_______________________________________________
Devl mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
