Toad wrote:

On Thu, Oct 16, 2003 at 07:45:24AM -0700, Martin Stone Davis wrote:

<big snip>
Does that clear things up?


Not really. I assume you are working on the basis that we have fetched
K1-K100, it has been in the store and been deleted by pcaching?

Not exactly. I'll try to be clearer: We have fetched K1-K100 as well as K101-K1000, all from node B. Nothing has been deleted by pcaching. The difference between K1-K100 and K101-K1000 is that:


1) the operator of node A intentionally requested K1-K100 while the rest were requested by other nodes,

2) keys K1-K100 are all content-related (anti-AAIR government) in a way that K101-K1000 are not.

3) Due to the proposed (bad) solutions #1 or #2, node A is now pretending it doesn't have keys K1-K100, leading to higher search times. The keys may have indeed been deleted, but due to the proposed solution, not due to pcaching.


Okay. What exactly were the proposed bad solutions? I don't see why
there would be a problem if either of my endorsed suggestions were
implemented:
a) Pcache as normal for local requests. Have a separate client level
cache, encrypted with per-node-session keys.
OR
b) Implement premix routing. Don't send local requests through local
datastore AT ALL. Have a separate client level cache as above.
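To make suggestion (a) concrete, here is a minimal sketch of a client-level cache encrypted under a per-node-session key, kept apart from the shared datastore. All names are invented for illustration, and the toy XOR keystream stands in for a real cipher; this is not actual Freenet code.

```python
import os
from hashlib import sha256

def xor_stream(key, data):
    """Toy stream cipher (keystream from a SHA-256 counter); a real
    implementation would use an authenticated cipher instead."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

class ClientCache:
    def __init__(self):
        # Fresh random key per node session: once the session ends and
        # the key is discarded, the cached blobs are unrecoverable.
        self.session_key = os.urandom(32)
        self.blobs = {}

    def put(self, key, data):
        self.blobs[key] = xor_stream(self.session_key, data)

    def get(self, key):
        blob = self.blobs.get(key)
        return None if blob is None else xor_stream(self.session_key, blob)
```

The point of the per-session key is that the cache never has to be securely wiped: forgetting the key at shutdown is enough.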

The proposed bad solutions were:

SOLUTION #1: that the node itself would flag any keys that the
operator has requested as "requested only by me". Then, if the node was
asked later for a key that held such a flag, it would pretend that it
didn't have it, search for the key on other nodes, and then reset the
key's flag if it found it on another node.

SOLUTION #2: that keys that we request would somehow be moved into a
separate, encrypted store, and never placed in the main datastore.

In short, each solution has the node pretend that it never requested the keys it in fact requested, by passing queries for them along to other nodes rather than replying with the data at hand.
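For clarity, SOLUTION #1's flag-and-deny behaviour might look like this hypothetical sketch (all names invented; not actual Freenet code):

```python
class Datastore:
    def __init__(self):
        self.store = {}          # key -> data
        self.local_only = set()  # keys flagged "requested only by me"

    def insert_local(self, key, data):
        """Operator-initiated fetch: store the data, but flag the key."""
        self.store[key] = data
        self.local_only.add(key)

    def handle_query(self, key, fetch_remote):
        """Answer a query from another node."""
        if key in self.store and key not in self.local_only:
            return self.store[key]
        # Flagged (or absent): pretend we don't have it and forward
        # the query, which is exactly what inflates search times.
        data = fetch_remote(key)
        if data is not None and key in self.local_only:
            # Found on another node, so holding it no longer
            # implicates the operator: clear the flag.
            self.local_only.discard(key)
        return data
```

Note that every query for a flagged key pays a full remote round trip even though the data is sitting in the local store, which is the performance cost objected to above.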


Let's make sure we're on the same page. I was in complete agreement with you when you said:

The attack we are talking about in this
thread though, is also pretty hard to secure against - if we download all
the pieces of the splitfile, and cache them in the user-cache, we will
also cache some of them in the datastore, because of pcaching doing its
normal thing. One solution proposed was to not cache content downloaded
by the local user in the local datastore at all, or only if it is
fetched by another node. The problem with that is that if the node we
route to sees us fetching a key and then queries us for that key and
finds we don't have it (through a timing attack - fixing the key
presence timing attacks will be a pig and cost lots of performance IMHO,
so we have to think carefully about it), it KNOWS that the user
requested the key. Well, with pcaching, it doesn't KNOW, but it knows
there is a certain chance that the user requested it, and if it
correlates a large number of pieces from a single large site or
splitfile, it can get a pretty good idea that you requested that site or
splitfile... there may or may not be the possibility that there is a
node with a degenerate routing table routing all its requests to your
node which actually asked for it, that is one possible defence. Anyway,
my suggestion: implement premix routing for all local requests; the
first node you route to knows you but does not know the key; the second
knows the first and knows the key but does not know you, and then routes
the key and returns the data through the chain. Of course we may want
the chain to be longer than that. Anyway, once premix routing is
implemented, we can then not cache locally requested content at all in
the datastore, because it goes a completely different route, and nobody
knows we requested it so nobody can check whether we have it. That is
probably the answer to a lot of these attacks. But the key probing
attack still remains, and it's a question of how much we care about the
ability for a node to effectively ask your node whether it has a given
key in its store - how much of a vulnerability is that?
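The premix idea quoted above can be sketched as simple onion wrapping. This is only an illustration of the knowledge split (the tuple nesting stands in for encrypt-to-node; names are invented): N1 learns who sent the request but not the key, N2 learns the key but not the originator.

```python
def wrap(chain, payload):
    """Wrap payload so each node in `chain` can peel exactly one layer.
    A (node, inner) tuple stands in for encryption to that node."""
    for node in reversed(chain):
        payload = (node, payload)
    return payload

def peel(message, me):
    """Peel one layer if it is addressed to this node."""
    addressee, inner = message
    assert addressee == me, "not addressed to this node"
    return inner

# Requester builds a two-hop chain: N1 sees the requester but only an
# opaque blob; N2 recovers the key but only knows it came via N1.
msg = wrap(["N1", "N2"], "CHK@example-key")
at_n1 = peel(msg, "N1")   # still wrapped for N2
at_n2 = peel(at_n1, "N2") # the key, finally in the clear
```

A longer chain, as the quote suggests, just means more `wrap` layers; no single hop ever holds both the requester's identity and the key.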

But I didn't think that your endorsed suggestions a) and b) above addressed this attack. Am I wrong about that? OTOH, my solution #3 (reproduced below) IS specifically designed to thwart it.


SOLUTION #3: Instead, say E queries D queries C queries B queries A. Even though B doesn't know about D, he is (somehow) able to prove to A that D is further up in the chain. (Let's leave how that all is possible for further discussion. I'm just trying to work out the basic idea here.)

Now, A can decide whether to REALLY trust B (in which case he checks his datastore for the key, replies if he has it, and routes as usual to another node if he doesn't) or NOT trust B (in which case he simply routes to another node). His decision on whether to trust B is based on A's knowledge of B and D. If B and D have routed through each other too many times in the past, A will suspect them of being in cahoots (like B and C in my AAIR hypothetical). However, if A sees that B hasn't had much experience with D, then A will trust B.

-Martin



_______________________________________________
Devl mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
