On Jan 16, 2009, at 4:31 PM, Matthew Toseland wrote:

> On Friday 16 January 2009 21:18, Robert Hailey wrote:
>> But in thinking about it... it actually makes it more secure in that
>> sense, because the remainder of the request would be 'streamed'
>> through several nodes as usual, so the best an attacker could
>> determine from latencyEstimate() ~= linkSpeed() is that the requested
>> node either has it in its datastore, OR has bandwidth to peers higher
>> than yours, OR has at least one good peer with a clear low-latency
>> queue.
>>
>> What's more, if a node was sufficiently busy for an attacker to be
>> able to measure its latency difference between datastore and remote
>> requests, then surely it would also be busy enough to have multiple
>> low-latency requests in flight, confounding that measurement (and
>> making datastore requests look remote, I might add).
>>
>> Or if a node was sufficiently idle for an attacker to be able to
>> "feel out" requests to various URIs, adding a small random latency
>> (the average latency of a link to a peer) would both easily confound
>> that and might buy our node props for getting requests ahead of
>> schedule.
>
> Having to wait for 3 probes to come back is several round-trips. This
> is bad. Apart from that, the privacy considerations are serious: one
> classic attack is to request a key, and then kill the node that
> served you it. It's not viable now because in requesting the key you
> propagate the content you are trying to censor. Your proposal removes
> this property.

Surely the added latency of 3 round trips at the high (request)
priority would not be bad, and they wouldn't be full round trips
anyway, because as the paths converge on the network the probes would
be coalesced rather than repeated.
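
As a minimal sketch of the coalescing idea (the class and method names
here are purely illustrative, not Freenet's actual code): converging
paths for the same key can share one in-flight probe instead of
repeating it.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: coalesce probes for the same key so converging
// paths share one in-flight probe instead of repeating it.
public class ProbeCoalescer {
    private final ConcurrentHashMap<String, CompletableFuture<Long>> inFlight =
            new ConcurrentHashMap<>();

    // Returns the existing future if a probe for this key is already running;
    // otherwise starts (here: just registers) a new one.
    CompletableFuture<Long> probe(String key) {
        return inFlight.computeIfAbsent(key, k -> {
            CompletableFuture<Long> f = new CompletableFuture<>();
            // ...send the real probe here; complete f with the latency estimate...
            return f;
        });
    }

    public static void main(String[] args) {
        ProbeCoalescer c = new ProbeCoalescer();
        CompletableFuture<Long> a = c.probe("CHK@example");
        CompletableFuture<Long> b = c.probe("CHK@example");
        System.out.println(a == b);  // second caller reuses the in-flight probe
    }
}
```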

One simple workaround to restore the security property you mention
would be to translate some number of pre-requests into actual requests
(perhaps when the latency is low enough, or just a fixed percentage).
Although, I'm not sure I fully understand the attack you describe,
because a pre-request coming from a node would only indicate that the
data 'could' be fetched through that node...
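
Something like this is what I have in mind (names and thresholds are
just assumptions for illustration): promote a pre-request when its
latency estimate is low, or otherwise with some fixed probability.

```java
import java.util.Random;

// Hypothetical sketch: decide whether a pre-request should be promoted
// into a full request. PROMOTE_FRACTION and LATENCY_THRESHOLD_MS are
// made-up tuning parameters, not anything in Freenet today.
public class PreRequestPromotion {
    static final double PROMOTE_FRACTION = 0.25;  // promote ~25% at random
    static final long LATENCY_THRESHOLD_MS = 50;  // or always, when latency is very low

    static boolean shouldPromote(long estimatedLatencyMs, Random rng) {
        if (estimatedLatencyMs < LATENCY_THRESHOLD_MS) return true;
        return rng.nextDouble() < PROMOTE_FRACTION;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        // Low-latency estimates are always promoted:
        System.out.println(shouldPromote(10, rng));
        // High-latency estimates are promoted about a quarter of the time:
        int promoted = 0;
        for (int i = 0; i < 10000; i++)
            if (shouldPromote(200, rng)) promoted++;
        System.out.println(promoted > 2000 && promoted < 3000);
    }
}
```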

One extra consideration is the time it takes to get the low-latency
request itself back (as opposed to just the latency value) for
datastore'd requests. A security delay would have to be added there
too, and it would only negligibly affect the overall latency, because
it would only be seen at the end of the chain (at the datastore).
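
For concreteness, the delay could be jittered around the average peer
link latency so a datastore hit looks like a one-hop remote fetch (the
method name and the uniform [0.5x, 1.5x] jitter window are assumptions
on my part, not a worked-out design):

```java
import java.util.Random;

// Hypothetical sketch: pad a datastore hit with a random delay comparable
// to one average peer round-trip, so a local hit is hard to distinguish
// from a one-hop remote fetch.
public class DatastoreDelay {
    // Uniform jitter in [0.5 * avg, 1.5 * avg], so the mean delay stays
    // at avgPeerLatencyMs.
    static long securityDelayMs(long avgPeerLatencyMs, Random rng) {
        return avgPeerLatencyMs / 2 + (long) (rng.nextDouble() * avgPeerLatencyMs);
    }

    public static void main(String[] args) {
        Random rng = new Random(7);
        long avg = 100;
        for (int i = 0; i < 5; i++) {
            long d = securityDelayMs(avg, rng);
            System.out.println(d >= 50 && d <= 150);
        }
    }
}
```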

--
Robert Hailey

_______________________________________________
Devl mailing list
[email protected]
http://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl
