On Friday 16 January 2009 23:11, Robert Hailey wrote:
> 
> On Jan 16, 2009, at 4:31 PM, Matthew Toseland wrote:
> 
> > On Friday 16 January 2009 21:18, Robert Hailey wrote:
> >> But in thinking about it... it actually makes it more secure in that
> >> sense, because the remainder of the request would be 'streamed'
> >> through several nodes as usual, so the best an attacker could
> >> determine by a latencyEstimate()~=linkSpeed() is that the requested
> > >> node either has it in its datastore OR has bandwidth to peers
> >> higher than yours OR has at least one good peer with a clear low-
> >> latency queue.
> >>
> > >> What's more, if a node was sufficiently busy for an attacker to be
> > >> able to measure its latency estimate between datastore and remote
> > >> requests, then surely it would also be busy enough to have multiple
> > >> low-latency requests in flight, confounding that measurement (making
> > >> datastore requests look remote, I might add).
> >>
> > >> Or if a node was sufficiently idle for an attacker to be able to
> > >> "feel out" requests to various URIs, adding a small random latency
> > >> (on the order of an average link to a peer) would both easily confound
> > >> that and might buy our node props for getting requests ahead of
> > >> schedule.
> >
> > Having to wait for 3 probes to come back is several round-trips. This is
> > bad. Apart from that, the privacy considerations are serious: one classic
> > attack is to request a key, and then kill the node that served it to you.
> > It's not viable now because in requesting the key you propagate the
> > content you are trying to censor. Your proposal removes this property.
> 
> Surely the added latency of 3 round trips at the high/request priority
> would not be bad, and they wouldn't be full round trips anyway, because
> as the paths converge on the network they would be coalesced and not
> repeated.

They could be, but if they all run in parallel coalescing is a problem (think 
loops). But I don't think this is going to do good things for routing. Medium 
term we will have our peers' bloom filters and will hopefully just fetch from 
one hop.
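To illustrate the bloom-filter idea: each peer would advertise a compact filter summarising its datastore keys, and we would fetch directly from a peer whose filter claims the key. A minimal Python sketch, with made-up filter parameters and key strings (not Freenet's actual formats or sizes):

```python
import hashlib

class BloomFilter:
    """Probabilistic set membership: no false negatives,
    a small chance of false positives."""

    def __init__(self, m_bits=1024, k_hashes=3):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, key: bytes):
        # Derive k bit positions from one SHA-256 digest of the key.
        digest = hashlib.sha256(key).digest()
        for i in range(self.k):
            yield int.from_bytes(digest[4 * i:4 * i + 4], 'big') % self.m

    def add(self, key: bytes):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, key: bytes):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))

# A peer's advertised filter; a request is forwarded only to a peer
# whose filter matches, turning a multi-hop search into (usually) a
# single-hop fetch at the cost of occasional false positives.
peer_filter = BloomFilter()
for chk in [b'CHK@abc', b'CHK@def', b'CHK@ghi']:
    peer_filter.add(chk)

print(b'CHK@abc' in peer_filter)  # True: stored keys always match
```

The interesting trade-off is exactly the one raised below: a filter tells you where data *is* without the lookup propagating the data.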
> 
> One simple workaround to re-add the security property you mention would
> be to translate some number of pre-requests into actual requests
> (perhaps if the latency is low enough, or just a percentage). Although
> I'm not sure I totally understand the attack you mention, because a
> pre-request coming from a node would only indicate that the data 'could'
> be fetched through that node...

I don't see why that would help. If we know that a node has the data, or is 
close to the data (which is closely related), we can still kill it. Granted 
this is most powerful on opennet, where with path folding we will often have 
the opportunity to connect to the original data source...
> 
> One extra consideration is the time it takes to get the low-latency
> request back (as opposed to just the latency value) for datastore'd
> requests. A security delay would have to be added there too, and would
> only negligibly affect the overall latency, because it would only be
> seen at the end of the chain (at the datastore).

No. We have considered this. Any delays we add, while they may add some 
uncertainty for a single request, will be known by the attacker, and timing 
attacks are still viable statistically speaking. Really, deniability on what 
is in your datastore doesn't work - it's a myth, timing attacks are just too 
easy. Hence it is essential that we separate the contents of your datastore 
from your actual requests (which we will do in 0.9). However, the property 
that finding where data is necessarily propagates it is very useful from the 
point of view of resisting censorship. Which does open the question of 
whether bloom filters are a good idea ...


_______________________________________________
Devl mailing list
[email protected]
http://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl
