On Jan 17, 2009, at 9:19 AM, Matthew Toseland wrote:
> On Friday 16 January 2009 23:11, Robert Hailey wrote:
>> On Jan 16, 2009, at 4:31 PM, Matthew Toseland wrote:
>>> [...]
>> Surely the added latency for 3 round trips at the high/request
>> priority would not be bad, and they wouldn't be full round trips
>> anyway, because as the paths converge on the network they would be
>> coalesced and not repeated.
> They could be, but if they all run in parallel, coalescing is a
> problem (think loops). But I don't think this is going to do good
> things for routing. Medium term we will have our peers' bloom filters
> and will hopefully just fetch from one hop.
There would have to be logic to handle pre-requests coming from a
different node for the same request-id, but this almost exactly
mirrors the CHK handler and the recently-failed table. If all the
requests carry the same request-id, handling the parallel requests is
very straightforward. To me the more interesting question is "who do
we ask?" (the three closest nodes? two close, one far?).
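
Roughly the sort of thing I am picturing for that logic (a sketch
only; none of these class or method names are real Freenet code):

import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: coalesce parallel pre-requests by request-id, much as the CHK
// handler deduplicates. A pre-request whose id we have already seen is
// not forwarded again (which is what breaks the loops); we just record
// the extra origin so the answer can be fanned back out.
public class PreRequestCoalescer {

    // request-id -> peers we have seen this pre-request from
    private final Map<Long, Set<String>> origins = new ConcurrentHashMap<>();

    /** @return true if this id is new and should be routed onward. */
    public boolean shouldForward(long requestId, String fromPeer) {
        Set<String> fresh = ConcurrentHashMap.newKeySet();
        Set<String> prior = origins.putIfAbsent(requestId, fresh);
        Set<String> set = (prior != null) ? prior : fresh;
        set.add(fromPeer); // the reply goes back to every recorded origin
        return prior == null; // only the first arrival gets forwarded
    }

    /** On completion or timeout: who gets the answer; forget the id. */
    public Set<String> finish(long requestId) {
        return origins.remove(requestId);
    }
}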
With only the current transfer mechanisms, getting the data in one hop
does not help latency (it is transferred just like the other requests,
from bunches of other nodes). It may arrive over the slowest link. At
best it reduces latency to the average CHK transfer time, no?
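
To put made-up but plausible numbers on that:

  routing:   10 hops x 50 ms each               =  0.5 s
  transfer:  32 KiB CHK over a 20 KiB/s
             bottleneck link                    =  1.6 s
  full-route request                            ~  2.1 s
  one-hop fetch (same bottleneck transfer)      ~  1.6 s

The one-hop fetch only strips out the routing share; the transfer time
over the slowest link is untouched.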
>> One simple workaround to re-add the security property you mention
>> would be to translate some number of pre-requests into actual
>> requests (perhaps if the latency is low enough, or just a
>> percentage).
>>
>> Although, I'm not sure I totally understand the attack you mention,
>> because a pre-request coming from a node would only indicate that
>> the data 'could' be fetched through that node...
> I don't see why that would help. If we know that a node has the data,
> or is close to the data (which is closely related), we can still kill
> it. Granted, this is most powerful on opennet, where with path folding
> we will often have the opportunity to connect to the original data
> source...
You're right, it does not help nearly as much as the present system;
but (if for some reason we wanted to) we could even have an extremely-
low-priority queue which requests every pre-request we have ever
received, one at a time. I only meant to demonstrate that the
mechanism can be conserved, and at the cost of throughput (instead of
latency) if we desire.
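
All I am picturing there is something like this (sketch; every name
here is invented):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the extremely-low-priority queue described above: every
// pre-request ever received is eventually turned into a real request,
// strictly one at a time, paying in throughput rather than latency.
public class BackgroundPreRequestFetcher implements Runnable {

    private final BlockingQueue<Long> pending = new LinkedBlockingQueue<>();

    public void enqueue(long requestId) {
        pending.offer(requestId);
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                long id = pending.take();  // blocks until work arrives
                fetchAtLowestPriority(id); // one at a time, in order
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    private void fetchAtLowestPriority(long requestId) {
        // Placeholder: issue a normal request for the key behind this
        // id, at a priority below every user-visible request class.
    }
}

One background thread draining that queue is the whole mechanism.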
>> One extra consideration is the time it takes to get the low-latency
>> request back (as opposed to just the latency value) for datastore'd
>> requests. A security delay would have to be added there too, and it
>> would only negligibly affect the overall latency, because it would
>> only be seen at the end of the chain (at the datastore).
> No. We have considered this. Any delays we add, while they may add
> some uncertainty for a single request, will be known by the attacker,
> and timing attacks are still viable statistically speaking. Really,
> deniability on what is in your datastore doesn't work - it's a myth,
> timing attacks are just too easy.
I disagree (though I have not participated in, nor read, those
discussions); the purpose would be to simulate even one link that the
attacker does not know about. Unless the attacker is powerful enough
to prove that no such connection exists, surely intelligently placed
delays are worthwhile?
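
Concretely (again a sketch, with invented names and parameters):
before answering out of the local datastore, wait as though the block
had travelled one more link:

import java.util.Random;

// Sketch: delay a local datastore hit by a sample drawn from roughly
// the same distribution as a real single-hop fetch over one of our
// links, so a store hit looks (to a single observation) like a fetch
// through one more peer.
public class DatastoreReplyDelayer {

    private final Random random = new Random();
    private final double meanHopMillis;  // should track live link stats
    private final double stddevMillis;

    public DatastoreReplyDelayer(double meanHopMillis, double stddevMillis) {
        this.meanHopMillis = meanHopMillis;
        this.stddevMillis = stddevMillis;
    }

    public void delayAsIfOneMoreHop() throws InterruptedException {
        double d = meanHopMillis + random.nextGaussian() * stddevMillis;
        Thread.sleep(Math.max(0L, (long) d));
    }
}

The mean and deviation would have to be measured from our live links
rather than fixed constants, or the delay is trivially subtracted out.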
> Hence it is essential that we separate the contents of your datastore
> from your actual requests (which we will do in 0.9). However, the
> property that finding where data is necessarily propagates it is very
> useful from the point of view of resisting censorship. Which does open
> the question of whether bloom filters are a good idea...
That's a different topic... and to me routing by bloom filters looks a
lot like NG-routing.
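
Either way, the per-peer membership test itself is trivial; a sketch
(toy hashing, invented names):

import java.util.BitSet;

// Sketch of the "fetch from one hop" idea: each peer shares a bloom
// filter over its datastore keys, and before routing by location we
// check whether a directly connected peer probably has the block.
public class PeerStoreFilter {

    private final BitSet bits;
    private final int numBits;
    private final int numHashes;

    public PeerStoreFilter(int numBits, int numHashes) {
        this.bits = new BitSet(numBits);
        this.numBits = numBits;
        this.numHashes = numHashes;
    }

    public void add(byte[] key) {
        for (int i = 0; i < numHashes; i++)
            bits.set(indexFor(key, i));
    }

    /** True means "probably present"; false means definitely absent. */
    public boolean mightContain(byte[] key) {
        for (int i = 0; i < numHashes; i++)
            if (!bits.get(indexFor(key, i)))
                return false;
        return true;
    }

    private int indexFor(byte[] key, int round) {
        int h = round * 31;
        for (byte b : key)
            h = h * 31 + b;              // toy mixing; real code would
        return Math.floorMod(h, numBits); // hash the CHK properly
    }
}

The interesting part is what routing does with a "probably present"
answer, which is presumably where it starts to look like NG-routing.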
-
Back to the best-of-both-worlds (BOBW)... I think that we already
agree that a priority transfer system is needed. What is your plan for
how to accept realtime requests?
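
For the sender side I would naively picture something like this
(strictly a sketch; invented names, not a proposal for actual
classes):

import java.util.concurrent.PriorityBlockingQueue;

// Sketch: queued block transfers carry a priority class, and the
// sender always services realtime transfers before bulk ones,
// oldest-first within a class.
public class PriorityTransferQueue {

    public enum TransferClass { REALTIME, BULK }

    public static final class Transfer implements Comparable<Transfer> {
        final TransferClass klass;
        final long enqueuedAt = System.nanoTime();

        Transfer(TransferClass klass) { this.klass = klass; }

        @Override
        public int compareTo(Transfer other) {
            int byClass = klass.compareTo(other.klass); // REALTIME first
            if (byClass != 0) return byClass;
            return Long.compare(enqueuedAt, other.enqueuedAt); // then FIFO
        }
    }

    private final PriorityBlockingQueue<Transfer> queue =
            new PriorityBlockingQueue<>();

    public void submit(Transfer t) { queue.put(t); }

    public Transfer nextToSend() throws InterruptedException {
        return queue.take(); // highest priority, oldest first
    }
}

The open question is the admission side: how many realtime transfers
can we accept before bulk traffic starves?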
--
Robert Hailey