On Wed, Jun 23, 2010 at 5:43 PM, Matthew Toseland
<t...@amphibian.dyndns.org> wrote:
> On Wednesday 23 June 2010 20:33:50 Sich wrote:
>> On 23/06/2010 21:01, Matthew Toseland wrote:
>> > Insert a random, safe key
>> > This is much safer than the first option, but the key will be different 
>> > every time you or somebody else inserts the key. Use this if you are the 
>> > original source of some sensitive data.
>> >
>> >
>> Very interesting for filesharing if we split the file.
>> When some chunks are lost, you only have to reinsert the ones that
>> are lost... But then we use much more datastore... But it's more
>> secure... Isn't losing datastore space a big problem?
>
> If some people use the new key and some use the old, then it's a problem. If 
> everyone uses one or the other, it isn't. I guess this is another reason to 
> use par files etc. (ugh).
>
> The next round of major changes (probably in 1255) will introduce 
> cross-segment redundancy, which should improve the reliability of really big 
> files.
>
> Long term we may have selective reinsert support, but of course that would be 
> nearly as unsafe as reinserting the whole file to the same key ...
>
> If you're building a reinsert-on-demand based filesharing system let me know 
> if you need any specific functionality...

The obvious intermediate is to reinsert a small portion of a file.
The normal case is (and will continue to be) that when a file becomes
unretrievable, it's because one or more segments are only a couple of
blocks short of being retrievable.  If you reinsert, say, 8 blocks out
of each segment (1/32 of the file), you'll be reinserting on average 4
unretrievable blocks from each segment.  That should be enough in a
lot of cases.  This is probably better than selective reinsert (the
attacker doesn't get to choose as easily which blocks you reinsert),
though it does mean reinserting more blocks (8 per segment when merely
reinserting the correct 3 blocks might suffice).
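The "on average 4" figure falls out of the hypergeometric mean. A minimal sketch, assuming the usual 128 data + 128 check = 256 blocks per segment (the exact segment size is an assumption here, not taken from this thread) and a just-unretrievable segment with about half its blocks missing:

```python
from fractions import Fraction

def expected_missing_reinserted(segment_size, missing, sample):
    """Expected number of unretrievable blocks hit when reinserting
    `sample` blocks chosen uniformly at random from a segment with
    `missing` unretrievable blocks: the hypergeometric mean,
    sample * missing / segment_size."""
    return Fraction(sample * missing, segment_size)

# 256-block segment, ~128 blocks missing, reinsert 8 random blocks:
print(expected_missing_reinserted(256, 128, 8))  # -> 4
```

So a blind 1/32 reinsert repairs about 4 missing blocks per segment, without the inserter (or an observer) learning or choosing which blocks were the critical ones.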

The simple defense against a mobile opennet attacker that has been
proposed before would be particularly well suited to partial
randomized reinserts.  The insert carries a timestamp (randomized per
block to some time a bit before the reinsert started), and is only
routed along connections that were established before that time, until
it reaches some relatively low HTL (10?).  This prevents the attacker
from moving during the insert.  On a large file that takes a long time
to insert, this is problematic, because there aren't enough
connections that are old enough to route along.  For a partial
reinsert, this is less of a concern, simply because it doesn't take as
long.
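The connection-age filter above can be sketched as follows. This is an illustration only, not Freenet's routing code: the function and parameter names (`eligible_peers`, `jitter_max`) are hypothetical, and timestamps are plain epoch seconds.

```python
import random

def eligible_peers(peers, insert_start, jitter_max=600.0):
    """Return peers this block may be routed along.

    Each block gets its own randomized cutoff, shortly before the
    reinsert started, and only connections established before that
    cutoff qualify -- so an attacker who connects after seeing the
    insert begin can never be on the routing path.

    peers: list of (peer_id, established_at) tuples (epoch seconds).
    """
    cutoff = insert_start - random.uniform(0.0, jitter_max)
    return [pid for pid, established in peers if established < cutoff]

# A connection opened after the insert started is never used:
peers = [("old", 100.0), ("new", 10001.0)]
print(eligible_peers(peers, insert_start=10000.0))  # -> ['old']
```

Per-block randomization of the cutoff matters: a single fixed cutoff for the whole file would itself leak when the reinsert began.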

Evan Daniel
_______________________________________________
Devl mailing list
Devl@freenetproject.org
http://freenetproject.org/cgi-bin/mailman/listinfo/devl
