On Thu, 11 Sep 2003, Nick Tarleton wrote:

> On Thursday 11 September 2003 02:44 pm, Dan Merillat wrote:
> > Interesting.  That's basically zero work, especially if you decide to
> > "only" match 20 bits.
> 4 billion isn't zero work, keeping in mind that you have to match the 
> rounded-up log2 of the file size as well. 1 million is better but still may 
> have issues.

4 billion is zero work nowadays.  A few hours on a cluster.  Matching
filesize is trivial: hash all but the last 160 bits of the file and
save the hash state.  Now generate sequential 160-bit patterns for the
last block, each hashed starting from that saved state.
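
Rough sketch of what I mean (plain SHA-1 via Python's hashlib standing
in for however the node actually derives keys; MATCH_BITS, the suffix
layout and the function names are made up for illustration):

    import hashlib
    import itertools

    MATCH_BITS = 20      # "only" match the top 20 bits, as above
    SUFFIX_BYTES = 20    # the final 160-bit block that gets varied

    def top_bits(digest: bytes, n: int) -> int:
        # Treat the digest as a big-endian integer, keep its top n bits.
        return int.from_bytes(digest, "big") >> (len(digest) * 8 - n)

    def find_near_miss(prefix: bytes, target_digest: bytes) -> bytes:
        # Hash everything except the last block exactly once, then try
        # sequential 160-bit suffixes starting from that saved state.
        # Expected work is about 2**MATCH_BITS single-block hashes.
        want = top_bits(target_digest, MATCH_BITS)
        saved = hashlib.sha1(prefix)
        for i in itertools.count():
            suffix = i.to_bytes(SUFFIX_BYTES, "big")  # sequential patterns
            h = saved.copy()             # resume, don't rehash the prefix
            h.update(suffix)
            if top_bits(h.digest(), MATCH_BITS) == want:
                return suffix

saved.copy() is the "save the state" step; each candidate costs one
extra compression-function call, and at 20 bits you expect a hit after
roughly 2^20 tries.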

Also, the size isn't part of the routing decision, right?  Otherwise
you'd get nodes that ONLY dealt in 1MB keys and their bandwidth would
suck ass.  So you only need near-misses on the hash.
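
To make the "near-miss" point concrete, here's a toy closest-key
router (not Freenet's actual routing logic, just the idea; the node
specializations are invented):

    def route(key, specializations):
        # Toy routing: hand the request to whichever node specializes
        # closest to the requested key.
        return min(specializations,
                   key=lambda node: abs(specializations[node] - key))

    nodes = {"A": 0x10000, "B": 0x80000, "C": 0xF0000}  # made up
    target    = 0x80123   # key of the document you're after
    near_miss = 0x80456   # shares the top bits, different document
    assert route(target, nodes) == route(near_miss, nodes) == "B"

A request for the near-miss key lands on the same specialized node as
a request for the real thing, without ever naming the real thing.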

Again, trivial.  The question is still "what happens next?"
If you're _VERY_ careful not to request the document you want, it's
possible to kill any node likely to carry it.  Caveat: any node that
isn't specialized in that data segment but has cached the document will
still have the original.  This is why imperfect routing is a good thing.
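
One hypothetical way the probing could look, reusing top_bits and
MATCH_BITS from the sketch above (how you actually take out the node
once you've located it is a separate question, left open here):

    def probe_keys(target_digest, count):
        # Keys sharing the target's top MATCH_BITS bits, but never the
        # target itself, so the probes land on the same specialized
        # nodes without ever touching (and re-caching) the document.
        region = top_bits(target_digest, MATCH_BITS) << (160 - MATCH_BITS)
        keys = []
        for i in itertools.count():
            key = (region | i).to_bytes(20, "big")
            if key != target_digest:   # the one key we must never request
                keys.append(key)
            if len(keys) == count:
                return keys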

--Dan
