> 4 billion is zero work nowadays.  A few hours on a cluster.
> Matching filesize is trivial: compute all but the last 160 bits
> of the file you are hashing, and save the state.  Now generate
> sequential 160-bit patterns for the last block, preloaded with
> the original state.
Yes, this is exactly how you would do this attack: pick the
largest acceptable size, then just change the last block and keep
the ones that are "close enough".
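
A minimal sketch of that loop, assuming SHA-1 as the 160-bit hash
and a made-up "shared leading bits" test for "close enough" (the
threshold, the helper names, and the 32k payload size are just
illustration, not anything the network actually specifies):

import hashlib
import os
import struct

def leading_matching_bits(a: bytes, b: bytes) -> int:
    """Count how many leading bits two digests have in common."""
    bits = 0
    for x, y in zip(a, b):
        diff = x ^ y
        if diff == 0:
            bits += 8
        else:
            bits += 8 - diff.bit_length()
            break
    return bits

def find_near_misses(prefix: bytes, target_key: bytes,
                     threshold_bits: int = 16, tries: int = 100_000):
    """Hash the fixed prefix once, then reuse the saved state while
    varying only a 160-bit trailer, keeping the near-misses."""
    saved = hashlib.sha1(prefix)        # all but the last block, hashed once
    hits = []
    for i in range(tries):
        h = saved.copy()                # reload the saved state
        h.update(struct.pack(">Q", i).rjust(20, b"\x00"))  # sequential 160-bit pattern
        digest = h.digest()
        if leading_matching_bits(digest, target_key) >= threshold_bits:
            hits.append((i, digest))
    return hits

# Junk payload padded to the largest acceptable size, aimed near some key.
hits = find_near_misses(os.urandom(32 * 1024 - 20), os.urandom(20))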

2^32 may be a high estimate.  Think of it like this: if each of
the N nodes has on average m specializations, and these overlap
with a redundancy r, then each of these specializations covers
about r/(m*N) of the keyspace.  This means you have to do about
N*m/r hashes to get one close enough to land in the
specialization you're attacking.  I'll try to pull some realistic
numbers out of my butt:
r = 10: that seems fair, gives a bit of redundancy
m = 10: this may be about how many tight bands neighbors know
about, just a guess
N = 10,000
N*m/r = 10,000 hashes have to be done to get one close enough.
Let's say an adversary can do 100,000 160-bit hashes per second
on a machine.  He can come up with 10 of these a second.  If the
max size of the data items is 32k (not sure about this number),
then he can generate 320kB/s of junk data, and that will do the
job!  Keep in mind that after a couple of days you can recycle
the old junk data.
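
The same back-of-the-envelope arithmetic, with the same guessed
numbers, in a few lines of Python:

r = 10                 # redundancy: specializations covering each region
m = 10                 # specializations per node
N = 10_000             # nodes
hash_rate = 100_000    # hashes per second on one attacker machine (guess)
max_item = 32 * 1024   # max data item size in bytes (guess)

hashes_per_hit = N * m // r                 # ~10,000 hashes per usable junk key
hits_per_sec = hash_rate / hashes_per_hit   # ~10 junk items per second
junk_rate = hits_per_sec * max_item / 1024  # ~320 kB/s of junk inserts
print(hashes_per_hit, hits_per_sec, junk_rate)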

If you double the size of the network, but let the
adversary double his resources too, he can still
generate the same amount of junk data.  He just gets
twice as many points to insert from.
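
Plugging the doubling into the same formula, assuming the
attacker's hash rate scales with the network:

r, m = 10, 10
for N, hash_rate in [(10_000, 100_000), (20_000, 200_000)]:
    hits_per_sec = hash_rate / (N * m / r)
    print(N, hits_per_sec)    # 10.0 junk items per second in both cases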

 
> Also, the size isn't part of the routing decision, right?
> Otherwise you'd get nodes that ONLY dealt in 1mb keys and their
> bandwidth would suck ass.  So you only need near-misses on the
> hash.
I'm pretty sure they just hash the size in with everything else.
It can't hurt, but I'm not sure it's necessary.
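
For illustration only, "hash the size in with everything else"
could look something like this; it's a guess at the idea, not the
actual key derivation:

import hashlib
import struct

def routing_key(data: bytes) -> bytes:
    h = hashlib.sha1()
    h.update(struct.pack(">Q", len(data)))   # mix the length into the key
    h.update(data)
    return h.digest()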

> Again, trivial.  The question is still 'what happens next?'.
> If you're _VERY_ careful not to request the document you want,
> it's possible to kill any node likely to carry it.  Caveat: Any
> node that's not specialized in that data-segment that has
> cached it will still have the original document.  This is why
> imperfect routing is a good thing.
If the system can't route to it at all, it'll die of old age.
There is, however, an in-between where not enough junk hits a
node to DoS it, and some valid queries will still get answered.

I wrote a REALLY informal paper a while back that concluded that
as a DoS attack grew to a large size A, the chance of a rigid
network completing one insert/retrieve looked like k/A.  Also, if
you store something under R hashes, and the requester requests at
all R hashes, the attacker has to pay O(R^2) to keep the
probability of denial constant.  So if you had something
important, you could insert multiple copies, or make a split file
with high redundancy, and get great returns.
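
One toy model that reproduces the O(R^2) scaling, assuming
per-key success looks like k/a when effort a is aimed at that key
and the attacker has to split his total effort A across the R
keys (k and the target denial probability here are arbitrary):

k = 100.0
target_denial = 0.5

def denial_prob(A: float, R: int) -> float:
    per_key_success = min(1.0, k / (A / R))  # each key only sees A/R of the attack
    return (1.0 - per_key_success) ** R      # denied only if every key fails

def effort_needed(R: int) -> float:
    """Binary-search the attack size A that keeps denial at the target."""
    lo, hi = 1.0, 1e12
    for _ in range(200):
        mid = (lo + hi) / 2
        if denial_prob(mid, R) < target_denial:
            lo = mid                         # attack too weak, needs more effort
        else:
            hi = mid
    return hi

for R in (1, 2, 4, 8, 16):
    A = effort_needed(R)
    print(R, round(A), round(A / R**2, 1))   # A/R^2 approaches a constant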

It's really informal, but if anyone wants to take a look:
http://de.geocities.com/amichrisde/p1.html
