On Thu, Oct 30, 2003 at 03:20:21PM -0600, Tom Kaitchuck wrote:
> On Thursday 30 October 2003 01:46 pm, Toad wrote:
> > On Thu, Oct 30, 2003 at 01:15:23PM -0600, Tom Kaitchuck wrote:
> > > Why not? For CHK:
> > > [EMAIL PROTECTED],<hash>,<decrypt key>
> > > where <decrypt key> decrypts
> > > and H(hash) routes
> > > and H(hash+XXX) verifies.
> > > All you have to send is hash and XXX.
> > > For SSK:
> > > [EMAIL PROTECTED],<key>,<name>
> > > where <key> decrypts
> > > and H(H(key+name)) routes
> > > and H(H(key+name)+XXX) verifies.
> > > All you have to send is H(key+name) and XXX.
> > >
> > > Why wouldn't this work?
> >
> > Because if XXX is common, the attacker only needs to compute it once.
> 
> The attacker needs to compute it once PER KEY! This means they can use the 
> same key over and over, but the failure table should prevent that from doing 
> too much damage. My main concern is that they would be able to accumulate 
> large numbers of keys over time, use them all at once, then wait for the 
> failure table to expire and repeat the attack.

Not for SSKs. XXX is constant! He'd only have to brute force the hash
for SSKs.
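[Editorial note: the key scheme being debated above can be sketched as follows. This is a minimal illustration of the proposal as written in the thread, not Freenet's actual implementation; SHA-256 is a stand-in, since the thread leaves the hash function unspecified.]

```python
import hashlib

def H(data: bytes) -> bytes:
    """Stand-in hash (SHA-256); the thread does not name the actual hash."""
    return hashlib.sha256(data).digest()

# CHK as proposed: the requester sends <hash> and XXX; <decrypt key>
# stays in the URI and never goes on the wire.
def chk_routing_key(content_hash: bytes) -> bytes:
    return H(content_hash)              # H(hash) routes

def chk_verifier(content_hash: bytes, xxx: bytes) -> bytes:
    return H(content_hash + xxx)        # H(hash+XXX) verifies

# SSK as proposed: the requester sends H(key+name) and XXX; <key>
# stays in the URI.
def ssk_routing_key(key: bytes, name: bytes) -> bytes:
    return H(H(key + name))             # H(H(key+name)) routes

def ssk_verifier(key: bytes, name: bytes, xxx: bytes) -> bytes:
    return H(H(key + name) + xxx)       # H(H(key+name)+XXX) verifies
```

Note that for an SSK, H(key+name) is fixed once the key and name are chosen, which is exactly Toad's objection: the attacker brute-forces XXX once per routing key, not once per request.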
> 
> The amount of CPU time it would take to pull off the attack is:
> (time it takes to generate one key) * 2^(number of bits they are trying to 
> brute force) * (the RPH processed by the area they are attacking) * (failure 
> table time in hours)
> or, to attack a single node:
> (time to generate a key) * (size of network) * (average RPH) * (failure table time)
> or 
> (time to generate a key) * (size of network) * (failure table max size), 
> whichever is less. (So the attack is still possible; it is just harder.)
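[Editorial note: the two single-node cost bounds above can be written out directly. All inputs are hypothetical; the point is only that the attacker pays whichever bound fills the failure table first.]

```python
def attack_cost_hours(key_gen_seconds: float,
                      network_size: int,
                      avg_rph: float,
                      failure_table_hours: float,
                      failure_table_max_size: int) -> float:
    """CPU hours to attack a single node, per the two bounds in the thread:
    the rate bound (keys needed to saturate the node's request rate for the
    failure-table lifetime) vs. the size bound (keys needed to fill the
    failure table outright). The attacker pays the cheaper of the two."""
    rate_bound = key_gen_seconds * network_size * avg_rph * failure_table_hours
    size_bound = key_gen_seconds * network_size * failure_table_max_size
    return min(rate_bound, size_bound) / 3600.0

# Illustrative numbers only: 1 s per key, 1000 nodes, 100 requests/hour,
# 24-hour failure table capped at 10,000 entries.
print(attack_cost_hours(1.0, 1000, 100.0, 24.0, 10000))
```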
> 
> So over time we can keep the time to generate a key constant with hardware, 
> which in turn will enable nodes to process more RPH, and the size of the 
> network will grow. However, it would still be good to make generating the keys 
> less common, because then we can increase the time to generate a key and 
> dramatically increase the failure table time/size.
> 
> > > > A further problem is that you need to be able to increase the required
> > > > amount of hashcash over time, as machines become able to do more
> > > > hashcash - this is NOT good for URLs!
> > >
> > > Well, at least it would not affect routing. If at first we allocate 3
> > > characters to XXX, that means we can make them do 2^18 hashes. If that
> > > is not enough, go to 10 characters; that's 2^60 hashes. Even if the
> > > average computer doubles in speed every 18 months, we should only need
> > > to add one character to the URI every 9 years.
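[Editorial note: the arithmetic above checks out if XXX uses a base64-style alphabet, i.e. 6 bits per character — an assumption, but one that matches the 2^18 figure for 3 characters.]

```python
# Assumed: 6 bits of difficulty per character (base64-style alphabet),
# and CPU speed doubling every 18 months (the thread's figure).
BITS_PER_CHAR = 6
MONTHS_PER_DOUBLING = 18

# Each speed doubling erodes one bit of difficulty, so one extra character
# buys BITS_PER_CHAR doublings' worth of headroom.
years_per_extra_char = BITS_PER_CHAR * MONTHS_PER_DOUBLING / 12
print(years_per_extra_char)  # 9.0

# 3 characters -> 2^18 hashes; 10 characters -> 2^60 hashes.
print(2 ** (3 * BITS_PER_CHAR) == 2 ** 18, 2 ** (10 * BITS_PER_CHAR) == 2 ** 60)
```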
> >
> > The problem is we need to keep the time to compute it reasonable within
> > current technology, while not letting it get so low as to be useless.
> > That means we need to update it regularly. Hmm, maybe we don't
> > have to route by XXX though, yeah. We could route by the existing key
> > and just verify XXX, which is derived from the routing key amongst other
> > things? Interesting.
> 
> Yes, routing always works by H(hash) or H(hash+key), so in the future you can 
> increase the minimum size of XXX to require stronger verification without 
> affecting routing at all. Additionally, the old version would be forward 
> compatible and accept the new version's verification; you just have to decide 
> when to update the minimum and stop accepting the previous number of bits.
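[Editorial note: the upgrade path described above can be sketched as follows. The thread never specifies how the work in XXX is measured; leading zero bits of the digest — the usual hashcash measure — is used here purely as an illustrative assumption.]

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leading_zero_bits(digest: bytes) -> int:
    """Count leading zero bits of a digest (the usual hashcash work measure)."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()  # zeros within the first nonzero byte
        break
    return bits

def verify(routing_key: bytes, xxx: bytes, min_bits: int) -> bool:
    """Routing ignores XXX entirely; verification checks that H(routing_key+XXX)
    shows at least min_bits of work. Raising min_bits later rejects old, weaker
    XXX values without changing where the key routes."""
    return leading_zero_bits(H(routing_key + xxx)) >= min_bits
```

Because `min_bits` appears only on the verification side, old nodes with a lower minimum still accept new, stronger XXX values, which is the forward compatibility the paragraph above describes.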
> 
> _______________________________________________
> Devl mailing list
> [EMAIL PROTECTED]
> http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl

-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.
