On Fri, Oct 31, 2003 at 12:43:51AM +0100, Some Guy wrote:
>  --- Tom Kaitchuck <[EMAIL PROTECTED]> wrote: 
> > On Thursday 30 October 2003 03:20 pm, Tom Kaitchuck wrote:
> > > On Thursday 30 October 2003 01:46 pm, Toad wrote:
> > > > On Thu, Oct 30, 2003 at 01:15:23PM -0600, Tom Kaitchuck wrote:
> > > > > Why not? For CHK:
> > > > > CHK@<XXX>,<hash>,<decrypt key>
> > > > > where <decrypt key> decrypts
> > > > > and H(hash) routes
> > > > > and H(hash+XXX) verifies.
> > > > > All you have to send is hash and XXX.
> > > > > For SSK:
> > > > > SSK@<XXX>,<key>,<name>
> > > > > where <key> decrypts
> > > > > and H(H(key+name)) routes
> > > > > and H(H(key+name)+XXX) verifies.
> > > > > All you have to send is H(key+name) and XXX.
> > > > >
> > > > > Why wouldn't this work?
> > > >
> > > > Because if XXX is common, the attacker only needs to compute it once.
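(For the record, here is roughly what I understand the check to be. Only a
sketch, in Java; SHA-256 and the leading-zero-bits target are my assumptions,
not part of the proposal.)

    import java.security.MessageDigest;

    // Sketch only. SHA-256 and the leading-zero-bits target are assumptions.
    class HashCashSketch {
        // For a CHK the node routes on H(hash); it never needs the decrypt key.
        static byte[] routingKey(byte[] hash) throws Exception {
            return MessageDigest.getInstance("SHA-256").digest(hash);
        }

        // Verification: H(hash + XXX) must start with `bits` zero bits.
        static boolean verifies(byte[] hash, byte[] xxx, int bits) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(hash);
            md.update(xxx);
            byte[] h = md.digest();
            for (int i = 0; i < bits; i++)
                if ((h[i / 8] & (0x80 >> (i % 8))) != 0)
                    return false;
            return true;
        }
    }

For an SSK the same code applies with H(key+name) in place of <hash>.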
> > 
> > Yeah, you're right, I'm an idiot. :-)
> No you're not.  If you or Toad or I came up with this, we rock.  The problems are fixable.
> 
> > But if you route based on the H(hash+XXX) then you can't keep upping XXX. :-(
> > And because we are limited to a fixed table size, they only need to generate that many keys. :-(
> > So in the end anyone with access to a hundred or so PCs could censor any content in Freenet within a couple of weeks. :-(
> > So does anyone have any other ideas?
>  
> I always thought of hashcash working like this.  You pick your route, and then you pay the toll.
> 
> What we've discovered today is:
> If you base the route only on the hashcash and the things the hashcash's validity is based on, the adversary is forced to pay first and then get the routing key.
> That's a big aha idea for me.
> 
> I hadn't thought of including the hashcash with the URL at all.  I thought we'd make both sides pay.  They use about the same bandwidth resources.  Then again, it seems kind of natural to make the inserter pay to store stuff.
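Paying is then just a brute-force search for an XXX that passes the check. A
minimal sketch, reusing verifies() from above; the 8-byte counter encoding is
arbitrary:

    import java.nio.ByteBuffer;

    // Sketch: the inserter (or requester) "pays" by searching for an XXX
    // such that H(hash + XXX) meets the difficulty target.
    static byte[] payToll(byte[] hash, int bits) throws Exception {
        for (long counter = 0; ; counter++) {
            byte[] xxx = ByteBuffer.allocate(8).putLong(counter).array();
            if (HashCashSketch.verifies(hash, xxx, bits))
                return xxx; // this XXX travels with the request (or in the URL)
        }
    }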
> 
> Good spotting with the SSK problem, Toad.  Here are some ways around the problem:
> 1) don't include the hashcash with the URL, make both sides do the work
> 2) route everything in the same subspace to the same place, limit update rate per subspace
> 3) make everyone use global links
> 4) make an insertion tool which makes the global links and does the hashcash once
> 4 is probably best.

What's a global link?
We need SSKs to be SSK@<opaque fixed>/<human readable> - for things like DBRs, for example.
> 
> ----Scaling up key sizes and what not
> 
> Ok, so CPU power will get cheaper.  If memory/diskspace gets cheaper in proportion, it won't matter.  Today 1 CPU-second is worth 32KB of space.  In 4 years, 0.2 CPU-seconds might be worth 32KB of space.
> 
> Ok, what about some node farm creating tons of junk data, won't he eventually have enough to do his DoS attack?  If you have some religious belief in Moore's Law (I don't), you could argue that no matter what, CPU time today will be worth a certain amount of memory for all eternity.  You just have to assume memory prices in the long haul decrease exponentially.

I pragmatically believe in Moore's law because it has worked for the
last 40 years.
> 
> Here's a fun idea: could we take the routing key and, before using it, run it through a special function which changes slowly over time, in a yet-to-be-determined fashion?  For example, in the future take some of the insignificant bits, multiply them by constants, and add that to the original hash.  The idea would be that objects wander around in hashspace just a little over different versions of Freenet.  In a week's time an object might only move a node or two, but in a few years it could be somewhere else completely.  This should make such an adversary's stockpile of ammo deteriorate, hopefully faster than he can generate it.

I don't see why this would be useful; it would probably weaken
specialization. The idea is not to use the hashcash in routing, just to
verify it on each node.
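Concretely, per hop it would be something like this (names are made up, not
real node code), continuing the sketch above. The hashcash never influences
where the data lives, so supplying a stronger XXX later doesn't move anything:

    // Hypothetical per-hop handling: check the attached hashcash against the
    // routing key, then route exactly as the node does today.
    static boolean acceptRequest(byte[] routingKey, byte[] xxx, int minBits)
            throws Exception {
        return HashCashSketch.verifies(routingKey, xxx, minBits);
        // false -> reject the query; true -> route on routingKey as usual
    }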
> 
> We could incrementally change hashcash formats.  If it wasn't in the URL, or if old URLs just force the user to calculate the hashcash himself, the system could slowly introduce newer, harder hashcashes and devalue the old ones.
> Make the 10-bit hashcashes standard and drop the 8-bit ones.
> Later make 11-bit standard and drop the 9-bit ones.
> I could try my request on all the accepted versions and, if I succeed, reinsert under the newest version.

Right. We would have a HashCashNotLongEnough message (or add another field
to QueryRejected), and a default and maximum hashcash difficulty level on
the node.
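Node-side it might look something like this, continuing the sketch above; the
numbers are only for illustration:

    // Hypothetical: DEFAULT_BITS is what this node currently demands,
    // MAX_BITS is the most it will ever demand, so clients know the worst case.
    static final int DEFAULT_BITS = 10;
    static final int MAX_BITS = 16;

    // Returns null to accept, or the difficulty to report back in a
    // HashCashNotLongEnough message (or a new field on QueryRejected).
    static Integer checkHashCash(byte[] routingKey, byte[] xxx) throws Exception {
        if (HashCashSketch.verifies(routingKey, xxx, DEFAULT_BITS))
            return null;
        return DEFAULT_BITS;
    }

A client getting that back recomputes XXX at the advertised difficulty and
retries; if a fetch only succeeds at an old difficulty, it can reinsert with a
fresh, longer hashcash.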
> 
> We might also think about using better hashcash to give extra priority to certain posts that should stay longer or go deeper, and requests which should go faster and farther.
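If we did that, the obvious rule is that zero bits beyond the required minimum
buy something. Purely illustrative, again continuing the sketch above:

    // Each surplus zero bit beyond the required minimum could buy, say, one
    // extra hop of HTL on a request, or a longer expected lifetime in the
    // datastore on an insert.
    static int surplusBits(byte[] routingKey, byte[] xxx, int minBits) throws Exception {
        int bits = minBits;
        while (bits < 64 && HashCashSketch.verifies(routingKey, xxx, bits + 1))
            bits++;
        return bits - minBits;
    }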
> 
> 
> Guys, this conversation rocked!!!  Sorry I always miss the good bits.  Now all we have to do is fix the cancer node problems and iron this stuff out.

-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.

