Suppose someone set out to sabotage Freenet under this scheme.

A malicious node could swallow anything it receives, letting other nodes
think the transfer went okay while sending the incoming data straight to
/dev/null.

Redundancy is necessary to stop such a thing.

Frank

By the way, as far as I can tell you ARE on the list :-)


----- Original Message -----
From: "Mark E. Shoulson" <[email protected]>
To: <devl@freenetproject.org>
Sent: Friday, June 08, 2001 5:58 PM
Subject: [freenet-devl] Secret Sharing


> OK, this is my first post to freenet-devl, and possibly my last, but
> it's an idea that just occurred to me and might be worth considering.
>
> Um, OK, first motivation, then.  The problem I'm looking at is the
> fact that freenet's redundancy eats space.  To store 1TB of stuff on
> freenet at, say, 6x redundancy (i.e. each file is cached at an average
> of 6 nodes--a fairly low level), you'll need 6TB of space.  That gets
> somewhat ridiculous as freenet grows.  I was trying to come up with a
> way to do freenet-ish stuff with secret sharing, like the Intermemory
> project uses (intermemory.org), and I think I have an idea that may be
> worth considering.
>
> First, a word on secret sharing.  In case you've never heard of it,
> the trick is to take a file of size N and break it up into n pieces of
> size N/m, such that any m of the pieces can reconstruct the file but
> m-1 pieces give you no information about it.  n > m, and n can be much
> bigger if so desired.  Look up how it's done; trust me, it's possible.
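The construction alluded to here can be sketched with Shamir's polynomial
scheme. One caveat: in Shamir's scheme each share is as large as the
secret, so the size-N/m pieces described above correspond to ramp /
information-dispersal variants, which trade away the strict m-1 secrecy
for space. A minimal, illustrative Python sketch (toy field size, not
Freenet code):

```python
# Minimal sketch of Shamir secret sharing over a prime field.
# Assumptions: P is large enough for the secret; a real deployment
# would use a cryptographic RNG and chunk files to fit the field.
import random

P = 2**61 - 1  # a Mersenne prime, fits secrets up to ~60 bits

def split(secret, m, n, rng=random):
    """Split `secret` into n shares; any m of them reconstruct it."""
    # Random polynomial of degree m-1 with constant term = secret.
    coeffs = [secret] + [rng.randrange(P) for _ in range(m - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):      # Horner evaluation mod P
            y = (y * x + c) % P
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

Any m shares recover the secret; fewer leave it fully undetermined, which
is exactly the property the proposal leans on.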
>
> Now, how to build this into freenet, hopefully without mangling the
> current setup much?  Here's what I thought:
>
> In general, your node goes through its life normally, caching stuff
> up to its limit and holding whole (encrypted) files in its store.  But
> when it needs to free up space, instead of deleting old files it could
> move them into "diminished storage."  What that means is that it would
> do a secret-sharing split on the file (as described below) and retain
> *one* share of it.  (For the sake of argument, say it does a split
> that requires 10 pieces to reconstruct and saves one, thus freeing up
> 90% of the space.  It doesn't have to compute all the other shares to
> do this, by the way, and there are potentially a LOT of other shares.)
> It stores this fragment specially, so it knows the hash of the file it
> originally came from.  Then, if a subsequent request comes in for that
> file, it can say "Well, I don't have the file, so I'll pass the
> request on in case someone else has the whole thing.  But I'm also
> sending back this piece of it; if other nodes along the chain also
> discarded it, they might still have pieces, and you can reconstruct
> it."  (I don't know the freenet protocol very well, so I don't know
> how radically it would have to change to allow this sort of
> thing--sending back a response while the request keeps going.)
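The request flow described above can be sketched roughly as follows.
`NodeStore`, the share accumulator, and the m=10 threshold are all
hypothetical names and values for illustration, not actual Freenet
protocol or API:

```python
# Hypothetical sketch of "diminished storage" lookup: a node returns
# the whole file if it has it, otherwise contributes its single share
# and lets the request keep travelling until m shares are gathered.
from dataclasses import dataclass, field

@dataclass
class NodeStore:
    full: dict = field(default_factory=dict)    # key -> whole file
    shares: dict = field(default_factory=dict)  # key -> our one share

    def handle_request(self, key, collected_shares, m=10):
        """Return (data, keep_forwarding).  Shares accumulate along
        the request chain; with m of them the file can be rebuilt."""
        if key in self.full:
            return self.full[key], False        # whole file: stop here
        if key in self.shares:
            collected_shares.append(self.shares[key])
        if len(collected_shares) >= m:
            return reconstruct_from(collected_shares), False
        return None, True                       # keep passing upstream

def reconstruct_from(shares):
    # Placeholder: combine m secret-sharing shares (e.g. by Lagrange
    # interpolation, chunk by chunk) back into the original file.
    return b"<reconstructed>"
```

The interesting protocol change is visible in the return value: a node
can attach data to a request *and* keep it forwarding, which plain
request/response routing doesn't allow.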
>
> The split is done so that other nodes will split it the *same* way
> but keep different shares.  That way, if the file winds up in
> diminished storage in a lot of places, those shares can be used to
> reconstruct it.  How do we do this?  Well, there are different ways of
> doing secret sharing.  In one of them, you use random numbers to make
> up hyperplanes that intersect at the point in hyperspace that
> represents your secret; you need enough planes to determine a point.
> You can feed a hash of the file (probably a different hash than the
> CHK; you can get one by using a different hashing algorithm, or by
> concatenating some known string to the end of the file before hashing)
> into a pseudo-random number generator to get those coefficients, and
> then pick a random plane.  Someone else splitting the same file will
> get the same coefficients but pick a different plane.  Other methods
> also work (use interpolating polynomials and pick a random point as
> your share, etc.).  There are details to take care of (splitting the
> file into chunks to be shared and then stringing the shares together,
> etc.), but I think it could work in theory.
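The deterministic-split idea can be illustrated with polynomial
(Shamir-style) shares rather than hyperplanes: seed a PRNG with a
secondary hash of the data so every node derives the same polynomial,
then let each node keep its share at a privately chosen point. The
SHA-256 choice and the "split-seed" tag are illustrative assumptions,
and a real scheme would need a cryptographic PRNG and proper chunking:

```python
# Sketch: nodes derive IDENTICAL polynomial coefficients from a
# secondary hash of the data, but each evaluates at its own random x,
# so independently-made shares still combine to reconstruct the file.
import hashlib
import random

P = 2**61 - 1  # toy prime field

def shared_coefficients(data, m):
    """Coefficients c1..c_{m-1}, identical on every node."""
    # Secondary hash: append a known tag so it differs from the CHK.
    seed = hashlib.sha256(data + b"split-seed").digest()
    rng = random.Random(seed)
    return [rng.randrange(P) for _ in range(m - 1)]

def my_share(data_chunk, m):
    """Evaluate the common polynomial at a node-chosen random x."""
    secret = int.from_bytes(data_chunk, "big") % P
    coeffs = [secret] + shared_coefficients(data_chunk, m)
    x = random.randrange(1, P)     # each node picks its own point
    y = 0
    for c in reversed(coeffs):     # Horner evaluation mod P
        y = (y * x + c) % P
    return (x, y)
```

Because the coefficients are a pure function of the data, no
coordination between nodes is needed; collisions on the random x are
the only way two nodes end up holding redundant shares.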
>
> This might help alleviate some of the concerns that spam-crawling
> could eat freenet as requests trash files on their way down the chain.
>
> Obviously, there'll have to be some heuristics for deciding when to move
> files into diminished storage and when to delete them or delete shares,
> etc.  We don't want all the files in freenet to wind up being
> secret-shared, or it will take forever to retrieve them!
>
> I'm not on this list (so this post might not make it through), but
> I'll try to watch the archives for responses.  And/or respond directly
> to me.
>
> Sorry if I'm talking out of my nether regions; this idea just struck
> me and I had to tell someone about it.  What do you think?
>
> ~mark
>
> _______________________________________________
> Devl mailing list
> Devl@freenetproject.org
> http://lists.freenetproject.org/mailman/listinfo/devl

