Not a bad idea.  I'm not sure the mechanism you propose for announcing
that diminished shares exist is necessarily a good one, but the idea of
degrading large files this way rather than deleting them outright isn't
bad.  It may also be less effective once we have file splitting.

But assuming we did something like this, it might be a better idea to have
the nodes report which shares of a file they hold when a request fails.  As
the RequestFailed propagates backwards, each node reports the shares it
has.  Each node receiving such a list adds its own shares to it
(eliminating duplicates).  Two possibilities then occur (a sketch of the
bookkeeping follows the list):

1) The user can get this list and attempt to request each individual share
as its own key (enough shares to reconstruct the file, that is).

2) When any node on the chain sees enough shares to reconstruct, it
automatically spawns the requests for the remaining shares itself and is
responsible for reconstructing the file before passing the whole document
back down the chain.  This has the advantage of reintroducing the whole
file into the network, rather than having files make a one-way progression
from whole to split to dead.
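
A rough sketch of what 2) could look like, in Python purely for
illustration (the message shape and the fetch_share()/reconstruct() hooks
are all my invention, not anything in the current protocol):

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class ShareRef:
        file_hash: bytes   # hash of the whole original file
        share_index: int   # which share of the common split the node holds

    @dataclass
    class RequestFailed:
        file_hash: bytes
        known_shares: set = field(default_factory=set)

    M = 10  # shares needed to reconstruct; matches the 10-piece example below

    def on_failure(msg, local_shares, fetch_share, reconstruct):
        # Each node on the return path adds its own shares; set semantics
        # take care of eliminating duplicates.
        for ref in local_shares:
            if ref.file_hash == msg.file_hash:
                msg.known_shares.add(ref)
        # Possibility 2: once enough distinct shares are known, this node
        # fetches them, rebuilds the file, and passes the whole document
        # back down the chain, reintroducing it into the network.
        if len(msg.known_shares) >= M:
            picked = sorted(msg.known_shares, key=lambda r: r.share_index)[:M]
            return reconstruct([fetch_share(r) for r in picked])
        return None  # not enough shares yet; keep propagating the failure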

        Scott


> OK, this is my first post to freenet-devl, and possibly my last, but it's
> an idea that just occurred to me and might be worth considering.
> 
> Um, OK, first motivation, then.  The problem I'm looking at is the fact
> that freenet's redundancy eats space.  To store 1TB of stuff on freenet, at
> say 6x redundancy (i.e. each file is cached at an average of 6 nodes--a
> fairly low level), you'll need 6TB of space.  That gets somewhat ridiculous
> as freenet grows.  I was trying to come up with a way to do freenet-ish
> stuff with secret sharing, like the Intermemory project uses
> (intermemory.org), and I think I have an idea that may be worth
> considering.
> 
> First, a word on secret sharing.  In case you've never heard of it, the
> trick is to take a file of size N and break it up into n pieces of size
> N/m such that any m pieces can reconstruct the file but m-1 pieces give
> you no information about it.  n > m, and n can be much bigger if so
> desired.  (Strictly speaking, a scheme where m-1 pieces reveal *nothing*
> forces each piece to be as large as the file itself; pieces of size N/m
> call for information dispersal or a ramp scheme, which leaks some
> information to sets of fewer than m pieces.)  Look up how it can be done;
> trust me, it's possible.
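> 
> A toy illustration in Python (it shares one small integer via Shamir's
> polynomial scheme; a real version would work chunk by chunk):
> 
>     import random
> 
>     P = 2**61 - 1  # a prime; all arithmetic happens in this field
> 
>     def split(secret, m, n):
>         # Random degree-(m-1) polynomial with the secret as constant term.
>         coeffs = [secret] + [random.randrange(P) for _ in range(m - 1)]
>         poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
>         return [(x, poly(x)) for x in range(1, n + 1)]
> 
>     def reconstruct(shares):
>         # Lagrange interpolation at x = 0 recovers the constant term.
>         secret = 0
>         for xi, yi in shares:
>             num = den = 1
>             for xj, _ in shares:
>                 if xj != xi:
>                     num = num * -xj % P
>                     den = den * (xi - xj) % P
>             secret = (secret + yi * num * pow(den, -1, P)) % P
>         return secret
> 
>     shares = split(123456789, m=3, n=7)
>     assert reconstruct(shares[:3]) == 123456789  # any 3 of the 7 will do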
> 
> Now, how to build this into freenet, hopefully without mangling the current
> setup much?  Here's what I thought:
> 
> In general, your node goes through its life normally, caching stuff up to
> its limit and holding whole (encrypted) files in its store.  But when it
> needs to free up space, instead of deleting the old files, it could move
> them instead into "diminished storage."  What that means is that it would
> do a secret-sharing split on the file (as described below), and retain
> *one* share of it (for the sake of argument, it does a split that requires
> 10 pieces to reconstruct and saves one, thus freeing up 90% of the space.
> It doesn't have to compute all the other shares to do this, by the way, and
> there are potentially a LOT of other shares).  It stores this fragment
> specially, so it knows the hash of the file it originally came from.  Then,
> if a subsequent request comes in for the file, it can say "Well, I don't
> have that file, so I'll pass the request on, in case someone else has the
> whole thing.  But I'm also sending back this piece of it; if other nodes
> along this chain also discarded it, they might still have pieces and you
> can reconstruct it."  (I don't know the protocol of freenet very well, I
> don't know how radically it would have to be changed to allow this sort of
> thing, sending back a response BUT the request keeps going.) 
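> 
> In rough Python, the node-side behavior might look like this (the store
> API is invented for illustration; split_file() is the common
> deterministic split described next):
> 
>     import random
> 
>     def diminish(store, file_hash, data, split_file):
>         # On eviction, keep a single share instead of deleting, indexed
>         # by the hash of the whole file.  (A real scheme could generate
>         # just one share without computing all the others.)
>         share = random.choice(split_file(data))
>         store.delete(file_hash)
>         store.put_diminished(file_hash, share)  # ~1/10th of the space
> 
>     def handle_request(store, file_hash, send_back, forward):
>         if store.has(file_hash):
>             send_back(store.get(file_hash))  # we still have the whole file
>             return
>         if store.has_diminished(file_hash):
>             send_back(store.get_diminished(file_hash))  # "here's my piece..."
>         forward(file_hash)  # "...but pass the request on anyway"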
> 
> The split is done so that other nodes will split it the *same* way, but
> keep different shares.  That way, if it gets into diminished storage in a
> lot of places, they can be used to reconstruct it.  How do we do this?
> Well, there are different ways of doing secret sharing.  In one of them,
> you use random numbers to make up hyperplanes that intersect at the point
> in hyperspace that represents your secret.  You need enough planes to
> determine a point.  You can feed a hash of the file (probably a different
> hash than the CHK; you can get that by using a different hashing algorithm
> or concatenating some known string to the end of the file before hashing)
> into a pseudo-random number generator to get such coefficients, and then
> pick a random plane.  Someone else splitting will get the same coefficients
> but pick a different plane.  Other methods also work (use interpolating
> polynomials and pick a random point as your share, etc).  There are details
> to take care of (splitting the file into chunks to be shared and then
> stringing the shares together, etc), but I think it could work in theory.
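> 
> Concretely (again just a sketch, with arbitrary hash and field choices),
> the polynomial variant could derive its coefficients like so:
> 
>     import hashlib, random
> 
>     P = 2**61 - 1  # prime field; chunks must fit below it (7 bytes here)
> 
>     def common_coeffs(chunk, whole_file, m):
>         # Seed the PRNG with a hash of the file -- a *different* hash
>         # than the CHK, obtained here by appending a known string -- so
>         # every node derives the same polynomial for the same file.
>         rng = random.Random(hashlib.sha1(whole_file + b"diminish").digest())
>         secret = int.from_bytes(chunk, "big")   # one chunk of the file
>         return [secret] + [rng.randrange(P) for _ in range(m - 1)]
> 
>     def pick_own_share(coeffs):
>         # Each node picks a *random* evaluation point, so different nodes
>         # hold different (but mutually compatible) shares.
>         x = random.randrange(1, P)
>         y = sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
>         return (x, y)
> 
> (Once the coefficients are derivable from a public hash, the
> m-1-pieces-reveal-nothing property is of course gone; for this use all
> we need is reconstruction, so that seems acceptable.)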
> 
> This might help alleviate some of the concerns that spam-crawling could eat
> freenet as requests trash files on their way down the chain.
> 
> Obviously, there'll have to be some heuristics for deciding when to move
> files into diminished storage and when to delete them or delete shares,
> etc.  We don't want all the files in freenet to wind up being
> secret-shared, or it will take forever to retrieve them!
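> 
> One strawman policy, just to make it concrete: diminish on first
> eviction, but cap how much of the store may sit in diminished form
> (diminish() as in the sketch above):
> 
>     def on_eviction(store, entry, split_file, cap=0.25):
>         # If fragments already fill a quarter of the store, delete
>         # outright; otherwise keep one share of the evicted file.
>         if store.diminished_bytes() > cap * store.capacity():
>             store.delete(entry.key)
>         else:
>             diminish(store, entry.key, entry.data, split_file)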
> 
> I'm not on this list (so this post might not make it through), but I'll try
> to watch the archives for responses.  And/or respond directly to me.
> 
> Sorry if I'm talking out of my nether regions; this idea just struck me and
> I had to tell someone about it.  What do you think?
> 
> ~mark
> 

-- 
loggers melt cry riot