On Sun, 3 May 2009 01:46:37 -0400, Juiceman wrote:
> On Sat, Apr 25, 2009 at 2:23 PM, Matthew Toseland
> <[email protected]> wrote:
> > On Friday 24 April 2009 17:46:09 [email protected] wrote:
> >> 1) CHK-keys are already long enough
> >
> > Long enough to be a PITA if they are longer? Or long enough to be
> > functional? I dispute the latter.
> >
> >> 2) why add something that tries to fix something broken (routing?)
> >> or contradicts the concept (caching of keys around the key
> >> location; unused content gets dropped)
> >
> > Routing is not broken. Data persistence is broken. The feedback I
> > have had is that frequently the problem with fetching a file is the
> > top block, which currently is not redundant. For example it might
> > take 3 weeks at 0% to find the top block and then make relatively
> > rapid progress after that. This is not your experience?
> >>
> >> if a) unwanted content is supposed to be dropped from the network
> >> to make space for fresh stuff and b) the top key is *needed* for
> >> *every* request of a ((larger) split-) file, how can the top key
> >> possibly fall off the network?
> >
> > If the key is not requested, it could well fall off the network,
> > *even though the rest of the file is still retrievable*. Because
> > the rest of the file is redundant and the top block is not. We have
> > very large datastores and generally very low input/output
> > bandwidth, so I would expect data to persist for a long time in the
> > current Freenet. Data that has not been recently requested will
> > only be in the stores, and not in the caches. But since routing
> > appears to work (and there is every theoretical reason to think it
> > does), there is a good chance of finding the data - IF the 3 nodes
> > which stored it to their datastores are online when you fetch and
> > there aren't any problems contacting them (e.g. on darknet they
> > might have swapped).
> >>
> >> IMHO I think this is making extra effort and adding
> >> YetJustAnotherKeyType for CreateAWorkaroundForSomethingDifferent
> >> for something that needs to be addressed elsewhere
> >
> > I don't see much evidence there is a fundamental problem with
> > routing in 0.7 on opennet. Do you have any evidence for this? I
> > would be very interested in any evidence that this is a routing
> > rather than a redundancy problem.
> >
> > Nodes' graphs usually show fairly strong request specialisation.
> > The main symptom is that 2-3 weeks after inserting data, it is not
> > retrievable if nobody has fetched it. And a lot of the time, that
> > specifically relates to the top block. This proposal would solve
> > the problem.
> >
> 
> If we have, say, 3 top CHKs or RHKs (Redundant Hash Key is my vote)
> and we are able to find one of them, we should be able to recreate and
> reinsert the other two, right?

Yup. I'm pretty sure the redundant CHKs would be simple modifications to
the hashing math, probably with specified constants mixed into it.
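To make the "constants mixed into the hash" idea concrete, here is a minimal sketch in Python. The derivation function, constant prefix, and use of SHA-256 are all hypothetical illustrations, not Freenet's actual CHK math; the point is only that all the redundant routing keys are recomputable from the content alone, so finding any one copy lets you recreate and reinsert the others.

```python
import hashlib


def redundant_routing_keys(content: bytes, copies: int = 3) -> list[bytes]:
    """Derive `copies` distinct routing keys for the same content by
    mixing a per-copy constant into the hash input. Because the
    constants are fixed, anyone holding the content can recompute
    every key and reinsert the missing copies.
    (Hypothetical sketch -- not the real Freenet key derivation.)"""
    return [
        hashlib.sha256(content + b"top-block-copy-" + bytes([i])).digest()
        for i in range(copies)
    ]
```

Since the derivation is deterministic, two nodes computing keys for the same content always agree, which is what makes the recreate-and-reinsert step possible.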

I'm still not crazy about introducing a new key though. Would it really
be better than simply sending more copies of the top block during the
initial insert (6 or 9 instead of the original 3?) and reinserting it a
few times when another user downloads it?

> To prevent these reinserts giving away that we retrieved the file, we
> just need a queue of keys to be inserted (I assume we have this for
> FEC healing?).  Grab a random key from the queue each time we have a
> slot to insert.

Well, if anyone had that kind of access to your node (to the internal
queues, or even the datastore to a lesser extent), it would already be
too late, no?
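The queue idea above can be sketched briefly. The class and method names here are hypothetical (the source only says "a queue of keys to be inserted" with random draws); the point is that popping a uniformly random entry per free slot decouples insert order from fetch order:

```python
import random


class HealingQueue:
    """Sketch of a healing/reinsert queue: keys are added as downloads
    complete, and each free insert slot draws a random entry, so the
    order of inserts reveals nothing about the order of fetches.
    (Hypothetical illustration, not Freenet's actual healing code.)"""

    def __init__(self, rng=None):
        self._keys = []
        self._rng = rng or random.Random()

    def add(self, key: bytes) -> None:
        self._keys.append(key)

    def next_insert(self):
        """Pop a uniformly random queued key, or None if the queue is empty."""
        if not self._keys:
            return None
        i = self._rng.randrange(len(self._keys))
        # Swap the chosen entry to the end so removal is O(1).
        self._keys[i], self._keys[-1] = self._keys[-1], self._keys[i]
        return self._keys.pop()
```

Of course, as noted above, this only obscures timing from outside observers; it cannot protect against an attacker who already has access to the node's internal queues or datastore.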

> I would suggest implementing this before going after the shared Bloom
> filters idea.  From a usability standpoint this would solve some of
> the glaring problems like unfetchable pages, pictures and files.

Agreed.
_______________________________________________
Support mailing list
[email protected]
http://news.gmane.org/gmane.network.freenet.support
Unsubscribe at http://emu.freenetproject.org/cgi-bin/mailman/listinfo/support
Or mailto:[email protected]?subject=unsubscribe
