> I've covered this a few times. All we need is to add filtering to
> Request. Add a field to Request that contains a list of CHKs. The node
> will ignore any key matches if the matched file has the same CHK as one
> in the list. That way it will be _exactly_ like sending a normal
> request, smart routing and all, but it simply won't see the files that
> it isn't supposed to see.
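To make the quoted proposal concrete, here is a minimal sketch of a request carrying a CHK exclusion list. The names (`Request`, `excludedChks`, `matches`) are purely illustrative, not the actual Freenet classes; the point is only that matching stays key-based, so routing is unchanged, and the exclusion list merely hides files whose content hash the requester has already seen.

```java
import java.util.Set;

// Hypothetical sketch of the proposed filtered request; these names are
// illustrative and do not correspond to the real Freenet API.
class Request {
    final String key;               // the key being requested
    final Set<String> excludedChks; // CHKs of data the requester has already seen

    Request(String key, Set<String> excludedChks) {
        this.key = key;
        this.excludedChks = excludedChks;
    }

    // A node treats this exactly like a normal key match, except that a
    // stored file whose CHK is in the exclusion list is ignored, so the
    // request is routed onward as if no match existed.
    boolean matches(String storedKey, String storedChk) {
        return key.equals(storedKey) && !excludedChks.contains(storedChk);
    }
}
```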
Yes, this would be a way to address multiple documents, but it would be a
very inefficient way to access versions of documents. My gut feeling
though is that if people can't rely on a one-to-one mapping between keys
and data, then Freenet will become rather annoying to use (I can just
imagine having downloaded my fifth MP3 and still not having found the one
I am looking for - namely "mp3/u2/achtung baby/I Still Haven't Found What
I'm Looking For" ;). I don't see the problem with what I proposed -

> > The way I would approach this gives the same functionality, but
> > doesn't require such an upheaval in the way things operate. Each new
> > version of a document would be stored under a key like
> > "documentname/0.2", and then the key "documentname" would redirect to
> > the latest version (this obviously requires that we have implemented
> > data modification and redirection - both of which are on the radar).
>
> This is a pretty nice way of doing things, assuming we get working
> updatable data.
>
> My solution isn't just for newer versions, but for combating spam,
> too. And it keeps the exact same routing as current requests.

But is spam really being combated effectively if it causes you to need to
keep downloading stuff until you get through the spam?

Ian.

_______________________________________________
Freenet-dev mailing list
Freenet-dev at lists.sourceforge.net
http://lists.sourceforge.net/mailman/listinfo/freenet-dev
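For reference, the versioned-key scheme discussed in the message above could be sketched roughly as follows. This assumes working data modification and redirection, which the message notes are not yet implemented; `VersionedStore`, `insertVersion`, and `fetch` are invented names for illustration only.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch (not Freenet code) of the versioned-key idea:
// each version lives at "documentname/<version>", and the bare
// "documentname" key redirects to the latest version, preserving a
// one-to-one mapping between each key and its data.
class VersionedStore {
    private final Map<String, String> store = new HashMap<>();    // key -> data
    private final Map<String, String> redirect = new HashMap<>(); // key -> target key

    void insertVersion(String name, String version, String data) {
        String versionedKey = name + "/" + version;
        store.put(versionedKey, data);
        redirect.put(name, versionedKey); // bare key now points at the latest version
    }

    String fetch(String key) {
        // Follow at most one level of redirection, then look up the data.
        String target = redirect.getOrDefault(key, key);
        return store.get(target);
    }
}
```

A fetch of the bare name always yields the newest version, while older versions remain retrievable under their explicit version keys.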
