On Tue, 06 Dec 2005 16:54:13 +0000, Matthew Toseland wrote:

> There is no real reason why we can't allow the client access to the block
> URIs, if he sets a sufficiently high feedback reporting level.

The more you expose the node internals, the more the tools are going to
rely on them staying the same, and the more likely they are to break when
those internals change. The more often everything breaks, the more likely
people are to abandon Freenet. It's bad practice to let tools know things
like block size. You just know that someone is going to write a tool that
handles individual blocks and allocates memory for them without checking
their size, and then starts crashing when the block size changes. And you
just know that that particular badly coded tool will be the most popular
one at the time of any such change, and consequently cause a huge
backlash...

> Hence, when you do an insert, it will tell you when it starts inserting
> each block, what that block's key is. Likewise on a request, if it fails,
> it could tell you which segment failed, and which keys from that segment
> were successfully fetched (the rest failed). Because of the way FEC works,
> it will then be sufficient to reinsert any of the remaining keys in that
> segment; in practice we would reinsert all of the remaining keys in the
> segment for reliability reasons.
>
> So:
> Fetch knoppix_4.0.1.dvd.iso
> Segment size = 128 data blocks, 64 check blocks. Segments 1-27,29-451
> succeeded.
> Segment 28 failed:
> 192 blocks in segment.
> 128 data blocks in segment; need 128 blocks. Fetched 117 blocks. <list of
> 117 fetched blocks> Need 11 of the remainder: <list of 75 failed blocks>
>
> The easiest way to implement the insert end would be simply for the client
> to reinsert the entire segment. This would require no node support apart
> from the above information; the client could simply chop that part of the
> data, and insert it as a splitfile. A more complex option would be to
> reinsert specified blocks only from a splitfile. I do not regard that as a
> priority. But providing enough information to do the above is easy enough.

How large are the segments going to be? Hmm... with 32 kB blocks and 192
blocks per segment, that comes to 6 megabytes. Not optimal, but acceptable.
I still think that it should just say "failed from byte (start of failed
segment) to (end of failed segment)", to hide the details of segmentation
and blocks.
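To make the arithmetic concrete, here is roughly what that failure report
boils down to, plus the byte-range translation I would prefer the node to
do itself. This is a rough, untested sketch with made-up names, not a real
node API; the block and segment sizes are the ones quoted above:

    // Hypothetical failure report for one segment, using the numbers
    // quoted above. Only the arithmetic here is meaningful.
    class SegmentFailure {
        static final int BLOCK_SIZE = 32 * 1024;        // 32 kB blocks
        static final int DATA_BLOCKS_PER_SEGMENT = 128; // plus 64 check blocks

        final int segmentIndex;  // 0-based; "segment 28" above is index 27
        final int blocksFetched; // e.g. 117

        SegmentFailure(int segmentIndex, int blocksFetched) {
            this.segmentIndex = segmentIndex;
            this.blocksFetched = blocksFetched;
        }

        // How many more blocks FEC needs before the segment decodes:
        // 128 - 117 = 11, matching "Need 11 of the remainder" above.
        int blocksStillNeeded() {
            return DATA_BLOCKS_PER_SEGMENT - blocksFetched;
        }

        // What I would rather the node report: the failed byte range of
        // the original file, hiding segmentation and blocks entirely.
        long failedFromByte() {
            return (long) segmentIndex * DATA_BLOCKS_PER_SEGMENT * BLOCK_SIZE;
        }

        long failedToByte(long fileLength) {
            long end = failedFromByte()
                    + (long) DATA_BLOCKS_PER_SEGMENT * BLOCK_SIZE;
            return Math.min(end, fileLength); // the last segment is usually short
        }
    }

Note that only the 128 data blocks (4 MB) map to bytes of the original
file; the 6 MB figure includes the 64 check blocks, which get regenerated
anyway when the client reinserts that range as a splitfile.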
> I still don't agree with you on the need for insert on demand btw; a
> working freenet would have massive resources, and work far better for
> less-popular files than most P2Ps, due to having more places to download
> them from. The problem is that all production freenets so far have had
> serious load-versus-routing problems. Hopefully 0.7's algorithms will do
> better, since they a) Are based on tried and tested solutions, and b)
> Propagate overload back to the source (the client node).

Even if Freenet had limitless resources at its disposal, I, a single node
operator, do not. Suppose I want to make every public domain movie
available on Freenet in case it gets censored (Night of the Living Dead,
for example, easily could be). Given that a good-quality movie file is at
least 700 MB, that there would be hundreds of such files, and that most of
them would be accessed very rarely, does it make more sense to upload them
all, or simply to upload a list of the files and insert each file only
when someone actually wants it?
Also, please note that when Freenet grows and gains more resources, it
also gets more requests, which means that the most popular content (porn,
most likely) will get copied to more nodes to cope with the increased
load, and consequently the least popular content will still be pushed out.
Freenet is an anonymizing cache, not a permanent storage system; it is
therefore important to make it as easy as possible to implement a
permanent storage system (FTP over Freenet, in practice) on top of it.
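For what it's worth, the insert-on-demand scheme I have in mind is no more
than this. Every name below is invented -- no such client API exists in
the node today -- it just shows the shape of the thing: publish a cheap
index once, and pay the 700 MB upload cost per file only when somebody
asks for it (over Frost, email, whatever):

    // Sketch of an insert-on-demand publisher. FreenetClient and both of
    // its methods are hypothetical, not part of any real node interface.
    import java.io.File;
    import java.util.Map;

    interface FreenetClient {
        String insertFile(File f);                          // returns the file's key
        String insertIndex(Map<String, String> titleToKey); // publishes the cheap list
    }

    class OnDemandPublisher {
        private final FreenetClient client;
        private final Map<String, File> archive; // title -> local file (the movies)

        OnDemandPublisher(FreenetClient client, Map<String, File> archive) {
            this.client = client;
            this.archive = archive;
        }

        // Someone saw the published list and asked for a title: only now
        // do we pay the upload cost, and only for this one file.
        String handleRequest(String title) {
            File f = archive.get(title);
            if (f == null)
                throw new IllegalArgumentException("not archived: " + title);
            return client.insertFile(f);
        }
    }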
