On Thursday 24 July 2003 12:36, Michael Schierl wrote:
> Toad wrote:
> > Changes (a ton, mostly not mine):
> > * Implemented support for ZIP containers (fish's work, slightly
> >   tweaked by me). Supported by the client level code, with a flag to
> >   disable them, so fproxy, client.cli.Main and everything else can use
> >   them. client.cli.Main has a command line option to disable support.
> >   Includes support for metadata in the zip. Effect of this is that a
> >   site of up to 1MB (after compression) can be inserted as one file, and
> >   it will be automatically extracted - get one file, get them all.

Call me skeptical, but I think this is an amazingly bad idea. It removes any 
concept of having redundant data de-duplicated automatically. Also, 
downloading a 1 MB file will potentially take quite a while, whereas smaller 
files can be downloaded with a greater degree of parallelism. I am simply not 
convinced that partial availability is a problem on a properly routed node, 
and that is all this will achieve. If anything, I think this will make the 
problem worse: if the entire container cannot be retrieved and reassembled, 
the whole site is unavailable, rather than perhaps a few small parts of it.

Additionally, it means that even if you only want to look at one or two pages 
of a 100-page site, you still have to download the entire site. This is 
clearly neither sensible nor sustainable in the long run, as more and bigger 
sites come to exist.

I don't think progress in this direction should be encouraged by putting 
hooks for it in the node code. It just seems to me like a terrible idea.

One aspect of this that I DO support is the compression. I think all files 
inserted into the network should be compressed using something nice and 
effective like BZIP. This could be limited to exclude certain pre-compressed 
MIME types, e.g. zip, jpg, gif, jar, etc. I think it would deliver a fairly 
noticeable improvement in bandwidth usage without putting more load on the 
network. The files would be exchanged in the same way as ever, but they would 
now be smaller. The cost falls only on the node inserting the files at 
insertion time (compress each file before inserting) and the node retrieving 
them (decompress before passing to fproxy). This would be particularly 
effective for HTML and text documents. It would help with both the amount of 
storage space the network provides and the bandwidth required to transmit 
the files.
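
To illustrate, here is a rough sketch of the insert-side logic I have in 
mind. The class and method names are made up rather than existing Freenet 
code, and java.util.zip's GZIP stands in for BZIP, which would need an 
external library:

import java.io.*;
import java.util.Set;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class InsertCompressor {

    // MIME types assumed to be pre-compressed; recompressing these
    // wastes CPU for little or no gain.
    private static final Set<String> PRECOMPRESSED = Set.of(
            "application/zip", "image/jpeg", "image/gif",
            "application/java-archive");

    // Compress at insertion time, unless the MIME type marks the data
    // as already compressed or compression does not actually shrink it.
    public static byte[] maybeCompress(byte[] data, String mimeType)
            throws IOException {
        if (PRECOMPRESSED.contains(mimeType)) {
            return data; // insert unchanged
        }
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write(data);
        }
        byte[] compressed = buf.toByteArray();
        return compressed.length < data.length ? compressed : data;
    }

    // The reverse step on the retrieving node, before the data is
    // passed to fproxy. A real node would have to record in the key's
    // metadata whether compression was applied.
    public static byte[] decompress(byte[] compressed) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPInputStream gz =
                new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            gz.transferTo(out);
        }
        return out.toByteArray();
    }
}

The only protocol change this needs is a flag in the metadata saying whether 
a given file was compressed; everything else stays at the endpoints.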

Regards,

Gordan