Re: [freenet-support] ZIPs :p
Tld <[EMAIL PROTECTED]> writes:

> [EMAIL PROTECTED] wrote:
> > RAR
> > - no standard (sigh), no libraries, maybe no documentation or sources to be able to reimplement it?
> > + 100 files to one
> > + best compression
> >
> > so my vote goes to ZIP, which is nearly the same as RAR, is cleaner than JAR and combines the best of TAR and GZ without any drawbacks; it comes right with Java and is widely spread.
>
> Well, I want to say something on that.
>
> ZIP does NOT have the best of TAR and GZ. .tar.gz will almost always be better than .zip because .zip does not treat the files as a unit: the archive is conceptually the union of small one-file compressed archives.

Wow, I sure was unintelligible.

Actually, this is a good thing for Freenet, because almost always we are extracting a single file, or a couple of files, out of the archive. With .tar.gz, fproxy would have to decompress the entire archive to get at a single file, whereas ZIP archives make it much easier to pull a single file out.

> ZIP files' central directory resides at the end of the file (as you can see when you have a multi-part zip file), which means that if the end is lost you may have a hard time rescuing the contents. As for RAR files (from a quick look at the format), the directory is interleaved with the files, which means that you can start reading files as the archive comes through.

I agree that having the directory at the end of the file is annoying, because you have to wait for the complete download before getting at a file near the beginning.

> Also, RAR files can carry redundancy (in case some data is lost) and offer the solid archiving which ZIP files miss so much.

I don't think we're going to go with RAR's redundancy, which is made for error correction and is much less efficient than erasure codes when you can easily verify the integrity of each piece. As for solid archiving, I've already presented my case against it.
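The single-file point above is easy to see with the JDK's own `java.util.zip.ZipFile`. A minimal sketch (the entry names are invented for the demo; Java 9+ is assumed for `readAllBytes`):

```java
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.util.zip.*;

public class SingleEntryDemo {
    public static void main(String[] args) throws IOException {
        // Build a small two-entry archive to demonstrate with.
        File archive = File.createTempFile("demo", ".zip");
        try (ZipOutputStream out = new ZipOutputStream(new FileOutputStream(archive))) {
            out.putNextEntry(new ZipEntry("index.html"));
            out.write("<html>hello</html>".getBytes(StandardCharsets.UTF_8));
            out.closeEntry();
            out.putNextEntry(new ZipEntry("next.gif"));
            out.write(new byte[] {1, 2, 3});
            out.closeEntry();
        }

        // Because each ZIP member is compressed independently, ZipFile can
        // seek straight to one member and inflate only that member.
        try (ZipFile zip = new ZipFile(archive)) {
            ZipEntry entry = zip.getEntry("index.html");
            try (InputStream in = zip.getInputStream(entry)) {
                System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
            }
        }
        archive.delete();
    }
}
```

`ZipFile` uses the central directory to locate the member, so only that member is inflated; a `.tar.gz` reader would have to inflate everything stored before it.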
> As for the standard/implementation/libraries, the decompressor can be downloaded from http://www.rarlab.com/rar/unrarsrc-3.1.0.tar.gz, C++ source under a freeware licence. This means you can decompress natively on almost any platform, but for compressing you still have to resort to the official RAR program (not everyone might be willing to use some IA32+MS-DOS emulator to build archives), which still leaves room for "volunteers" to grab your files off of FN, pack them, re-upload them (maybe a bot?); then you check the archives' content and send them under SSK. Loads of work, nonetheless :P
>
> Also, I'm not sure a Java version can be built from the C++ version, perhaps because of license issues, but IANAL and really don't know if it would be possible (maybe a mail to the author of the software might help with that).

Re-implementing the RAR decompressor in Java isn't something I'd like to do, and without that there's no way for RAR to be worked into fred.

> In the end, I say that in Freenet the speed of the decompression should not matter; the size of the archive should. The normal throughput of the 'net will most probably be about 4 (or more!) orders of magnitude below the speed of the decompression algo. Also, smaller files mean more space available for storage, which in turn raises the chance of a key being held.

There's not a big difference in file sizes between solid archiving and loose archiving. The amount of time spent decompressing is irrelevant if 1) it can be done incrementally, as data comes in, and 2) it's faster than the rate the data is coming in. If we're arguing for better compressors, I wonder why .bz2 hasn't come up yet.

> Hence, I'd say go for the best compressor available out there (in my experience it's RAR, but the issues might be just too many to allow its use) and implement that.
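On point 1), the stock `java.util.zip.ZipInputStream` already decompresses incrementally. A small sketch, with an in-memory archive standing in for data arriving from the network (Java 9+ assumed for `readAllBytes`):

```java
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.util.zip.*;

public class StreamingDemo {
    public static void main(String[] args) throws IOException {
        // Build an archive in memory to stand in for bytes arriving off the net.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ZipOutputStream out = new ZipOutputStream(buf)) {
            out.putNextEntry(new ZipEntry("a.txt"));
            out.write("first".getBytes(StandardCharsets.UTF_8));
            out.closeEntry();
            out.putNextEntry(new ZipEntry("b.txt"));
            out.write("second".getBytes(StandardCharsets.UTF_8));
            out.closeEntry();
        }

        // ZipInputStream never seeks: it walks the local entry headers in
        // stream order, so each entry can be inflated as its bytes arrive,
        // without waiting for the central directory at the end of the file.
        try (ZipInputStream in = new ZipInputStream(new ByteArrayInputStream(buf.toByteArray()))) {
            ZipEntry entry;
            while ((entry = in.getNextEntry()) != null) {
                System.out.println(entry.getName() + ": "
                        + new String(in.readAllBytes(), StandardCharsets.UTF_8));
            }
        }
    }
}
```

The trade-off against the `ZipFile` approach is that a pure stream reader cannot skip ahead to one member; it can only process members in the order they arrive.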
> Or, build a layer for compressors and allow future extensions (like, start with .zip and eventually add another (de)compressor in another version), but that might lead to Freenet dialects, which is something to avoid, especially at first.

Of course this is all going to be pluggable. Have you been paying attention to the pattern in fred where *everything* is pluggable?

> 'nuff said. Back to my crypt.
>
> --
> --- TLD
> Thelema -- E-mail: [EMAIL PROTECTED]
> Raabu and Piisu
> GPG 1024D/36352AAB fpr: 756D F615 B4F3 BFFC 02C7 84B7 D8D7 6ECE 3635 2AAB

___
support mailing list
[EMAIL PROTECTED]
http://hawk.freenetproject.org/cgi-bin/mailman/listinfo/support
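As a rough illustration of the pluggable (de)compressor layer discussed above, keyed by MIME type as the thread suggests: the `Decompressor` and `DecompressorRegistry` names are invented here for the sketch and are not fred's actual API.

```java
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.util.*;
import java.util.zip.*;

// Hypothetical plug-in point: one implementation per archive/compression format.
interface Decompressor {
    String mimeType();                                   // e.g. "application/gzip"
    InputStream decompress(InputStream in) throws IOException;
}

class GzipDecompressor implements Decompressor {
    public String mimeType() { return "application/gzip"; }
    public InputStream decompress(InputStream in) throws IOException {
        return new GZIPInputStream(in);
    }
}

// Lookup table so new formats can be added in a later version without
// touching the callers.
class DecompressorRegistry {
    private final Map<String, Decompressor> byMime = new HashMap<>();
    void register(Decompressor d) { byMime.put(d.mimeType(), d); }
    Decompressor lookup(String mime) {
        Decompressor d = byMime.get(mime);
        if (d == null) throw new NoSuchElementException("no decompressor for " + mime);
        return d;
    }
}

public class PluggableDemo {
    public static void main(String[] args) throws IOException {
        DecompressorRegistry registry = new DecompressorRegistry();
        registry.register(new GzipDecompressor());

        // Round-trip some data through gzip to exercise the plug-in.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (OutputStream out = new GZIPOutputStream(buf)) {
            out.write("payload".getBytes(StandardCharsets.UTF_8));
        }
        try (InputStream in = registry.lookup("application/gzip")
                .decompress(new ByteArrayInputStream(buf.toByteArray()))) {
            System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
        }
    }
}
```

Keying on the declared MIME type rather than the file extension matches the argument made later in the thread that the extension is dispensable as long as the MIME type is present and correct.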
Re: [freenet-support] ZIPs :p
[EMAIL PROTECTED] wrote:

> RAR
> - no standard (sigh), no libraries, maybe no documentation or sources to be able to reimplement it?
> + 100 files to one
> + best compression
>
> so my vote goes to ZIP, which is nearly the same as RAR, is cleaner than JAR and combines the best of TAR and GZ without any drawbacks; it comes right with Java and is widely spread.

Well, I want to say something on that.

ZIP does NOT have the best of TAR and GZ. .tar.gz will almost always be better than .zip because .zip does not treat the files as a unit: the archive is conceptually the union of small one-file compressed archives.

ZIP files' central directory resides at the end of the file (as you can see when you have a multi-part zip file), which means that if the end is lost you may have a hard time rescuing the contents. As for RAR files (from a quick look at the format), the directory is interleaved with the files, which means that you can start reading files as the archive comes through.

Also, RAR files can carry redundancy (in case some data is lost) and offer the solid archiving which ZIP files miss so much.

As for the standard/implementation/libraries, the decompressor can be downloaded from http://www.rarlab.com/rar/unrarsrc-3.1.0.tar.gz, C++ source under a freeware licence. This means you can decompress natively on almost any platform, but for compressing you still have to resort to the official RAR program (not everyone might be willing to use some IA32+MS-DOS emulator to build archives), which still leaves room for "volunteers" to grab your files off of FN, pack them, re-upload them (maybe a bot?); then you check the archives' content and send them under SSK. Loads of work, nonetheless :P

Also, I'm not sure a Java version can be built from the C++ version, perhaps because of license issues, but IANAL and really don't know if it would be possible (maybe a mail to the author of the software might help with that).
In the end, I say that in Freenet the speed of the decompression should not matter; the size of the archive should. The normal throughput of the 'net will most probably be about 4 (or more!) orders of magnitude below the speed of the decompression algo. Also, smaller files mean more space available for storage, which in turn raises the chance of a key being held.

Hence, I'd say go for the best compressor available out there (in my experience it's RAR, but the issues might be just too many to allow its use) and implement that. Or, build a layer for compressors and allow future extensions (like, start with .zip and eventually add another (de)compressor in another version), but that might lead to Freenet dialects, which is something to avoid, especially at first.

'nuff said. Back to my crypt.

--
--- TLD
"There is no Good, one thorough, there is no Evil, there is only Flesh" [Pinhead]
[freenet-support] ZIPs :p
>Regarding zip files (you might want to avoid 'jar', which means Java
>archive)...

well, okay, we have some archive formats to choose from:

JAR
+ hmm.. no improvements over ZIP, I suppose
- same as ZIP, redundant
- has the directory META-INF in it, which some nerd may abuse to locate /some/mystuff.zip/META-INF/some
- Java only, but if you ignore the ".jar" it is simply a ZIP

ZIP
+ standard in any language on any platform
+ 100 files to one
+ compresses
+ comes with Java

TAR
- semi-standard (Linux only) but may be reimplemented
+ 100 files to one

TAR.GZ
- semi-standard, with a much larger amount to reimplement
+ 100 files to one
+ compresses

RAR
- no standard (sigh), no libraries, maybe no documentation or sources to be able to reimplement it?
+ 100 files to one
+ best compression

so my vote goes to ZIP, which is nearly the same as RAR, is cleaner than JAR and combines the best of TAR and GZ without any drawbacks; it comes right with Java and is widely spread.

>...
>> Redirect.Target=freenet:SSK@some/some//mycommon.zip/next.gif
>> Name=next.gif
>> ...
>
>Why not simplify this and avoid redirects altogether...
>
>Let's use the file extension .FproxyArchive
>
>Fproxy should...
>
>- look for *.FproxyArchive/*
>- Use freenet to retrieve *.FproxyArchive
>- Fproxy should cache it
>- It should serve to the client the requested file from within the
>archive

well, this is really a hack ;) combining MIME type and file extension would result in the approach I posted last week. Losing the MIME type would complicate determination of the archive's internal format (zip, some other plug-in encoder, format XYZ?); losing the file extension would not harm, as long as the MIME type is present and correct. Determining a file's content from its filename is bad IMHO, but MIME types may be tampered with, too, so I have no argument other than "a hack which makes us lose other compressions if we do not use another file extension" and "dirty hack, dirty hack, dirty hack".

>This may provide less flexibility, but it is easy to understand, results
>in fewer files

does it?!
>and would be easier for insertion wizards and so on.

no it does not. Inserting a line of Info.Format or Info.UseAsABundle into the map file, or simply renaming the file, is nearly the same from the program's point of view.

>The is just an idea, and I realise it may seem like a hack putting this
>in the key rather than expanding the metadata that we already have...
>but it is also the simplest way, and I think other schemes could do with
>being judged against this as a baseline. In order to work with all key
>types, it might be best that ".FproxyArchive" was appended when

why "FproxyArchive"?? Is it "Fproxy FEC"? No... bundling files into archives is completely unrelated to fproxy, so please do not use its name for it!

>referring to the key, but not actually expected to appear in the
>retrieved key name.
>
>Regards,
>Greg Harewood

nice to have another idea :) (although I do not like it very much :D)

i would like to see some comments from Ian and Oscar, but they only seem to ignore the support mailing list, without knowing that we little suckers have no way to post our mails to tech or devl. Matthew's opinion would be nice, too, about his ZIP@ and his opinion on metadata modification.
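For concreteness, a hypothetical sketch of how a proxy could split a requested key on the proposed ".FproxyArchive" marker into the archive to fetch and the member to serve. Only the marker string comes from the proposal above; the method name and key layout are illustrative.

```java
public class ArchiveKeyDemo {
    // Split "<archive key>.FproxyArchive/<member path>" into its two parts;
    // returns null when the key does not use the archive convention.
    static String[] split(String key) {
        String marker = ".FproxyArchive/";
        int i = key.indexOf(marker);
        if (i < 0) return null; // plain key, fetch as usual
        String archiveKey = key.substring(0, i + marker.length() - 1);
        String member = key.substring(i + marker.length());
        return new String[] { archiveKey, member };
    }

    public static void main(String[] args) {
        String[] parts = split("SSK@some/mysite.FproxyArchive/img/next.gif");
        System.out.println(parts[0]);
        System.out.println(parts[1]);
    }
}
```

The proxy would then retrieve and cache the archive under `parts[0]` and serve `parts[1]` out of it, which is exactly the four-step behaviour the proposal lists.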