"roland" <[EMAIL PROTECTED]> wrote:
> > there is also no filesystem based approach to compressing/decompressing a 
> > whole filesystem. you can have 499gb of data on a 500gb partition - and if 
> > you need some more space you would think turning on compression on that fs 
> > would solve your problem. but compression only affects files written after 
> > it is enabled. i wish there were some zfs set compression=gzip <zfs> , 
> > zfs compress <fs>, zfs uncompress <fs>, and it would be nice if we could 
> > get compression information for single files. (as with ntfs)
 
one could kludge this by setting the desired compression property on the tree, 
then using a perl script to walk it: copy each file to a tmp file, rename the 
original to an arbitrary name, rename the tmp file to the original's name, 
restore the original file's metadata on the new copy, run a checksum sanity 
check, and finally delete the uncompressed original.
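a rough sketch of that walk in python (names and structure are my own, not an 
existing tool; it folds the two renames into a single atomic rename, preserves 
mtime/permissions via copy2, and deliberately skips symlinks - hard links, 
ACLs, and sparse files are NOT handled, and nothing else should be writing to 
the tree while it runs):

```python
#!/usr/bin/env python3
"""Rewrite every file under a tree so its blocks are written fresh,
picking up whatever compression property is now set on the dataset.
Hypothetical sketch only - not a supported ZFS operation."""
import hashlib
import os
import shutil
import sys

def sha256(path):
    """Checksum used for the sanity check before replacing the original."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def rewrite(path):
    tmp = path + ".rewrite.tmp"       # arbitrary temporary name
    shutil.copy2(path, tmp)           # copies data plus mtime/permissions
    if sha256(tmp) != sha256(path):   # checksum sanity check
        os.unlink(tmp)
        raise IOError("copy mismatch, original left untouched: %s" % path)
    os.rename(tmp, path)              # atomically replaces the original

def walk_and_rewrite(root):
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            p = os.path.join(dirpath, name)
            if os.path.isfile(p) and not os.path.islink(p):
                rewrite(p)

if __name__ == "__main__":
    walk_and_rewrite(sys.argv[1])
```

note that this needs roughly one file's worth of free space at a time, which 
is exactly what the 499gb-of-500gb situation above may not have.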
 
-=dave
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss