Re: OS X 10.6 (Snow Leopard) HFS+ File Compression

2009-11-03 Thread Mike Bombich
I wouldn't blame the rsync team for not wanting to maintain it; it's a
pretty narrow-scope patch affecting only one OS.  I'm pretty motivated
to keep it up, though, so I'll repost my patches to this list when I
update them.  I'll probably get it updated to 3.1.0 in the next month
or so.


Mike

On Nov 1, 2009, at 7:57 PM, Tony wrote:



Mike, thanks for the patch.  Will this patch be maintained in
rsync-patches-3.0.6.tar.gz?


On Oct 28, 2009, at 1:20 AM, Mike Bombich wrote:

HFS compression can be preserved as long as the relevant xattr(s)
and flags on those files are preserved.  A compressed file has the
compressed data in a hidden xattr (com.apple.decmpfs if < 4 KB,
com.apple.ResourceFork if more) and has the UF_COMPRESSED flag (0x20)
set.  When rsync encounters a file like this, it should ignore the
data fork of the file, which will appear to contain normal,
uncompressed data.  It should also pass a special flag to the xattr
calls to expose the decmpfs xattrs.
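A minimal user-space sketch of the detection described above, assuming
the UF_COMPRESSED flag from Mac OS X 10.6's <sys/stat.h> and the
XATTR_SHOWCOMPRESSION option to getxattr() from <sys/xattr.h>; it only
reports whether a file is HFS-compressed and whether the hidden
com.apple.decmpfs xattr becomes visible with that option, and is not
the patch itself:

/* hfs_compress_check.c (hypothetical name): report whether a file is
 * HFS-compressed and whether its hidden decmpfs xattr is visible.
 * Sketch only; assumes Mac OS X 10.6+ headers. */
#include <stdio.h>
#include <sys/stat.h>
#include <sys/xattr.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 1;
    }
    const char *path = argv[1];

    struct stat st;
    if (lstat(path, &st) != 0) {
        perror("lstat");
        return 1;
    }
    if (!(st.st_flags & UF_COMPRESSED)) {
        printf("%s: not HFS-compressed\n", path);
        return 0;
    }

    /* Size query only (NULL buffer).  Without XATTR_SHOWCOMPRESSION the
     * decmpfs xattr is hidden, so the first call is expected to fail (-1). */
    ssize_t hidden = getxattr(path, "com.apple.decmpfs", NULL, 0, 0,
                              XATTR_NOFOLLOW);
    ssize_t shown  = getxattr(path, "com.apple.decmpfs", NULL, 0, 0,
                              XATTR_NOFOLLOW | XATTR_SHOWCOMPRESSION);

    printf("%s: UF_COMPRESSED set; decmpfs size without/with "
           "XATTR_SHOWCOMPRESSION: %zd / %zd bytes\n", path, hidden, shown);
    return 0;
}

If the compressed payload is too large to fit inline it lives in the
resource fork instead, which is why the patch has to preserve both the
xattrs and the flag; reading the data fork normally just returns the
transparently decompressed bytes.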


I've already implemented this in rsync (3.0.6); I just hadn't taken
the time to craft the HFS-compression-specific changes into a patch.
I did that this evening and attached it below.  These are changes
against the 3.0.6 base plus the crtimes, fileflags, and
backup-dir-dels patches.  It should work, at minimum, against the
3.0.6 base plus the fileflags patch (that patch is required).


Let me know if it doesn't work for you; it's entirely possible that I
overlooked something in the extraction.


Mike

rsync_3.0.6-hfs-compression_20091027.diff


On Oct 27, 2009, at 11:08 PM, Matt McCutchen wrote:


On Tue, 2009-10-27 at 23:38 -0400, Tony wrote:

When rsync 3.0.6 copies files with HFS+ File Compression, the new
extended attribute decmpfs is not preserved, the UF_COMPRESSED flag is
not set on the destination, and the destination file is not compressed.

I examined the destination file as described in Ars Technica (with ls
and xattr from a 10.5 Leopard boot): the compressed data is moved from
the resource fork to the data fork, and the extended attributes ('@')
are removed from the file.

As far as I know, only ditto in 10.6 can handle HFS+ File Compression.
(I even tested a 'clone' with Disk Utility (file copy, not block), and
it also failed; a block copy, of course, works.)


Rsync is just reading and writing files via the filesystem API; it
has no access to any of the flags or xattrs used to implement the
compression.

I guess the filesystem doesn't compress new files by default.  If it
had an API to request compression, rsync could use that API when
writing the destination files.  Unfortunately, the API ditto is using
appears to be private to Apple.  See the post from brkirch beginning
"The first thing that I tried to do" on this page:

http://www.macosxhints.com/article.php?story=20090902223042255

So anyone interested in making rsync compress the destination files
would probably have to copy the relevant code from afsctool.  This
could be shared as a patch; I feel quite sure it would not be adopted
in the main version of rsync.

--
Matt



-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html

Re: OS X 10.6 (Snow Leopard) HFS+ File Compression

2009-11-01 Thread Tony


Mike, thanks for the patch.  Will this patch be maintained in
rsync-patches-3.0.6.tar.gz?


On Oct 28, 2009, at 1:20 AM, Mike Bombich wrote:

HFS compression can be preserved as long as the relevant xattr(s)
and flags on those files are preserved.  A compressed file has the
compressed data in a hidden xattr (com.apple.decmpfs if < 4 KB,
com.apple.ResourceFork if more) and has the UF_COMPRESSED flag (0x20)
set.  When rsync encounters a file like this, it should ignore the
data fork of the file, which will appear to contain normal,
uncompressed data.  It should also pass a special flag to the xattr
calls to expose the decmpfs xattrs.


I've already implemented this in rsync (3.0.6); I just hadn't taken
the time to craft the HFS-compression-specific changes into a patch.
I did that this evening and attached it below.  These are changes
against the 3.0.6 base plus the crtimes, fileflags, and
backup-dir-dels patches.  It should work, at minimum, against the
3.0.6 base plus the fileflags patch (that patch is required).


Let me know if it doesn't work for you; it's entirely possible that I
overlooked something in the extraction.


Mike

rsync_3.0.6-hfs-compression_20091027.diff


On Oct 27, 2009, at 11:08 PM, Matt McCutchen wrote:


On Tue, 2009-10-27 at 23:38 -0400, Tony wrote:

When rsync 3.0.6 copies files with HFS+ File Compression, the new
extended attribute decmpfs is not preserved, the UF_COMPRESSED flag is
not set on the destination, and the destination file is not compressed.

I examined the destination file as described in Ars Technica (with ls
and xattr from a 10.5 Leopard boot): the compressed data is moved from
the resource fork to the data fork, and the extended attributes ('@')
are removed from the file.

As far as I know, only ditto in 10.6 can handle HFS+ File Compression.
(I even tested a 'clone' with Disk Utility (file copy, not block), and
it also failed; a block copy, of course, works.)


Rsync is just reading and writing files via the filesystem API; it
has no access to any of the flags or xattrs used to implement the
compression.

I guess the filesystem doesn't compress new files by default.  If it
had an API to request compression, rsync could use that API when
writing the destination files.  Unfortunately, the API ditto is using
appears to be private to Apple.  See the post from brkirch beginning
"The first thing that I tried to do" on this page:

http://www.macosxhints.com/article.php?story=20090902223042255

So anyone interested in making rsync compress the destination files
would probably have to copy the relevant code from afsctool.  This
could be shared as a patch; I feel quite sure it would not be adopted
in the main version of rsync.

--
Matt





-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html

Re: OS X 10.6 (Snow Leopard) HFS+ File Compression

2009-10-28 Thread Tony


When rsync 3.0.6 copies files with HFS+ File Compression, the new
extended attribute decmpfs is not preserved, the UF_COMPRESSED flag is
not set on the destination, and the destination file is not compressed.

I examined the destination file as described in Ars Technica (with ls
and xattr from a 10.5 Leopard boot): the compressed data is moved from
the resource fork to the data fork, and the extended attributes ('@')
are removed from the file.

As far as I know, only ditto in 10.6 can handle HFS+ File Compression.
(I even tested a 'clone' with Disk Utility (file copy, not block), and
it also failed; a block copy, of course, works.)


On Oct 27, 2009, at 7:39 PM, Matt McCutchen wrote:


What kind of special treatment from rsync were you expecting?  I read
http://arstechnica.com/apple/reviews/2009/08/mac-os-x-10-6.ars/3, and
as far as I can tell, the compression is handled entirely by the
filesystem with no intervention from applications needed.

--
Matt

On Tue, 2009-10-27 at 19:31 -0400, Tony wrote:

Are there any patches (or planned updates) to rsync v3.0.6 to handle
the HFS+ File Compression that Apple introduced with Snow Leopard?


--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: OS X 10.6 (Snow Leopard) HFS+ File Compression

2009-10-28 Thread Tony


Thanks.  The C code that brkirch provides takes care of a lot of the
work, so hopefully someone will be able to provide a patch.  (It's been
over 15 years since I did any C programming, so unfortunately I won't
be able to contribute.)


On Oct 28, 2009, at 12:08 AM, Matt McCutchen wrote:

Rsync is just reading and writing files via the filesystem API; it has
no access to any of the flags or xattrs used to implement the
compression.

I guess the filesystem doesn't compress new files by default.  If it
had an API to request compression, rsync could use that API when
writing the destination files.  Unfortunately, the API ditto is using
appears to be private to Apple.  See the post from brkirch beginning
"The first thing that I tried to do" on this page:

http://www.macosxhints.com/article.php?story=20090902223042255

So anyone interested in making rsync compress the destination files
would probably have to copy the relevant code from afsctool.  This
could be shared as a patch; I feel quite sure it would not be adopted
in the main version of rsync.





On Tue, 2009-10-27 at 23:38 -0400, Tony wrote:

When rsync 3.0.6 copies files with HFS+ File Compression, the new
extended attribute decmpfs is not preserved, the UF_COMPRESSED flag is
not set on the destination, and the destination file is not compressed.

I examined the destination file as described in Ars Technica (with ls
and xattr from a 10.5 Leopard boot): the compressed data is moved from
the resource fork to the data fork, and the extended attributes ('@')
are removed from the file.

As far as I know, only ditto in 10.6 can handle HFS+ File Compression.
(I even tested a 'clone' with Disk Utility (file copy, not block), and
it also failed; a block copy, of course, works.)





--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: OS X 10.6 (Snow Leopard) HFS+ File Compression

2009-10-28 Thread Mike Bombich
HFS compression can be preserved as long as the relevant xattr(s) and
flags on those files are preserved.  A compressed file has the
compressed data in a hidden xattr (com.apple.decmpfs if < 4 KB,
com.apple.ResourceFork if more) and has the UF_COMPRESSED flag (0x20)
set.  When rsync encounters a file like this, it should ignore the data
fork of the file, which will appear to contain normal, uncompressed
data.  It should also pass a special flag to the xattr calls to expose
the decmpfs xattrs.

I've already implemented this in rsync (3.0.6); I just hadn't taken the
time to craft the HFS-compression-specific changes into a patch.  I did
that this evening and attached it below.  These are changes against the
3.0.6 base plus the crtimes, fileflags, and backup-dir-dels patches.
It should work, at minimum, against the 3.0.6 base plus the fileflags
patch (that patch is required).

Let me know if it doesn't work for you; it's entirely possible that I
overlooked something in the extraction.

Mike

rsync_3.0.6-hfs-compression_20091027.diff
Description: Binary data
On Oct 27, 2009, at 11:08 PM, Matt McCutchen wrote:

On Tue, 2009-10-27 at 23:38 -0400, Tony wrote:

When rsync 3.0.6 copies files with HFS+ File Compression, the new
extended attribute decmpfs is not preserved, the UF_COMPRESSED flag is
not set on the destination, and the destination file is not compressed.

I examined the destination file as described in Ars Technica (with ls
and xattr from a 10.5 Leopard boot): the compressed data is moved from
the resource fork to the data fork, and the extended attributes ('@')
are removed from the file.

As far as I know, only ditto in 10.6 can handle HFS+ File Compression.
(I even tested a 'clone' with Disk Utility (file copy, not block), and
it also failed; a block copy, of course, works.)

Rsync is just reading and writing files via the filesystem API; it has
no access to any of the flags or xattrs used to implement the
compression.

I guess the filesystem doesn't compress new files by default.  If it
had an API to request compression, rsync could use that API when
writing the destination files.  Unfortunately, the API ditto is using
appears to be private to Apple.  See the post from brkirch beginning
"The first thing that I tried to do" on this page:

http://www.macosxhints.com/article.php?story=20090902223042255

So anyone interested in making rsync compress the destination files
would probably have to copy the relevant code from afsctool.  This
could be shared as a patch; I feel quite sure it would not be adopted
in the main version of rsync.

--
Matt

-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html

OS X 10.6 (Snow Leopard) HFS+ File Compression

2009-10-27 Thread Tony
Are there any patches (or planned updates) to rsync v3.0.6 to handle  
the HFS+ File Compression that Apple introduced with Snow Leopard? 
 
--

Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: OS X 10.6 (Snow Leopard) HFS+ File Compression

2009-10-27 Thread Matt McCutchen
On Tue, 2009-10-27 at 19:31 -0400, Tony wrote:
 Are there any patches (or planned updates) to rsync v3.0.6 to handle  
 the HFS+ File Compression that Apple introduced with Snow Leopard? 

What kind of special treatment from rsync were you expecting?  I read
http://arstechnica.com/apple/reviews/2009/08/mac-os-x-10-6.ars/3, and
as far as I can tell, the compression is handled entirely by the
filesystem with no intervention from applications needed.

-- 
Matt

-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: OS X 10.6 (Snow Leopard) HFS+ File Compression

2009-10-27 Thread Matt McCutchen
On Tue, 2009-10-27 at 23:38 -0400, Tony wrote:
 When rsync 3.0.6 copies files with HFS+ File Compression, the new
 extended attribute decmpfs is not preserved, the UF_COMPRESSED flag is
 not set on the destination, and the destination file is not compressed.

 I examined the destination file as described in Ars Technica (with ls
 and xattr from a 10.5 Leopard boot): the compressed data is moved from
 the resource fork to the data fork, and the extended attributes ('@')
 are removed from the file.

 As far as I know, only ditto in 10.6 can handle HFS+ File Compression.
 (I even tested a 'clone' with Disk Utility (file copy, not block), and
 it also failed; a block copy, of course, works.)

Rsync is just reading and writing files via the filesystem API; it has
no access to any of the flags or xattrs used to implement the
compression.

I guess the filesystem doesn't compress new files by default.  If it had
an API to request compression, rsync could use that API when writing the
destination files.  Unfortunately, the API ditto is using appears to be
private to Apple.  See the post from brkirch beginning "The first thing
that I tried to do" on this page:

http://www.macosxhints.com/article.php?story=20090902223042255

So anyone interested in making rsync compress the destination files
would probably have to copy the relevant code from afsctool.  This could
be shared as a patch; I feel quite sure it would not be adopted in the
main version of rsync.

-- 
Matt

-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: file compression on target side

2009-01-20 Thread Sven Hartrumpf
Mon, 19 Jan 2009 20:05:07 -0500, magawake wrote:

 Using Red Hat 4.5; I have been researching this for weeks, and all signs
 and wise men (such as yourself) point to the Holy Grail -- ZFS!

You could try FuseCompress: http://www.miio.net/fusecompress/
The author claims that he improved its speed recently.


-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html

file compression on target side

2009-01-19 Thread Mag Gam
Hello All,

I have been using rsync to backup several filesystems by using Mike
Rubel's hard link method
(http://www.mikerubel.org/computers/rsync_snapshots/).

The problem is, I am backing up a lot of ASCII .log, .csv, and .txt
files.  These files are large and can range anywhere from 1GB to 30GB.
I was wondering if, on the target side (the backup side), I can use
some sort of compression.  I am using the ext3 filesystem.

Any ideas?

TIA
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: file compression on target side

2009-01-19 Thread Mag Gam
Thanks all.

I figured this was the only solution available.  Too bad I am using
Linux, and I don't think my RAID controller is supported under Solaris.



On Mon, Jan 19, 2009 at 10:41 AM, Kyle Lanclos lanc...@ucolick.org wrote:
 You wrote:
 The problem is, I am backing up a lot of ASCII .log, .csv, and .txt
 files.  These files are large and can range anywhere from 1GB to 30GB.
 I was wondering if, on the target side (the backup side), I can use
 some sort of compression.  I am using the ext3 filesystem.

 One could always switch to the ZFS filesystem; compression is but one
 of many good reasons to do so.

 I'm not sure what an equivalent Linux-based solution would be.

 --Kyle

-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: file compression on target side

2009-01-19 Thread Ryan Malayter
On Mon, Jan 19, 2009 at 12:33 PM, Ryan Malayter malay...@gmail.com wrote:
 You can switch to a filesystem that supports transparent encryption
 (Reiser, ZFS, NTFS, others depending on your OS). Rsync would be
 completely unaware of any file-system level compression in that case.

Oops. I meant transparent compression, not transparent encryption.
-- 
RPM
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: file compression on target side

2009-01-19 Thread Mag Gam
yep.

ZFS on fuse is just too slow. I suppose I will wait for ZFS on Linux
(pipe dream) or try to switch to Solaris 10 on x86

On Mon, Jan 19, 2009 at 1:34 PM, Ryan Malayter malay...@gmail.com wrote:
 On Mon, Jan 19, 2009 at 12:33 PM, Ryan Malayter malay...@gmail.com wrote:
 You can switch to a filesystem that supports transparent encryption
 (Reiser, ZFS, NTFS, others depending on your OS). Rsync would be
 completely unaware of any file-system level compression in that case.

 Oops. I meant transparent compression, not transparent encryption.
 --
 RPM
 --
 Please use reply-all for most replies to avoid omitting the mailing list.
 To unsubscribe or change options: 
 https://lists.samba.org/mailman/listinfo/rsync
 Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html

-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: file compression on target side

2009-01-19 Thread Ryan Malayter
You can switch to a filesystem that supports transparent encryption
(Reiser, ZFS, NTFS, others depending on your OS). Rsync would be
completely unaware of any file-system level compression in that case.

Or you can use gzip with the --rsyncable option. Not all distributions
of gzip support --rsyncable, as the last officially stable release
of gzip was way back in 2003.
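As a rough illustration of the idea behind an rsync-friendly compressor
(a sketch of the general technique, not the actual gzip --rsyncable or
zlib patch): the deflate stream is restarted at boundaries derived from
the input content itself, so an edit early in a file only disturbs the
compressed blocks around it and rsync's rolling checksum can
resynchronize on the rest.  The 32-byte window and 4 KB boundary mask
below are illustrative assumptions; build against zlib, e.g.
cc demo.c -lz.

/* Sketch of rsync-friendly compression: reset the compressor
 * (Z_FULL_FLUSH) at content-defined boundaries so compressed output
 * stays locally aligned between versions of a file.  Window size and
 * boundary rule are illustrative, not the real gzip patch's values. */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

#define WINDOW 32            /* bytes in the rolling sum (assumption) */
#define BOUNDARY_MASK 4095   /* boundary when sum % 4096 == 0 (assumption) */

static int emit(z_stream *z, unsigned char *buf, size_t len, int flush, FILE *out)
{
    unsigned char outbuf[16384];
    z->next_in = buf;
    z->avail_in = (uInt)len;
    do {                       /* standard zlib drain loop, as in zpipe.c */
        z->next_out = outbuf;
        z->avail_out = sizeof outbuf;
        if (deflate(z, flush) == Z_STREAM_ERROR)
            return -1;
        if (fwrite(outbuf, 1, sizeof outbuf - z->avail_out, out) !=
            sizeof outbuf - z->avail_out)
            return -1;
    } while (z->avail_out == 0);
    return 0;
}

int compress_rsync_friendly(FILE *in, FILE *out)
{
    z_stream z;
    memset(&z, 0, sizeof z);
    /* windowBits 15+16 asks zlib to write gzip-format output */
    if (deflateInit2(&z, Z_DEFAULT_COMPRESSION, Z_DEFLATED, 15 + 16, 8,
                     Z_DEFAULT_STRATEGY) != Z_OK)
        return -1;

    unsigned char win[WINDOW] = {0}, buf[65536];
    unsigned long sum = 0, pos = 0;
    size_t n;

    while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
        size_t start = 0;
        for (size_t i = 0; i < n; i++) {
            sum += buf[i] - win[pos % WINDOW];  /* rolling sum of last WINDOW bytes */
            win[pos % WINDOW] = buf[i];
            pos++;
            if (pos >= WINDOW && (sum & BOUNDARY_MASK) == 0) {
                /* content-defined boundary: flush and reset compressor state */
                if (emit(&z, buf + start, i + 1 - start, Z_FULL_FLUSH, out) < 0)
                    return -1;
                start = i + 1;
                sum = pos = 0;
                memset(win, 0, sizeof win);
            }
        }
        if (start < n && emit(&z, buf + start, n - start, Z_NO_FLUSH, out) < 0)
            return -1;
    }
    if (emit(&z, NULL, 0, Z_FINISH, out) < 0)   /* final block + gzip trailer */
        return -1;
    deflateEnd(&z);
    return 0;
}

int main(void)
{
    return compress_rsync_friendly(stdin, stdout) == 0 ? 0 : 1;
}

The real gzip patch does this inside gzip itself; the point is only
that periodic full flushes give rsync's delta algorithm stable block
boundaries to lock onto, at the cost of slightly worse compression.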

On Mon, Jan 19, 2009 at 9:14 AM, Mag Gam magaw...@gmail.com wrote:
 Hello All,

 I have been using rsync to backup several filesystems by using Mike
 Rubel's hard link method
 (http://www.mikerubel.org/computers/rsync_snapshots/).

 The problem is, I am backing up a lot of ASCII .log, .csv, and .txt
 files.  These files are large and can range anywhere from 1GB to 30GB.
 I was wondering if, on the target side (the backup side), I can use
 some sort of compression.  I am using the ext3 filesystem.

 Any ideas?

 TIA




-- 
RPM
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: file compression on target side

2009-01-19 Thread Ryan Malayter
On Mon, Jan 19, 2009 at 2:34 PM, Mag Gam magaw...@gmail.com wrote:

 ZFS on fuse is just too slow. I suppose I will wait for ZFS on Linux
 (pipe dream) or try to switch to Solaris 10 on x86

There will never be ZFS in the Linux kernel because of license
incompatibilities.  The Linux answer to ZFS is btrfs, which is still in
development, and not much of an answer in my opinion ;-).

Also, there does not appear to be any stock Linux kernel filesystem
that supports transparent compression read/write.  SquashFS is
read-only.  What Linux distribution are you using?  It might bundle a
patch or other filesystems.

I would suggest trying gzip --rsyncable. Compress the files with gzip
--rsyncable at the source, and rsync should be able to find
significant matches (especially for updates of log files).
-- 
RPM
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: file compression on target side

2009-01-19 Thread Mag Gam
Using Red Hat 4.5; I have been researching this for weeks, and all signs
and wise men (such as yourself) point to the Holy Grail -- ZFS!

On a side note, neither btrfs nor ext4 will help us much.  Strange that
ZFS is being ported to FreeBSD while Linux is held up by a license
dispute between the GPL and the CDDL.  I guess the GPL isn't all it's
cracked up to be... (no flame intended).

Either way, thanks for everyone's time and replies.

TIA



On Mon, Jan 19, 2009 at 4:14 PM, Ryan Malayter malay...@gmail.com wrote:
 On Mon, Jan 19, 2009 at 2:34 PM, Mag Gam magaw...@gmail.com wrote:

 ZFS on fuse is just too slow. I suppose I will wait for ZFS on Linux
 (pipe dream) or try to switch to Solaris 10 on x86

 There will never be ZFS in the Linux kernel because of license
 incompatibilities.  The Linux answer to ZFS is btrfs, which is still in
 development, and not much of an answer in my opinion ;-).

 Also, there does not appear to be any stock Linux kernel filesystem
 that supports transparent compression read/write.  SquashFS is
 read-only.  What Linux distribution are you using?  It might bundle a
 patch or other filesystems.

 I would suggest trying gzip --rsyncable. Compress the files with gzip
 --rsyncable at the source, and rsync should be able to find
 significant matches (especially for updates of log files).
 --
 RPM

-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RSync with File Compression

2005-06-01 Thread Andrew Embury
Hello,

Rsync is great; thanks to all who work on it.  Does anyone have any good
strategies for keeping the backups on the remote side compressed on disk?
I'm under the impression that gzipping the files would not work, as they
would not be available to rsync in the uncompressed state for subsequent
backups.  A compressed filesystem would be perfect, but the only
references I could find were for a non-production-quality kernel mod for
ext2 (I'm running ext3).  Has anyone else tackled this issue?

Thanks,

Drew
-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


re: RSync with File Compression

2005-06-01 Thread Kevin Day



Drew-

There is a build of GZip (under Debian, I think) that has an
rsync-friendly option.  I have posted a patch for zlib to this
newsgroup with code that makes zlib rsync friendly (along with some
extra optimizations over the GZip implementation).

Take a look in the list archives and you'll find the patch.

Cheers!

- K








Original Message




 Hello,

 Rsync is great; thanks to all who work on it.  Does anyone have any good
 strategies for keeping the backups on the remote side compressed on disk?
 I'm under the impression that gzipping the files would not work, as they
 would not be available to rsync in the uncompressed state for subsequent
 backups.  A compressed filesystem would be perfect, but the only
 references I could find were for a non-production-quality kernel mod for
 ext2 (I'm running ext3).  Has anyone else tackled this issue?

 Thanks,

 Drew


-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html

Re: RSync with File Compression

2005-06-01 Thread Wayne Davison
On Wed, Jun 01, 2005 at 01:20:45PM -0700, Kevin Day wrote:
 There is a build of GZip (under Debian I think) that has an rsync
 friendly option.

This is useful if the files are compressed at the source.  If you want
only the destination side to be compressed, you'll need something beyond
a stock rsync.  One option that I'm aware of is the BackupPC program
which is referenced on the resources page of rsync's website.

 I have posted a patch for zlib to this newsgroup with code that makes
 zlib rsync friendly (along with some extra optimizations over the GZip
 implementation).

The patch is in the patches dir of the rsync distribution under the
name gzip-rsyncable-checksum.diff.

..wayne..
-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: File compression

2001-05-30 Thread Dave Dykstra

On Wed, May 30, 2001 at 11:23:56AM -0400, Brian Johnson wrote:
 I'm using rsync to backup my main hard drive to a second hard drive.
 Although it is currently a local hard drive, I intend to switch to backup to
 a remote location.
 
 My question is:
 It seems that rsync can compress the data for transmission, but is there a
 method to leave the files compressed on the backup hard drive?  I know I
 don't want one large zip file or a Windows-style compressed drive because
 restoring damaged data would be much more difficult (possibly impossible).

No, there isn't, although it was talked about a long time ago.  It would
make rsync's timestamp and filesize comparisons tough, although somebody
said that the size information, at least, is stored in the header of a
gzipped file.

- Dave Dykstra
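
For reference, the gzip format (RFC 1952) records the uncompressed size
as a 4-byte little-endian ISIZE field at the very end of each member,
and only modulo 2^32.  A small sketch of reading it, assuming a
single-member .gz file:

/* Print the uncompressed size recorded in a gzip file's trailing ISIZE
 * field (RFC 1952).  Only meaningful for single-member files, and the
 * value wraps for inputs of 4 GB or more. */
#include <stdio.h>
#include <stdint.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s file.gz\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }

    unsigned char isize[4];
    if (fseek(f, -4L, SEEK_END) != 0 || fread(isize, 1, 4, f) != 4) {
        perror("read ISIZE");
        fclose(f);
        return 1;
    }
    fclose(f);

    uint32_t size = (uint32_t)isize[0] | ((uint32_t)isize[1] << 8) |
                    ((uint32_t)isize[2] << 16) | ((uint32_t)isize[3] << 24);
    printf("uncompressed size (mod 2^32): %u bytes\n", (unsigned)size);
    return 0;
}

Because the value wraps at 4 GB and covers only the last member, it is
a hint rather than something rsync could rely on for exact size
comparisons.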