something in the extraction.
Mike
rsync_3.0.6-hfs-compression_20091027.diff
On Oct 27, 2009, at 11:08 PM, Matt McCutchen wrote:
On Tue, 2009-10-27 at 23:38 -0400, Tony wrote:
When rsync 3.0.6 copies files with HFS+ File Compression, the new
extended attribute decmpfs is not preserved, and the UF_COMPRESSED
flag is not set on the destination and the destination file is not
compressed.
I examined the destination file as described in Ars Technica (with ls
and xattr from a 10.5 Leopard boot), and the compressed
Rsync would probably have to copy the relevant code from afsctool. This
could be shared as a patch; I feel quite sure it would not be adopted in
the main version of rsync.
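Tony's manual check can also be scripted. A minimal sketch, assuming a macOS or BSD system where `os.stat` exposes `st_flags` (Python's `stat` module defines `UF_COMPRESSED` on every platform, but only HFS+/APFS ever sets the flag; elsewhere this simply reports False):

```python
import os
import stat

def is_hfs_compressed(path):
    """Best-effort check for the UF_COMPRESSED file flag.

    st_flags only exists in stat results on macOS/BSD; on other
    platforms we fall back to 0 and report False.
    """
    st = os.stat(path)
    flags = getattr(st, "st_flags", 0)
    return bool(flags & stat.UF_COMPRESSED)
```

On a Snow Leopard source, compressed system files would report True, while an rsync 3.0.6 copy of the same file would report False, matching the observation above.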
Are there any patches (or planned updates) to rsync v3.0.6 to handle
the HFS+ File Compression that Apple introduced with Snow Leopard?
--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo
On Tue, 2009-10-27 at 19:31 -0400, Tony wrote:
Are there any patches (or planned updates) to rsync v3.0.6 to handle
the HFS+ File Compression that Apple introduced with Snow Leopard?
What kind of special treatment from rsync were you expecting? I read
http://arstechnica.com/apple/reviews
Mon, 19 Jan 2009 20:05:07 -0500, magawake wrote:
Using Redhat 4.5; I have been researching this for weeks and all signs
and wisemen (such as yourself) point to the Holy Grail -- ZFS!
You could try FuseCompress: http://www.miio.net/fusecompress/
The author claims that he improved its speed
Hello All,
I have been using rsync to backup several filesystems by using Mike
Rubel's hard link method
(http://www.mikerubel.org/computers/rsync_snapshots/).
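For context, the heart of that method is making each new snapshot a hard-link copy of the previous one, so unchanged files cost no extra space; rsync then re-creates (and thereby un-links) only the files that changed. A minimal sketch of the rotation step using Python's stdlib (directory names are illustrative; Rubel's own scripts use `cp -al` followed by `rsync -a --delete`):

```python
import os
import shutil

def rotate_snapshot(prev, new):
    # Equivalent of `cp -al prev new`: recreate the directory tree,
    # but hard-link every file instead of copying its data.
    shutil.copytree(prev, new, copy_function=os.link)
    # A real backup run would now do: rsync -a --delete /data/ new/
    # which replaces only the files that actually changed.
```

After rotation, an unchanged file's link count rises by one, which is how you can verify that the snapshots share storage.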
The problem is, I am backing up a lot of ASCII .log, .csv, and .txt
files. These files are large and can range anywhere from 1GB to 30GB.
Thanks all.
I figured this was the only solution available. Too bad I am using
Linux and don't think my RAID controller is supported under Solaris.
On Mon, Jan 19, 2009 at 10:41 AM, Kyle Lanclos lanc...@ucolick.org wrote:
You wrote:
The problem is, I am backing up a lot of ASCII .log, csv,
On Mon, Jan 19, 2009 at 12:33 PM, Ryan Malayter malay...@gmail.com wrote:
You can switch to a filesystem that supports transparent encrytpion
(Reiser, ZFS, NTFS, others depending on your OS). Rsync would be
completely unaware of any file-system level compression in that case.
Oops. I meant compression.
yep.
ZFS on fuse is just too slow. I suppose I will wait for ZFS on Linux
(pipe dream) or try to switch to Solaris 10 on x86
On Mon, Jan 19, 2009 at 1:34 PM, Ryan Malayter malay...@gmail.com wrote:
On Mon, Jan 19, 2009 at 12:33 PM, Ryan Malayter malay...@gmail.com wrote:
You can switch to a filesystem that supports transparent encrytpion
(Reiser, ZFS, NTFS, others depending on your OS). Rsync would be
completely unaware of any file-system level compression in that case.
Or you can use gzip with the --rsyncable option. Not all distributions
of gzip support it.
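The effect --rsyncable counteracts is easy to demonstrate: with ordinary deflate, a one-byte edit near the front of a file changes the compressed stream from that point onward, so the block checksums rsync's delta algorithm compares no longer line up. A small stdlib illustration (the exact byte counts will vary):

```python
import gzip

original = b"2009-01-19 INFO started\n" * 4000
edited = b"X" + original[1:]  # a single-byte edit at the front

# mtime=0 keeps the gzip headers deterministic for comparison.
c_orig = gzip.compress(original, mtime=0)
c_edit = gzip.compress(edited, mtime=0)

# Count positions where the two compressed streams still agree; with
# plain deflate the edit ripples through the rest of the stream,
# leaving rsync few common blocks to skip.
agree = sum(a == b for a, b in zip(c_orig, c_edit))
print(len(c_orig), len(c_edit), agree)
```

The --rsyncable mode (originally a Debian patch, later merged into upstream gzip) resets the compressor at deterministic points so that such an edit stays localized instead of perturbing the whole stream.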
On Mon, Jan 19, 2009 at 2:34 PM, Mag Gam magaw...@gmail.com wrote:
ZFS on fuse is just too slow. I suppose I will wait for ZFS on Linux
(pipe dream) or try to switch to Solaris 10 on x86
There will never be ZFS in the Linux kernel because of license
incompatibilities. The Linux answer to ZFS is
On a side note, neither btrfs nor ext4 will help us much. Strange that
ZFS is being ported to FreeBSD, yet on Linux there is a license dispute
between the GPL and the CDDL? I guess the GPL isn't
Hello,
Rsync is great, thanks to all who work on it. Does anyone have any good
strategies for keeping the backups on the remote side compressed on disk?
I'm under the impression that gzipping the files would not work as they
would not be available to rsync in the uncompressed state for
Drew-
There is a build of GZip (under Debian, I think) that has an
rsync-friendly option. I have posted a patch for zlib to this newsgroup
with code that makes zlib rsync friendly (along with some extra
optimizations over the GZip implementation). Take a look in the list
archives and
On Wed, Jun 01, 2005 at 01:20:45PM -0700, Kevin Day wrote:
There is a build of GZip (under Debian I think) that has an rsync
friendly option.
This is useful if the files are compressed at the source. If you want
only the destination side to be compressed, you'll need something beyond
a stock rsync.
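One shape that "something beyond" could take, as a hedged sketch: run rsync normally, then make a post-transfer pass that gzips the destination tree. Note this sacrifices exactly what is warned about above: on the next run, a stock rsync no longer finds the original filenames, so delta transfer against the compressed copies is lost. All names here are illustrative:

```python
import gzip
import os
import shutil

def compress_destination(root):
    """Gzip every regular file under root after the rsync pass."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.endswith(".gz"):
                continue  # already compressed on an earlier run
            src = os.path.join(dirpath, name)
            # Stream the file into a .gz sibling, then drop the original.
            with open(src, "rb") as fin, gzip.open(src + ".gz", "wb") as fout:
                shutil.copyfileobj(fin, fout)
            os.remove(src)
```

A filesystem with transparent compression (as suggested elsewhere in these threads) avoids this trade-off entirely, since rsync keeps seeing the uncompressed names and contents.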
On Wed, May 30, 2001 at 11:23:56AM -0400, Brian Johnson wrote:
I'm using rsync to backup my main hard drive to a second hard drive.
Although it is currently a local hard drive, I intend to switch to backup to
a remote location.
My question is:
It seems that rsync can compress the data for