Re: odd behavior on remote

2008-05-08 Thread George Georgalis
On Thu 08 May 2008 at 11:52:08 AM -0400, George Georgalis wrote:
>I've been using rsync for some time (years) to generate
>many hardlink snapshots per day; but I'm seeing an odd
>new problem today.

OOOh, nevermind...

Filesystem   Size  Used  Avail  Capacity    iused     ifree  %iused  Mounted on
/dev/raid0h  312G  312G  -16G       105%  7502943  33394975     18%  /ub0

I'm out of disk space ;)
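
In case anyone else lands here: a pre-flight check in the snapshot
script would have caught this. A rough sketch, assuming the weekly
trees live under /ub0/bk/3 per my ./0 ./1 ./2 ./3 layout (the 5 GB
threshold is arbitrary):

  # abort-or-prune if /ub0 is low on space before starting a run
  avail=$(df -k /ub0 | awk 'NR==2 {print $4}')
  if [ "$avail" -lt 5242880 ]; then
      echo "low space on /ub0 (${avail} KB), pruning oldest weekly" >&2
      oldest=$(ls -d /ub0/bk/3/*/ | head -n 1)   # timestamps sort chronologically
      [ -n "$oldest" ] && rm -rf "$oldest"
  fi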

// George


-- 
George Georgalis, information system scientist <
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


odd behavior on remote

2008-05-08 Thread George Georgalis
I've been using rsync for some time (years) to generate
many hardlink snapshots per day; but I'm seeing an odd
new problem today.

The remote/destination host gets a file list from the
source machine via ssh and begins to write files until
it "hangs". On this run only one file was transferred; on
other runs many screenfuls went across.

+ rsync --recursive --links --perms --times --group --owner --devices \
    --specials --numeric-ids --protocol=29 --verbose --progress \
    --exclude tmp --exclude *.tmp --exclude spool --exclude *.core \
    --exclude *.boot --exclude *.filepart --exclude *.lock \
    --exclude *.nobak --exclude .RDATA --exclude /repo \
    --exclude /sandbox --exclude /soft/dist \
    --link-dest=/ub0/bk/1/2008.04.29.0010.00//center //center/ \
    lou.grid://ub0/bk/0/2008.05.08.1051.49//center/
building file list ...
580347 files to consider
comp/GridTest/101v6.csv

the dest host is rsync  version 2.6.9  protocol version 29
and the source host is rsync  version 2.6.9  protocol version 29

In the above run I specified the protocol because I
assumed the src host had a newer version. The src host
has just been set up, but the destination host has been
receiving these snapshots for a while.

After the hang, the destination host seems in perfect
order (no login or disk or other observable problems). I see
the ssh connection from source to destination is still
open, but the remote rsync pids have all ended with no
indication of error -- there is no rsync in the process
tree at all; below, the rsync shell is/was 26770, now
with no children...

 | | |-+= 03778 root sshd: [EMAIL PROTECTED] 
 | | | \--= 17062 root -ksh 
 | | \--= 26770 root sshd: [EMAIL PROTECTED] 
 | |-+= 00594 root nfsd: master 
 | | |--- 00475 root nfsd: server 
 | | |--- 00601 root nfsd: server 

The only problem I see on the source host is that the rsync
command doesn't complete.  What can I check? What is
going on here?
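
For the record, a few quick things worth checking on the receiving
side in this situation; the first one is what bit me (see the
follow-up at the top of this page):

  df -h /ub0                 # free space -- the culprit here: 105% capacity
  df -i /ub0                 # free inodes
  ps axl | grep '[r]sync'    # is any receiver process still alive?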

// George


-- 
George Georgalis, information system scientist <
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: using rsync on raw device

2008-03-31 Thread George Georgalis
On Sun, Mar 30, 2008 at 03:20:59PM -0400, Matt McCutchen wrote:
>On Sun, 2008-03-30 at 15:00 -0400, George Georgalis wrote:
>> I'm trying to use rsync to manage a raw disk image file.
>> 
>> rsync --checksum --perms --owner --group --sparse --partial --progress \
>>  192.168.80.189:/dev/rwd0d /u0510a/rwd0d.img
>> skipping non-regular file "rwd0d"
>
>Use the --copy-devices option added by this patch:
>
>http://rsync.samba.org/ftp/rsync/patches/copy-devices.diff

Hey, this looks like a great patch, thanks!

I don't have an environment to test it at the moment,
but I'll give it a whirl when I do.
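
Presumably the invocation would then look something like this
(untested; --copy-devices per the patch, other options as in my
original command):

  rsync --copy-devices --checksum --perms --owner --group --sparse \
      --partial --progress 192.168.80.189:/dev/rwd0d /u0510a/rwd0d.img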

// George


-- 
George Georgalis, information system scientist <
-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


using rsync on raw device

2008-03-30 Thread George Georgalis
Hi -- congratulations on the 3.0 release!

I'm trying to use rsync to manage a raw disk image file.

rsync --checksum --perms --owner --group --sparse --partial --progress \
192.168.80.189:/dev/rwd0d /u0510a/rwd0d.img
skipping non-regular file "rwd0d"

sent 20 bytes  received 69 bytes  178.00 bytes/sec
total size is 0  speedup is 0.00


rsync  version 2.6.9  protocol version 29
Copyright (C) 1996-2006 by Andrew Tridgell, Wayne Davison, and others.
<http://rsync.samba.org/>
Capabilities: 64-bit files, socketpairs, hard links, symlinks, batchfiles,
  inplace, IPv6, 64-bit system inums, 64-bit internal inums


The idea being that I could use the rsync algorithm to
update /u0510a/rwd0d.img, which I could then deploy with
the dd command to /dev/rwd0d on the target host (a UFS
filesystem, btw).

But it's obviously not happy treating a device as a
file.  Is there some way around this? I don't see an
appropriate option in the man page.

Thanks!
// George


-- 
George Georgalis, information system scientist <
-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: --hard-links performance

2007-07-14 Thread George Georgalis
On Wed, Jul 11, 2007 at 05:47:00PM -0400, [EMAIL PROTECTED] wrote:
>Date: Wed, 11 Jul 2007 01:26:18 -0400
>From: "George Georgalis" <[EMAIL PROTECTED]>
>
>the program is http://www.ka9q.net/code/dupmerge/
>there are 200 lines of well commented C; however
>there may be a bug which allocates too much memory
>(one block per file); so my application runs out. :\
>If you (anyone) can work it out and/or bring it into
>rsync as a new feature, that would be great. Please
>keep the author and myself in the loop!
>
>Do a search for "faster-dupemerge"; you'll find mentions of it in the
>dirvish archives, where I describe how I routinely use it to hardlink
>together filesystems in the half-terabyte-and-above range without
>problems on machines that are fairly low-end these days (a gig of RAM,
>a gig or so of swap, very little of which actually gets used by the
>merge).  Dirvish uses -H in rsync to do most of the heavy lifting, but
>large movements of files from one directory to another between backups
>won't be caught by rsync*.  So I follow dirvish runs with a run of
>faster-dupemerge across the last two snapshots and across every
>machine being backed up (e.g., one single run that includes two
>snapshots per backed-up machine); that not only catches file movements
>within a single machine, but also links together backup files -across-
>machines, which is quite useful when you have several machines which
>share a lot of similar files (e.g., the files in the distribution
>you're running), or if a file moves from one machine to another, etc,
>and saves considerable space on the backup host.  [You can also trade
>off speed for space, e.g., since the return on hardlinking zillions of
>small files is relatively low compared to a few large ones, you can
>also specify "only handle files above 100K" or whatever (or anything
>else you'd like as an argument to "find") and thus considerably speed
>up the run while not losing much in the way of space savings; I
>believe I gave some typical figures in one my posts to the dirvish
>lists.  Also, since faster-dupemerge starts off by sorting the results
>of the "find" by size, you can manually abort it at any point and it
>will have merged the largest files first.]
>
>http://www.furryterror.org/~zblaxell/dupemerge/dupemerge.html is the
>canonical download site, and mentions various other approaches and
>their problems.  (Note that workloads such as mine will also require
>at least a gig of space in some temporary directory that's used by the
>sort program; fortunately, you can specify on the command line where
>that temp directory will be, and it's less than 0.2% of the total
>storage of the filesytem being handled.)
>
>* [Since even fuzzy-match only looks in the current directory, I
>believe, unless later versions can be told to look elsewhere as well
>and I've somehow missed that---if I -have- missed that, it'd be a nice
>addition to be able to specify extra directories (and/or trees) in
>which fuzzy-match should look, although in the limit that might
>require a great deal of temporary space and run slowly.]


Thanks for the notes. I keep ./0 ./1 ./2 ./3, which are incomplete,
sub-day, daily, and weekly hardlink snapshots, with a system to
move/purge the timestamp directories between them. I'm planning
to run *some*sort*of*dupmerge* individually on ./1 ./2 ./3
each time they get updated. This is to address multiple users
downloading the same source etc., i.e., files not necessarily in
adjacent snapshots whose space can nonetheless be recovered by
hardlinking across the various weekly snapshots; see the sketch
below.
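
A rough sketch of the kind of pass I have in mind (assumes GNU
md5sum and no whitespace in paths; a real dupmerge would also
compare file contents and preserve the right status, per the
quoted advice, only files above 100K are considered):

  # hardlink identical large files across the snapshot trees
  find ./1 ./2 ./3 -type f -size +100k -print0 | xargs -0 md5sum |
  sort | awk 'x[$1] { print x[$1], $2; next } { x[$1] = $2 }' |
  while read keep dup; do
      ln -f "$keep" "$dup"    # replace the duplicate with a hardlink
  done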

I'm working on a feature to preserve the status of the newer
ctime while linking to the older mtime
(http://metrg.net/pub/script/dupmerge.sh), because I'm revisiting
the system after a recursive owner/mode change caused a 15 GB
hit. Maybe I'll use or bring in faster-dupemerge.

Is there a way to make rsync apply newer status to an older
inode, when only that has changed?
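
In a plain run rsync does already fix up attributes in place when
only they have changed; --itemize-changes shows it (a leading "."
means no data is transferred, and the p/o/g flags mark which of
perms/owner/group were touched on the existing file). Paths here
are illustrative:

  rsync -a -i --numeric-ids src/ dest/

My problem, as I understand it, is that with --link-dest an
attribute-only change prevents the hardlink and costs a full new
copy, which I take to be the 15 GB hit above.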

Regards,
// George


-- 
George Georgalis, information systems scientist <
-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: --hard-links performance

2007-07-10 Thread George Georgalis
On Fri, Jun 22, 2007 at 03:33:31PM -0400, George Georgalis wrote:
>On Tue, Jun 05, 2007 at 11:11:27AM -0700, Chuck Wolber wrote:
>>On Tue, 5 Jun 2007, Paul Slootman wrote:
>>
>>> > In any case, what's the general consensus behind using the 
>>> > --hard-links option on large (100GB and above) images? Does it still 
>>> > use a ton of memory? Or has that situation been alleviated?
>>> 
>>> The size of the filesystem isn't relevant, the number of hard-linked 
>>> files is. It still uses a certain amount of memory for each hard-linked 
>>> file, but the situation is a lot better than with earlier rsync 
>>> versions. (As always, make sure you use the newest version.)
>>
>>In our case, we store images as hardlinks and would like an easy way to 
>>migrate images from one backup server to another. We currently do it with 
>>a script that does a combination of rsync'ing and cp -al. Our layout is 
>>similar to:
>>
>>image_dir
>>| -- img1
>>| -- img2 (~99% hardlinked to img1)
>>| -- img3 (~99% hardlinked to img2)
>>   .
>>   .
>>   .
>>` -- imgN (~99% hardlinked to img(N-1))
>>
>>
>>Each image in image_dir is hundreds of thousands of files. It seems to me 
>>that even a small amount of memory for each hardlinked file is going to 
>>clobber even the most stout of machines (at least by 2007 standards) if I 
>>tried a wholesale rsync of image_dir using --hard-links. No?
>>
>>If so, then is a "hard link rich environment" an assumption that can be 
>>used to make an optimization of some sort?
>
>I had a C program which would scan directory points and on some
>criteria, (I forget exactly, size and mtime?), it would decide to
>unlink one file and link the name to the other. I could look for
>it but no guarantees I'll find it, or soon... it was designed for
>identical files with different names.
>
>you could tar transfer then minimize with the program. of course
>everyone on this list would prefer to use rsync, maybe the
>algorithm could be integrated in? :) maybe I can find the code.
>it was written by a very senior individual...

the program is http://www.ka9q.net/code/dupmerge/
there are 200 lines of well commented C; however
there may be a bug which allocates too much memory
(one block per file); so my application runs out. :\
If you (anyone) can work it out and/or bring it into
rsync as a new feature, that would be great. Please
keep the author and myself in the loop!

// George


-- 
George Georgalis, information systems scientist <
-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: --hard-links performance

2007-06-22 Thread George Georgalis
On Tue, Jun 05, 2007 at 11:11:27AM -0700, Chuck Wolber wrote:
>On Tue, 5 Jun 2007, Paul Slootman wrote:
>
>> > In any case, what's the general consensus behind using the 
>> > --hard-links option on large (100GB and above) images? Does it still 
>> > use a ton of memory? Or has that situation been alleviated?
>> 
>> The size of the filesystem isn't relevant, the number of hard-linked 
>> files is. It still uses a certain amount of memory for each hard-linked 
>> file, but the situation is a lot better than with earlier rsync 
>> versions. (As always, make sure you use the newest version.)
>
>In our case, we store images as hardlinks and would like an easy way to 
>migrate images from one backup server to another. We currently do it with 
>a script that does a combination of rsync'ing and cp -al. Our layout is 
>similar to:
>
>image_dir
>| -- img1
>| -- img2 (~99% hardlinked to img1)
>| -- img3 (~99% hardlinked to img2)
>   .
>   .
>   .
>` -- imgN (~99% hardlinked to img(N-1))
>
>
>Each image in image_dir is hundreds of thousands of files. It seems to me 
>that even a small amount of memory for each hardlinked file is going to 
>clobber even the most stout of machines (at least by 2007 standards) if I 
>tried a wholesale rsync of image_dir using --hard-links. No?
>
>If so, then is a "hard link rich environment" an assumption that can be 
>used to make an optimization of some sort?

I had a C program which would scan directory trees and, on some
criteria (I forget exactly; size and mtime?), decide to unlink one
file and link the name to the other. I could look for it, but no
guarantees I'll find it, or soon... it was designed for identical
files with different names.

You could tar transfer, then minimize with the program. Of course
everyone on this list would prefer to use rsync; maybe the
algorithm could be integrated in? :) Maybe I can find the code.
It was written by a very senior individual...

// George



-- 
George Georgalis, information systems scientist <
-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: feature request, hardlink progress......

2007-02-22 Thread George Georgalis
On Thu, Feb 22, 2007 at 05:09:06PM +0100, Paul Slootman wrote:
>On Thu 22 Feb 2007, George Georgalis wrote:
>> >>
>> >>Please upgrade -- hard-link handling is much improved in newer versions.
>> >
>> >Thanks, turns out there are closer to 500,000 files and 89
>> >snapshots (@ ~90% of the files). The process was to install
>
>> > [EMAIL PROTECTED]:/root $ rsync --numeric-ids -avHP source:/data/ /data/
>> >receiving file list ...
>> >ERROR: out of memory in inode_table [receiver]
>> >rsync error: error allocating core memory buffers (code 22) at util.c(115) 
>> >[receiver=2.6.9]
>> 
>> I added more memory, same problem.
>> 
>> any recommendations to get around this? will the old version of
>> rsync use less memory to make all the hardlinks?
>
>Having done something similar recently (copied an archive with daily
>snapshots, unchanged files hardlinked) I can suggest the following:
>
>If your layout is something like this:
>/data/MMDD/ where this is a daily snapshot, you can do:
>
>$ rsync --numeric-ids -avHP source:/data/20060101/ /data/20060101/
>(if the first snapshot is 20060101 of course.)
>
>Then:
>$ cd /data
>$ x=20060101
>$ for i in 200601{02,03,...,31}; do
>>   rsync -avHP --link-dest=../$x source:/data/$i/ /data/$i/
>>   x=$i
>> done
>
>If you ensure that the (empty) snapshot directories are already on your
>new system, then you can replace the 200601{...} with 2006*, although
>you may need to make it skip the first one; I don't know what rsync will
>do with a link-dest that is the same as the target :-)

My snapshots are at 3 hours, daily, and weekly, and I weed them
out as they get old. I was planning to "find" all my directory
names, sort, and use a scripted approach like yours (sketched
below), but I really wanted an easier solution with less chance
of it not working correctly.
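
The find-driven variant I had in mind (untested; assumes one
snapshot per directory under /data, no whitespace in names):

  prev=
  for d in $(find /data -mindepth 1 -maxdepth 1 -type d | sort); do
      if [ -z "$prev" ]; then
          # first snapshot: plain copy
          rsync -avHP --numeric-ids source:$d/ $d/
      else
          # later snapshots: hardlink unchanged files to the previous one
          rsync -avHP --numeric-ids --link-dest=$prev source:$d/ $d/
      fi
      prev=$d
  done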

When I set the ulimit memory size and wired memory to unlimited,
I made progress, but it still failed.

In the end the tar cf - . | tar xpf - method worked, and quickly;
remember that old transport? I have some broken uid:gid to deal
with, but that's par for the course.
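
For reference, the full pipeline was along these lines; GNU tar's
--numeric-owner on the extract side would probably have avoided
the uid:gid breakage (hostname illustrative):

  ssh source 'cd /data && tar cf - .' | (cd /data && tar --numeric-owner -xpf -)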

cheers,
// George


-- 
George Georgalis, systems architect, administrator <
-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: feature request, hardlink progress......

2007-02-22 Thread George Georgalis
On Mon, Jan 15, 2007 at 01:39:53AM -0500, George Georgalis wrote:
>On Sat, Jan 13, 2007 at 08:43:55PM -0800, Wayne Davison wrote:
>>On Sat, Jan 13, 2007 at 10:25:42PM -0500, George Georgalis wrote:
>>> hours left.  Would it be straightforward to
>>> include progress when creating hardlinks?
>>
>>Please upgrade -- hard-link handling is much improved in newer versions.
>
>Thanks, turns out there are closer to 500,000 files and 89
>snapshots (@ ~90% of the files). The process was to install
>larger disks on primary host; the files went (push) from reiserfs
>to reiserfs with old versions of rsync (target: rsync version
>2.6.3pre1 protocol version 28; source: "debian 3.1"). Now,
>restoring (pull) from the same old version on the (reiserfs)
>source to ffs2 (netbsd 3.1) with rsync version 2.6.9 protocol
>version 29 (target).
>
> [EMAIL PROTECTED]:/root $ rsync --numeric-ids -avHP source:/data/ /data/
>receiving file list ...
>ERROR: out of memory in inode_table [receiver]
>rsync error: error allocating core memory buffers (code 22) at util.c(115) 
>[receiver=2.6.9]

I added more memory; same problem.

Any recommendations to get around this? Will the old version of
rsync use less memory to make all the hardlinks?

// George


-- 
George Georgalis, systems architect, administrator <
-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: feature request, hardlink progress......

2007-01-14 Thread George Georgalis
On Sat, Jan 13, 2007 at 08:43:55PM -0800, Wayne Davison wrote:
>On Sat, Jan 13, 2007 at 10:25:42PM -0500, George Georgalis wrote:
>> hours left.  Would it be straightforward to
>> include progress when creating hardlinks?
>
>Please upgrade -- hard-link handling is much improved in newer versions.

Thanks; it turns out there are closer to 500,000 files and 89
snapshots (@ ~90% of the files). The process was to install
larger disks on the primary host; the files went (push) from
reiserfs to reiserfs with old versions of rsync (target: rsync
version 2.6.3pre1, protocol version 28; source: "debian 3.1").
Now I'm restoring (pull) from the same old version on the
(reiserfs) source to ffs2 (netbsd 3.1) with rsync version 2.6.9,
protocol version 29 (target).

 [EMAIL PROTECTED]:/root $ rsync --numeric-ids -avHP source:/data/ /data/
receiving file list ...
ERROR: out of memory in inode_table [receiver]
rsync error: error allocating core memory buffers (code 22) at util.c(115) 
[receiver=2.6.9]

When I tried breaking up the source data into a directory 1/4 the
size, I got the same error.  But I realized I cannot preserve
hardlinks if I break up the /data/ directory.

There _are_ more memory slots on the target host. Is there any way
to adjust the rsync command to use less memory? Would it help to
update the sender rsync? Or do I have to restore each snapshot,
one at a time? :-}

// George


-- 
George Georgalis, systems architect, administrator <
-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


feature request, hardlink progress......

2007-01-13 Thread George Georgalis
I'm copying a partition that has a bunch of hardlink-based
snapshots (-aPH).  I think there are about 250,000 files
in each backup and between 100 and 200 snapshots.

Earlier today, I saw the files had completed and it
was making all the hardlinks. I thought it would be
"not long", but it's been making hardlinks for 12
hours (at least).

There's only 36 GB in a snapshot, so the data didn't take
too long; but now there's no progress indicator and who knows
how many hours are left.  Would it be straightforward to
include progress when creating hardlinks?

// George


-- 
George Georgalis, systems architect, administrator <
-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: rsync to completely mirror an entire machine

2006-02-06 Thread George Georgalis
On Sat, Feb 04, 2006 at 12:37:35PM -0700, Christer Edwards wrote:
>Is it safe to just rsync [remote]:/ [local];/ ?  Would the /dev or
>other folders cause issues with this?  Would it be safer to implement
>a more detailed rsync script excluding certain areas?

As already mentioned, there are some issues to watch out for:
/var/run, /proc, mail spools, and others. You are probably better
off with a reliable imaging & configuration management system, and
maybe a hot spare. The key word here is management: rather than
rsync'ing what the OS has written, image the system with the OS
and apply your current configurations and data from backup or a
repository, as required.
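
That said, if you do go the plain-rsync route, exclude the
volatile and pseudo filesystems explicitly. An illustrative
starting point (paths are a sketch, not a complete list):

  rsync -aH --numeric-ids --exclude=/proc/ --exclude=/sys/ \
      --exclude=/dev/ --exclude=/tmp/ --exclude=/var/run/ \
      remote:/ /local/root/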

// George


-- 
George Georgalis, systems architect, administrator <
http://galis.org/ cell:646-331-2027 mailto:[EMAIL PROTECTED]
-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: warnings on symlinks using link-dest

2006-02-06 Thread George Georgalis
On Sat, Feb 04, 2006 at 01:00:02PM -0800, Wayne Davison wrote:
>On Fri, Feb 03, 2006 at 10:00:27AM -0500, George Georgalis wrote:
>> rsync: symlink 
>> "/sawmill/backup/sawmill/snapshot/2006.02.03..01/sawmill/home/geo/comphome"
>>  -> "/sawmill/comp/home/geo" failed: File exists (17)
>
>That's a strange error that I haven't seen before.  The "File exists"
>part of the message comes from interpreting the errno returned from a
>symlink() call, so that's what the OS is telling rsync is going wrong.
>The call that rsync tried to make is symlink($2, $1), where $2 is the
>second name mentioned in the error (the referent) and $1 is the first
>name mentioned in the error (the symlink to create).  Rsync should
>remove anything that was in the way of the symlink (which you say should
>not be there, due to the directory starting out empty).  When an error
>occurs, ls the first name and see if something is there.  Perhaps there
>is a second rsync running and the two are trying to create symlinks for
>in the same places?

You guessed it! When I brought the interval from daily to @8hr, I
forgot to remove the @24h entry, so every midnight two copies run:

#0 */3 * * *    /usr/local/script/backup.sh
0 */8 * * *     /usr/local/script/backup.sh
0 0 * * *       /usr/local/script/backup.sh
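
Dropping the stale midnight line fixes it; a guard at the top of
backup.sh would also keep overlapping runs from ever colliding
(portable mkdir-as-lock sketch, lock path illustrative):

  # refuse to start if another run holds the lock directory
  if ! mkdir /var/run/backup.lock 2>/dev/null; then
      exit 0
  fi
  trap 'rmdir /var/run/backup.lock' EXIT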

Thanks!
// George


-- 
George Georgalis, systems architect, administrator <
http://galis.org/ cell:646-331-2027 mailto:[EMAIL PROTECTED]
-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: warnings on symlinks using link-dest

2006-02-04 Thread George Georgalis
On Fri, Feb 03, 2006 at 10:00:27AM -0500, George Georgalis wrote:
>Hi, I'm using rsync with link-dest to make snapshot like backups into
>
>/sawmill/backup/{hostname}/snapshot/{timestamp}/{root}
>
>I'm getting warnings that I don't understand...
>
>On Fri, Feb 03, 2006 at 05:00:01AM -, Cron Daemon wrote:
>>+ rsync --recursive --links --perms --times --group --owner --devices 
>>--numeric-ids --exclude '*.boot' --exclude '*.lock' --exclude '*.nobak' 
>>--exclude tmp --exclude /backup --exclude /distfile --exclude /offline 
>>--exclude /sandbox 
>>--link-dest=/sawmill/backup/sawmill/snapshot/2006.02.02.1600.01//sawmill 
>>/sawmill/ /sawmill/backup/sawmill/snapshot/2006.02.03..01//sawmill/
>>rsync: symlink 
>>"/sawmill/backup/sawmill/snapshot/2006.02.03..01/sawmill/home/geo/comphome"
>> -> "/sawmill/comp/home/geo" failed: File exists (17)
>
>If not apparent, this is the link being backed up:
>/sawmill/home/geo/comphome -> /sawmill/comp/home/geo
>
>I get lots of similar warnings, but I'm not sure if they are from
>all symlinks, exactly all the absolute symlinks or some other
>set. What is happening here?

Maybe I should clarify: each snapshot is written to a unique
directory, in this case
/sawmill/backup/sawmill/snapshot/2006.02.03..01, so I'm quite
certain ./2006.02.03..01/sawmill/home/geo/comphome did not exist
already, though rsync says it does.

rsync  version 2.6.3  protocol version 28

It's as if the error comes from following the link rather than
copying it... am I missing a required option?

// George


-- 
George Georgalis, systems architect, administrator <
http://galis.org/ cell:646-331-2027 mailto:[EMAIL PROTECTED]
-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


warnings on symlinks using link-dest

2006-02-03 Thread George Georgalis
Hi, I'm using rsync with link-dest to make snapshot-like backups into

/sawmill/backup/{hostname}/snapshot/{timestamp}/{root}

I'm getting warnings that I don't understand...

On Fri, Feb 03, 2006 at 05:00:01AM -, Cron Daemon wrote:
>+ rsync --recursive --links --perms --times --group --owner --devices 
>--numeric-ids --exclude '*.boot' --exclude '*.lock' --exclude '*.nobak' 
>--exclude tmp --exclude /backup --exclude /distfile --exclude /offline 
>--exclude /sandbox 
>--link-dest=/sawmill/backup/sawmill/snapshot/2006.02.02.1600.01//sawmill 
>/sawmill/ /sawmill/backup/sawmill/snapshot/2006.02.03..01//sawmill/
>rsync: symlink 
>"/sawmill/backup/sawmill/snapshot/2006.02.03..01/sawmill/home/geo/comphome"
> -> "/sawmill/comp/home/geo" failed: File exists (17)

If not apparent, this is the link being backed up:
/sawmill/home/geo/comphome -> /sawmill/comp/home/geo

I get lots of similar warnings, but I'm not sure if they come from
all symlinks, exactly the absolute symlinks, or some other
set. What is happening here?

// George


-- 
George Georgalis, systems architect, administrator <
http://galis.org/ cell:646-331-2027 mailto:[EMAIL PROTECTED]
-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: encrypted destination

2005-08-17 Thread George Georgalis
On Wed, Aug 17, 2005 at 01:12:42AM -0700, Wayne Davison wrote:
>On Mon, Aug 15, 2005 at 11:00:01AM -0400, George Georgalis wrote:
>> A lot of these posts 3 years old, is there plans or reasons not to
>> include [--source-filter / --dest-filter] in the main line code?
>
>That patch opens up a huge security hole in daemon servers, so that
>would have to be handled somehow (perhaps by making those options
>auto-refused).  There are also several bugs/deficiencies that would need
>to be fixed (my updated patch--see below--lists the ones that I saw but
>didn't fix).  Even if all that were done, I'd still be hesitant to add
>these options, though.
>
>I've just updated the several-year-old patch, and committed it into the
>patches dir:
>
>http://rsync.samba.org/ftp/unpacked/rsync/patches/source-filter_dest-filter.diff

Thanks for the update. It looks like there are still some issues
to resolve, even for in-house use. The goal still seems pretty
desirable, though.

Regards,
// George


-- 
George Georgalis, systems architect, administrator <
http://galis.org/ cell:646-331-2027 mailto:[EMAIL PROTECTED]
-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


encrypted destination

2005-08-15 Thread George Georgalis
In the archives I see the question about an encrypted destination,
and it's mostly answered with the --source-filter / --dest-filter
patch by Kyle Jones. There are also some proposed updates to the
patch.

A lot of these posts are 3 years old; are there plans, or reasons
not, to include them in the main line code?
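
For context, the sort of invocation the patch is after --
hypothetical, since the filter command and key name here are
illustrative and the exact semantics are whatever the patch
implements:

  rsync -a --dest-filter='gpg --batch --encrypt -r backup-key' \
      /data/ backuphost:/encrypted/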

// George


-- 
George Georgalis, systems architect, administrator <
http://galis.org/ cell:646-331-2027 mailto:[EMAIL PROTECTED]
-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html