Alternatively, you can just install your freshly compiled version of
rsync by issuing the following command :
sudo make install
Then when you want to use this version of rsync issue the following
command :
/usr/local/bin/rsync
This way you will not have any issues with system updates. In
It's a simple backup using rsync from a Windows 2k3 machine to a Linux
box out on the 'net, using pre-shared ssh keys. In this case I'm saving
files belonging to various users, using the administrator account on the
windows machine and the destination is a normal user account and its
I am using rsync to mirror all mailboxes to a backup server.
I have configured rsync to run every 1 min. We have around 50 mailboxes for
now.
You may want to consider that (depending on the systems) the rsync process may
take more than 1 minute to finish.
Ryan had a good suggestion.
The first backup is done to an empty destination. Then, each following backup
will be done to a freshly created empty dir with current date as name but
with --link-dest option pointing at the previous (latest) backup directory.
LBackup also allows you to quickly set up this kind of rotating hard
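The rotation described above can be sketched roughly as follows. The layout under BACKUP_ROOT and the source path are illustrative assumptions (not LBackup's actual scheme), and the rsync command is echoed rather than executed so the sketch is side-effect free:

```shell
# Each run gets a fresh date-named directory; --link-dest points at the
# previous (latest) backup so unchanged files become hard links there.
BACKUP_ROOT="${BACKUP_ROOT:-/tmp/backup-demo}"
SRC="${SRC:-/home/user/Documents/}"

today=$(date +%Y-%m-%d)
# Most recent existing backup, if any (date-named dirs sort correctly):
prev=$(ls -1 "$BACKUP_ROOT" 2>/dev/null | sort | tail -n 1)

mkdir -p "$BACKUP_ROOT/$today"

if [ -n "$prev" ] && [ "$prev" != "$today" ]; then
    link_opt="--link-dest=$BACKUP_ROOT/$prev"
else
    link_opt=""   # first backup: plain copy into the empty directory
fi

cmd="rsync -a --delete $link_opt $SRC $BACKUP_ROOT/$today/"
echo "$cmd"
```

Note that --link-dest paths are interpreted relative to the destination directory unless absolute, which is why an absolute path is used here.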
Rsync would then need an extra option to enable it to report such
deleted files, because I sure don't need that, nor do I need the extra
overhead that incurs.
I agree.
If deletion reporting is to become part of rsync (when invoked with the
--link-dest option), then I would
Although there is a small version difference in the rsync versions, I would
say that this is caused by the cygwin layer.
When using rsync over SSH you may find that the CPU load is substantial
depending upon multiple factors including but not limited to your processor,
disks and network.
Maybe you are getting an I/O error. By default, rsync skips all
deletion of destination files when it gets an I/O error on the source,
to avoid erroneous deletion of destination files if the I/O error caused
them to be omitted from the file list. (This is a very blunt measure
and making it
I am wondering if there is an option which is similar to --progress that will
not display the file names. A percentage indicator would be fine.
There is some related discussion at the following URL :
http://lists.samba.org/archive/rsync/2004-March/008998.html
Any replies regarding ways to
I am wondering if there is an option which is similar to --progress
that will not display the file names. A percentage indicator would be
fine.
See https://bugzilla.samba.org/show_bug.cgi?id=3784 .
Thanks.
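For what it's worth, later rsync releases (3.1.0 and newer, if available to you) added a whole-transfer progress meter that does roughly this: one updating percentage line, no per-file names. A sketch with illustrative paths, echoed rather than run:

```shell
# --info=progress2 (rsync >= 3.1.0) replaces per-file output with a single
# overall progress line; --no-inc-recursive makes the percentage steadier
# by building the complete file list up front.
cmd="rsync -a --info=progress2 --no-inc-recursive /path/to/src/ /path/to/dst/"
echo "$cmd"
```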
--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
The setup...
Transport is via SSH.
BoxA : Ubuntu 8.10
BoxB : Mac OS X 10.4.11
If I use rsync v2 to transfer data from BoxA to BoxB everything works fine.
However, if I use rsync v3 to transfer the data, the following error is
reported :
rsync: mkstemp
Is there a way to have rsync not treat files that have gone
missing (e.g. Maildir messages that have been removed) as errors
while indicating I/O errors and such?
http://www.samba.org/rsync/FAQ.html#10
--
This email is protected by LBackup, an open source backup solution.
http://www.lbackup.org
Basically: the problem occurs backing up a directory to a local mounted
network volume.
If you are using the -a flag then you may have problems preserving the
permissions of files on the mounted network directory. This will greatly depend
upon your setup and that of your users. Unfortunately, I do
I am a developer on the LBackup project.
An LBackup user recently posted a question to the mailing list asking about the
following error.
rsync: unpack_smb_acl: sys_acl_get_info(): Unknown error: 0 (0)
Link to thread :
I have further information regarding the version and the way rsync on these
systems was compiled. Installation and compilation was performed by using the
following instructions : http://www.bombich.com/mactips/rsync.html
I have reproduced this issue (no file path reported) with rsync 3.0.7.
Description of problem :
On a Mac OS X system, when rsync attempts to preserve ACLs for a user who is
not on the system for a file system object, there is no report of which
directories/file(s) were unable to have the ACLs preserved.
Procedure to reproduce the
An LBackup user recently posted a question to the mailing list asking about
the following error.
rsync: unpack_smb_acl: sys_acl_get_info(): Unknown error: 0 (0)
Is it possible to place a feature request to have the error reported
by rsync also list the path to the problematic file /
I want to sync /etc /home and /usr/local (and all files/dirs beneath) to
REMOTEHOST:/dest
There are some huge files in /home which I want to exclude.
I have set up an include/exclude file but I still get too many files. Also,
the excluded file is synced.
/usr/bin/rsync -av
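The exclude-file contents in the original post are not shown, so the following is only a guessed sketch of how such a file might look. Patterns without a leading slash match anywhere in the transfer, and a trailing slash restricts a pattern to directories; the file names here are illustrative, and the rsync command is echoed rather than run:

```shell
# Illustrative exclude patterns (not taken from the original post):
cat > /tmp/demo.exclude <<'EOF'
*.iso
*.vdi
Videos/
EOF
# -n makes it a dry run, useful while adjusting the pattern file:
cmd="rsync -avn --exclude-from=/tmp/demo.exclude /etc /home /usr/local REMOTEHOST:/dest"
echo "$cmd"
```

Running with -n and inspecting the file list is usually the quickest way to see why a pattern is (or is not) matching.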
I seem to have been successful in copying files from one of the domains to
the other using the following:
rsync -auvn /path/to/source/directory /path/to/destination/directory
I can only assume everything new/changed was copied as I now need the mysql
database copied from one domain to the
I am involved with the development of LBackup http://www.lbackup.org
Recently there was a post to the lbackup-discussion mailing list.
http://tinyurl.com/lbackup-discussion-device-not
Within the post they reported the following error:
rsync: read
As you might have guessed, that error is coming from the filesystem and
is not rsync's fault; copying the file with cp should have similar
chances of incurring the error. The symbolic name for errno 6 is ENXIO,
and the Mac OS X read(2) man page documents two cases in which it
occurs:
I'm really confused with all the examples out there and all different types
of incremental backups. I tried several scripts but cannot reduce the size of
my backup folders. What I want is to backup my documents to my external drive
every month and save as much disk space as possible.
I have a volume at 192.168.0.2 on my local network. I'd like to rsync the
entire volume to a backup volume, skip all files already present on the
backup volume (same name and the same or earlier timestamp)
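Assuming the volume is reachable over SSH or as a mounted path (the post does not say which), rsync's quick check already skips files whose size and modification time match; adding -u additionally skips files that are newer on the backup side. A sketch with illustrative paths, echoed rather than run:

```shell
# -a: archive mode; -u: skip files that are newer on the receiving side;
# -n + -i: dry run with an itemized change list, to review before the
# real transfer.
cmd="rsync -auni user@192.168.0.2:/volume/ /backups/volume/"
echo "$cmd"
```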
Firstly, I would recommend that on both machines you compile and install a copy
of the
same name and the same or earlier timestamp
Would you please provide some clarification on your desired result in the
following situation.
file_a : /Volumes/src/file.txt
file_b : /Volumes/dst/file.txt
If file_b has a newer modification time than file_a, will you want file_a to
overwrite
Right. That's why I need rsync. I'm really not looking for a utility, but a
command line program which will do incremental backup, and rsync is the only
one I'm aware of that does that in Mac OS.
There are also some other options available. Although, I have not tested them
on Mac OS X 10.6.
Sorry, I'm only going to be doing this a single time. I don't really need a
script, just a simple command
I urge you to read this great write-up, Cloning Mac OS X disks, by Mike Bombich
: http://www.bombich.com/mactips/image.html
If you have access to a test computer system running Mac OS
Sorry, I'm only going to be doing this a single time. I don't really need a
script, just a simple command which will preserve all the flags.
Also, another question, what kind of data is on this volume that you are
backing up?
[What I am trying to achieve is to backup (part of) my home directory to a
backup server such that
a) I have a reasonably up-to-date copy (within a day if I do it overnight) of
my current state
b) If I have deleted or updated a file the old version of it gets placed into
a special snapshot
Reversing --backup logic:
Currently, if --backup is used (with --backup-dir), a copy of each existing
file that is replaced during the rsync will be placed there. Is there a way to
keep the original copy (i.e. the base) the same, but just place whatever has
changed in a different location? Taking backup to a
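For reference, the existing (non-reversed) behaviour being described looks like this: the destination stays a current mirror, while replaced or deleted versions land in a date-stamped side directory. Paths are illustrative and the command is echoed rather than run:

```shell
stamp=$(date +%Y-%m-%d)
# Files about to be replaced or deleted in /dest are moved into the
# date-stamped backup dir instead of being overwritten or lost:
cmd="rsync -a --delete --backup --backup-dir=/backups/changed-$stamp /src/ /dest/"
echo "$cmd"
```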
Aaah, thanks for the insight. rsync must transfer the file regardless of the
link count, but it also takes note of the missing link. So, it probably
considers an inode with multiple links resolved only after it finds a
brother/sister link and deletes the duplicate.
You may also be
I really want to put the logic in the script so it is easy to bring another
backup location online.
If you have shell access to the destination system from your backup script then
one option may be to issue 'mkdir -p' via ssh.
Creating the directories manually on the destination
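A minimal sketch of that suggestion; the host and path are placeholders, and the command is echoed rather than executed:

```shell
dest_host="user@backuphost"          # placeholder
dest_dir="/volume1/backups/laptop"   # placeholder
# -p creates intermediate directories and succeeds if they already exist:
cmd="ssh $dest_host mkdir -p $dest_dir"
echo "$cmd"
```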
Below is a link to a script (which currently only supporting Mac OS X) which
will synchronize a sparse bundle image to a remote server.
Sorry I forgot the link in the previous email.
- http://www.lbackup.org/synchronizing_disk_images_between_machines
Finally, yes I would be very
I did think about remotely executing a mkdir before the backup, but one
blocker is that I will be using Thecus NAS boxes as some off-site locations
and I don't have shell access.
You could mkdir the directory locally somewhere (anywhere), and rsync
just that directory to the remote side,
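That trick can be sketched like this: build the needed directory tree (empty) under a local staging area, then rsync only that tree, so the NAS ends up with the directories without any shell access being required. The remote host and paths are placeholders, and the rsync command is echoed rather than run:

```shell
stage=$(mktemp -d)
mkdir -p "$stage/backups/laptop/daily"
# -a preserves the tree; since the staged tree contains no files, only
# the directories themselves are created on the remote side:
cmd="rsync -a $stage/ user@nas:/volume1/"
echo "$cmd"
```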
Interestingly, I tried to see if something was wrong with my statements by
doing:
mkdir ~/rsynctest/dir1
mkdir ~/rsynctest/dir2
mkdir ~/rsynctest/dir3
nano ~/rsynctest/dir1/file1 (wrote the line hello world and saved)
nano ~/rsynctest/dir1/file2 (wrote the line hello and saved)
cp
Can I just list them one after the other like so in crontab -e
You may wish to also consider using a single rsync command :
rsync -auv --delete \
/Godfather/Documents \
/Godfather/Setups \
/Godfather/Pictures \
/Godfather/Backups \
/Godfather/Videos \
/Backup # backup destination
With this
the -i shows that most are being copied due to time differences, so in
theory -t should work? This does in fact work on a little test setup
on my work laptop, I will test it properly when I get home tonight.
I am glad that the -t option is working with your test.
If you decide to use
Are you specifying these lines in an excludes file?
yes
Checking file size makes sense, but how does rsync check times? If a file is
copied from one side to the other remote side, the time will be
different, right?
Sorry, wrong question; the copied file should be able to preserve the mtime.
If the rsync --times option is used then rsync will attempt to
Is there a way to know from the rsync logs how long it took to do a backup?
The only timing info I see is this at the end:
sent 3067328 bytes received 7853035429 bytes 1187888.83 bytes/sec
total size is 1559866450336 speedup is 198.55
The latest alpha release of LBackup supports new
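Upstream rsync's summary reports throughput but not elapsed wall-clock time, so one simple approach (a wrapper-script workaround, not an rsync feature) is to record timestamps around the run yourself:

```shell
start=$(date +%s)
sleep 1               # stand-in for: rsync -a /src/ backuphost:/dest/
end=$(date +%s)
echo "backup took $((end - start)) seconds"
```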
readlink_stat(/home/morgan/.gvfs)
failed: Permission denied (13)
I believe the error above is a problem with the rsync user not being able to
read the file
/home/morgan/.gvfs. I suggest that you check the permissions on this file and
also check that the sudo command is in fact granting
Hey all, I'm writing a tool:
http://www.subspacefield.org/security/hdb/
This is an interesting approach. I look forward to seeing the project develop.
Thanks for the link.
I am using rsync to backup files. Is it possible to specify an option
not to delete files from backup directory if the files are deleted
from source? In that way, I can always keep something I may or may not
need in the backup disk, but remove it from a work computer.
As Alex mentioned the
On another note I'm getting an error:
copyfile(.._..v6AxMh,./., COPYFILE_UNPACK) failed:13
When trying to run with -E.
Upstream rsync has no such error message. If you're using the
Apple-modified rsync, we don't support it here.
You may wish to compile a version of rsync from
I know that there are external ways to do this, but I thought it would be
fastest and best as an rsync feature. I've been using rsync for a while now in
some fairly advanced backup schemes and I'm extremely impressed by it. I was
thinking it would be quite helpful if there were an option to
I have had reports of problems with the -R option on OSX 10.6.4.
Just tested it myself and found this odd result:
When I run this
dtruss -f path/to/rsync -aHAXNR --fileflags --force-change
--protect-decmpfs --stats -v /Users/astrid/Documents/main.m
/Users/astrid/Desktop/rrr
it
well, when I tell rsync to compare my home directory with itself, it reports
many differences.
I'm using rsync version 3.0.7 protocol version 30.
I was hoping to use it to verify my backup.
What kind of differences are reported?
Also, are you using the --checksum option for
I think an important question is what kind of differences you need to check
with regard to the integrity of your copy / backup.
In addition, I suggest that you actually use rsync to copy the data to a
separate directory and then compare two different directories. The reason I
suggest this is
We use rsync to copy files and directories from one server to the other. What
options should I give to rsync so that it only copies the modified files? For
example server1 may contain a dir which contains just one file that has been
modified, how do I get rsync to copy just that one file
I have a stand alone system with two drives / and /home on one and /var
on the other. I'd like to back up the complete system to a USB drive.
I've tried dd however since it copies over everything, even empty space
it didn't seem very practical since for instance root only has 10GiB out
I posted this to a couple of online forums already and am already doubting
anyone
will be able to directly help me solve my problems. So I am here to query
the experts directly:
I have a number of rsync clients trying to connect to an rsync server
routinely, and they're intermittently
I'm still struggling to get just the directory(ies) that I want...
You may find this http://tinyurl.com/rsync-exclude-all-include-some post to
the LBackup mailing list helpful.
The example listed (link above) revolves around specifying the root directory
as the source and then specifying a
I'm looking for a way to deliberately copy a large directory tree
of files somewhat slowly, rather than as fast as the hardware
will allow.
Just do it to localhost - that way it's still a network connection, and
you can use --bwlimit. Also, you could try nice to lower the
priority rsync
Doing nightly backups from cron is an *extremely* common use-case of rsync,
and
telling each and every user of rsync that they should either A) write their
own
shell script to filter and discard error messages, or B) play a constant game
of whack-a-mole chasing down new --exclude options
Hello,
Recording the output (standard error and standard out) to a log file will help
with investigating this issue if it is reproducible. Perhaps another person on
this list will provide a better suggestion.
In addition, perhaps adding some additional debugging information before and
after
I have been using rsync on my laptop for some time to safeguard an
extensive image collection.
My usage has been: .. rsync -av --delete ~/Pictures /media/disk
I have found this exceptionally fast ... just as advertised!
I now have set up a new computer system where I want to back up my
I want to make a full disk image backup of my disk with rsnapshot/rsync that
I can restore on a new disk.
Part of my /etc/rsnapshot.conf looks like follows:
exclude /proc
exclude lost+found
exclude /media
exclude /sys
exclude /dev
exclude /tmp
exclude /dev
backup
I'm using rsync 3.0.7 on Mac OS X 10.6, compiled according to Mike
Bombich's instructions at http://www.bombich.com/rsync.html. Rsync
repeatedly exits with a protocol data stream error when trying to copy
some com.apple.FinderInfo extended attributes. While testing this issue,
I found that
I am involved with the development of lbackup. This message to the rsync
mailing list is related to the following thread on the lbackup-discussion
mailing list : http://tinyurl.com/lbackup-discussion-diskfull
Essentially, I am curious whether anyone using rsync 3.0.7 on Mac OS (10.6)
Server
If pushing data (e.g. a local copy or copy from local to remote), there is a
failure where the receiver can try to report an error, die, and the sender
gets the error trying to write to the receiver before it gets the error
message from around the horn (it would have gone to the generator,
I am using rsync version 3.0.7 on an arm linux based embedded device. The
device pulls data periodically from a rsync server and stores the files on an
SD card. The partial, temp and final rsync destinations all reside on the SD
card.
I came across an issue where it seems that the rsync
+2! :)
On UNIX, I am executing an rsync command, from within a script. The command
goes something like this:
/usr/bin/rsync --verbose --progress --stats --compress --recursive --times
--perms --links --safe-links source_dir/
user@target_machine:/parent_path/source_dir
In other words, I
I'm using rsync to do an incremental backup of my desktop here, to a
remote server as follows:
#!/usr/bin/bash
old=$(date -d 'now - 1 week' +%Y-%m-%d)
new=$(date +%Y-%m-%d)
rsync -avP --delete --link-dest=../$old /home/bakers
bak...@perturb.org:/home/bakers/backup/$new/
This is
Hi Chris,
https://github.com/StarsoftAnalysis/brandysnap
I am involved with the LBackup project. Would you be okay with a link being
generated to the Brandysnap GitHub page from the LBackup alternatives
http://www.lbackup.org/alternatives page?
Depending upon the license you release Brandysnap
Hi Chris,
https://github.com/StarsoftAnalysis/brandysnap
I am involved with the LBackup project. Would you be okay with a link being
generated to the Brandysnap GitHub page from the LBackup alternatives
http://www.lbackup.org/alternatives page?
Yes, that would be great.
Done.
Which operating system are you running on the system which currently has
your personal documents?
Ubuntu/Debian
Do you know if there is a virtual file system encryption system which breaks
the data into bands? I know that Mac OS X has support for breaking an encrypted
disk image into
In general, --link-dest works as expected *only* if the destination directory
is empty.
Going along with this idea of ensuring the destination directory is empty.
LBackup is an rsync backup wrapper system which may be of assistance in this
regard.
---
Just to clarify, is the destination file system NTFS?
Also, is this storage directly attached or is it mounted via the network?
Finally, you mentioned it is mounted read-only would you please clarify why the
drive you are writing to is mounted in this way?
Thanks.
Henri
I'd like to use rsync as an efficient (== do not store the same file twice at
the backup media) backup solution. The backup should be made into N remote
directories (rotating each day) _without_ the need to delete the remote
directory before.
This is essentially, what LBackup is doing.
It sounds like you missed the point of Kevin's message (in the other fork of
this thread). The point wasn't to use
`du`, it was that you can run your stats against the backed-up files, not
the source. Then you're only running stats
against the results of running the backup using the
SHA1
You may also find fingerprint to be a useful tool. However, it will require
ruby.
Hello,
Is your output on this problematic network similar to the following :
$ telnet mirrors.usc.edu 873
Trying 68.181.195.4...
Connected to hpc-mirror.usc.edu.
Escape character is '^]'.
@RSYNCD: 30.0
Also, what happens if you add -n (--dry-run) to the following command :
rsync -artv
Hello,
It seems that the issue is to do with the network. However, I would suggest you
try with a different computer system to be sure. Additional notes follow :
--
server: iperf -s -D -p 873
(problematic network) client: iperf -c iperf.server.com -p 873
Hi Pedro,
Mmh, I put uname -a into the cron job and it says:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
Having built and tested 3.0.7 and ready to send it out into production, can
anybody point me to 'best practices' for updating the binary and man pages
and other issues around upgrading from the dodgy v2.6.9 that ships with
late-10.4-thru-10.7?
You could use /usr/local/bin as the install
I downloaded 3.0.9 source + patches, and applied the 2 patches:
patch -p1 <patches/fileflags.diff
patch -p1 <patches/crtimes.diff
And although running with the appropriate arguments (as in the original
report), the comments still do not get synced!
Help please!
I believe this issue has been
Hi,
I'm trying to make a small script to get rid of Apple's Time Machine. The aim
is to backup the files of my company.
I setup a MacMini with a lot of storage attached to it. The MacMini connects
every once in a while to our data server (XServe) through SSH and pulls the
files that
I am trying to use rsync to copy files from several origins to a single
destination. I would like to detect when a file gets overwritten because it
is present with the same relative path/name in several origins:
Origin1:
foo/bar.txt
Origin2:
foo/bar.txt
Destination after copy:
- - laptop-to-server: the laptop tends to roam a lot, network conditions
vary from place to place, and the rsync will be initiated manually, so
it is likely to be a `push' operation, the ssh keys will definitely need
to be encrypted and unlocked manually, but even with that security, I
want
On 12-01-20 06:01 PM, Kevin Korb wrote:
Someone has requested it:
https://bugzilla.samba.org/show_bug.cgi?id=7870
I'm not really sure that is the same bug. Maybe it is. Not convinced
though. I guess I can file my own bug and ask 7870's OP to see if it's
the same issue.
But it seems
rsync -Ha --link-dest=/media/4tb/bak/panic-2012-01-01
/media/2tb/bak/panic-2012-02-01 /media/4tb/bak/
It seems that you may be attempting to hard-link between two file systems. One
is mounted as /media/4tb, the other as /media/2tb.
My understanding is that it is not normally
I'm also hit by this issue. Is there any prediction as to whether and, if so,
when something is going to happen? :)
Until such time as the situation with --link-dest changes, consider syncing
into an empty directory (if at all possible).
If you are using rsync for backup then you may wish
Wow! Thanks for making it so easy. I will try that asap.
If you do not have any luck with the patched version of rsync there are various
projects which spring to mind which offer this kind of functionality.
However, I would suggest that rsync is the most stable project I have ever seen
Push is when you run your backup program (rsync and whatever script)
on the machine being backed up and you push/upload your data to the
backup system.
Pull is when you run your backup program on the backup system and
pull/download the data from the machine being backed up.
You may also
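In rsync terms the two directions described above look like this; the hosts and paths are placeholders, and the commands are echoed rather than executed:

```shell
# Push: run on the machine being backed up; data flows to the backup host.
push_cmd="rsync -a /home/me/ backuphost:/backups/me/"
# Pull: run on the backup host; data is fetched from the machine.
pull_cmd="rsync -a me@laptop:/home/me/ /backups/me/"
echo "$push_cmd"
echo "$pull_cmd"
```

Pull setups are often preferred for backups because the backup host holds the credentials, so a compromised client cannot tamper with its own history.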
You may be interested in having a look at LBackup http://www.lbackup.org, an
open source (released under the GNU GPL) backup system.
Essentially, LBackup is a wrapper for rsync. If you are working on your own
script, feel free to look at how LBackup works (primarily written in bash at
present)
Hello,
I was unable to reproduce this issue on an OS X system with rsync 3.0.9
(compiled from source) or 2.6.9 (default on 10.8.2). Output from the testing I
performed is below.
snip
╭─henri@mac /tmp/test_rsync_2.6.9_osx/src ‹› 13-04-07 - 0:41:08
╰─$ dd if=/dev/urandom of=source_file
Hello,
Just to clarify, are the file(s) only left when using rsync to transfer to a
non-root partition? The reason I ask is that the tests performed (as quoted in
the previous email) were carried out on the root '/' partition. I can easily
retry on another volume (network mount or locally
Use an ssh key instead.
To get started, the following URL offers introductory information :
http://www.lbackup.org/network_backup#creating_ssh_keys_for_testing
Hope this helps.
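A minimal key setup might look like the following. The hostnames are placeholders and the commands are echoed rather than executed; ssh-copy-id ships with OpenSSH on most systems, and appending the public key to ~/.ssh/authorized_keys by hand works where it does not:

```shell
# Generate a dedicated key (an empty passphrase is only appropriate for
# unattended jobs; otherwise use an agent):
echo 'ssh-keygen -t ed25519 -f ~/.ssh/backup_key -N ""'
# Install the public key on the server:
echo 'ssh-copy-id -i ~/.ssh/backup_key.pub user@backuphost'
# Point rsync at the key:
cmd='rsync -a -e "ssh -i ~/.ssh/backup_key" /src/ user@backuphost:/dest/'
echo "$cmd"
```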
The solution is not to refuse to backup any file that is a hard link.
There are legitimate reasons to have hard links and ignoring them
means you aren't backing up everything.
I agree that preserving hard links may be important in some situations. There
are certainly legitimate reasons to
Hello,
One approach is to back up to a disk image on Mac OS X (.sparsebundle) and then
to push or pull the disk image over to your remote GNU/Linux system (possibly
via rsync). LBackup has a scripting sub-system to handle exactly this kind of
situation. It is not as fancy as the bug fix you
When I try to run a scheduled backup I get the following error:
Error in socket I/O (code 10)
How do I fix this error?
Are you connecting to a remote system to pull or push data? If so are you able
to ping the remote system or connect to that system on the specified port using
telnet?
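The checks suggested above might look like this; the host is a placeholder, and the commands are echoed so the sketch has no network side effects:

```shell
host="backup.example.com"   # placeholder
echo "ping -c 3 $host"
# 873 is the default rsync daemon port; use 22 instead if going over SSH:
echo "telnet $host 873"
# nc is a common alternative where telnet is not installed:
echo "nc -vz $host 873"
```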
Anyway, if they care about their data, they use checksumming for storing
their data on disk, do they? ;) snip silent bitrot on disks _does_ happen
I totally agree. Storage devices fail and if you need to know if the data is
the same then a checksum is your best bet. If you want to do your
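Two ways to act on that: ask rsync to compare whole-file checksums instead of size+mtime, and keep an independent checksum manifest to catch later bitrot. A sketch; the paths are illustrative, the rsync command is echoed, and the manifest part runs on a throwaway file (sha1sum is from GNU coreutils, an assumption about your system):

```shell
# -c forces whole-file checksum comparison; -n -i lists any differences
# without transferring anything:
cmd="rsync -acni /src/ /dest/"
echo "$cmd"

# An independent manifest catches later, silent corruption:
printf 'hello\n' > /tmp/demo_file
sha1sum /tmp/demo_file > /tmp/demo_file.sha1
sha1sum -c /tmp/demo_file.sha1   # prints "/tmp/demo_file: OK"
```

Note that rsync -c re-reads every file on both sides, so it is far slower than the default quick check and best reserved for verification runs.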
Every year or two I get stuck on this same problem involving
excluding.
This reply assumes that you would like to backup specific directories via the
excludes file, which I believe is what you are attempting to accomplish.
Take a look at the information at the following URL :
Hello,
You may want to check out the LBackup source code as a possible starting point
if you are looking to create your own customised backup script :
http://www.lbackup.org/source
Disclaimer : I am involved with the development of LBackup.
Hello,
The --link-dest approach is the way LBackup works, and this is also what allows
the lcd command (part of LBackup) to move you back and forth in time while you
remain in the same relative directory. This makes finding particular data quite
easy during a partial restore.
In
Hello,
Guessing the source drive is formatted HFS+ or HFS+J. What is the file system
on the destination drive? I know that rsync is capable of correctly copying over
file and folder names on Mac OS X which contain non-ASCII characters. Hence my
question relating to the destination file system.
On 12/09/2014, at 12:40 PM, LuKreme krem...@kreme.com wrote:
On 11 Sep 2014, at 16:22 , Henri Shustak henri.shus...@gmail.com wrote:
Guessing the source drive is formatted HFS+ or HFS+J.
The source drive is a Drobo
When trying to sync my TV folder to a mirror drive, episodes with non-ASCII
characters in them cannot be processed by rsync. Anything I can do about this?
(Q1) What do you mean by a mirror drive? Is this a RAID1 external enclosure or
some sort of softRAID? Or is it just a copy of the TV
Hello again,
Three further questions :
(Q2-1) Have you tried this same setup when booted from another OS (e.g. 10.9)?
(Q2-2) Have you tried this using some other hardware?
(Q2-3) Have you tried copying this specific file over using the Finder (say to
the Desktop) and then using rsync to copy