Hello,
first of all, I'm a bit surprised this hasn't been requested before - or did
I miss something?
When updating an existing file, the receiving rsync (at least in
general) creates a copy of the existing file and does all writing there
before eventually discarding the old file. This approach is fine
Hello,
There's a certain use case where rsync cannot operate as bandwidth-
efficiently as possible. So I was wondering whether rsync could
be extended (albeit not trivially) to a more content-addressable way
of operation, or whether there's an application around that could
serve me better.
Adam Nielsen wrote...
> I'm wondering whether it is feasible to have an option that will make
> rsync spawn a separate thread to close files it has created, to avoid
> the main process blocking while the destination files are flushed during
> the close operation?
While your scenario resembles a
Wayne Davison wrote...
> On Tue, May 3, 2016 at 1:07 PM, Christoph Biedl <cbi...@gmx.de> wrote:
>
> > + *.git/.git/*
> > - *.git/
> >
>
> From the man page near the start of the "INCLUDE/EXCLUDE PATTERN RULES"
> section:
>
>
Hello,
Since the very first day I've been using rsync - some 15 years ago -
the filtering rules have caused great grief. Their behaviour is just not
the way I'd expect it to be from reading the manpage. Usually I end
up with some hand-written recipes, carefully documented, including all
the
Marian Marinov wrote...
I've been using rsync on some backup servers for years. In 2011 we
had a situation where the FS of the backup server was behaving
strangely: even though there was enough available I/O, the fs (ext4 on a
16TB partition with a lot of inodes) was lagging. After much testing
we
Joe wrote...
This is way beyond my level of expertise, but wouldn't something like
ionice help with that?
Although I'm not Marian, probably not. The ionice program does a
reasonably good job at prioritizing read operations. The
context makes me guess this is rather about writing.
francis.montag...@inria.fr wrote...
You can achieve that already with the -F option of rsync. Create a
.rsync-filter file in the directories you want with the following
content:
+ .rsync-filter
- *
Close enough for my needs. Initially I had some trouble getting this
to work, especially
Hi there,
there's an old proposal to exclude a directory and its subdirectories
from being backed up and the like, by placing a file named
CACHEDIR.TAG into it with a certain content; see [*] for details.
rsync lacks support for that and I was wondering why. Unless there are
strong reasons
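In the meantime, one can approximate that behaviour outside of rsync. A sketch, assuming GNU find and made-up paths; the signature line is the one from the Cache Directory Tagging specification:

```shell
#!/bin/sh
# Hypothetical workaround: collect directories carrying a valid
# CACHEDIR.TAG and feed them to rsync as anchored excludes.
src=/path/to/src
find "$src" -name CACHEDIR.TAG \
    -exec grep -q '^Signature: 8a477f597d28d172789f06886806bc55' {} \; \
    -printf '%h\n' |
  sed "s|^$src|/|; s|//|/|" > /tmp/cachedir-excludes
rsync -a --exclude-from=/tmp/cachedir-excludes "$src/" /path/to/dst/
```

The grep step matters: per the proposal, only directories whose tag file starts with that exact signature line should be skipped.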
Ming Zhang wrote...
I wonder if rsync can have an option so that when it scans the file system
tree and accumulates N files, it processes these files before
scanning further.
I'd suggest running rsync in each directory, without the --recursive
option. But this should happen within rsync
Paul Slootman wrote...
Can you give the details of whatever gives you that impression?
FWIW I had similar observations in the past but was too lazy to report
it; and now I cannot reproduce it using rsync 2.6.9.
Is there a chance that bwlimit changed its behaviour in the last two
years?
feenster wrote...
Is this possible with Rsync?
Seems like you're looking for the --backup option.
Had the same question several months ago.
Christoph
Tim H wrote...
what about scripts running every 30 seconds on each machine,
that's lighter than rsync just to compare..
eg.
Server1
ls -lR /* > ~/files1
scp files1 SERVER2:~
Server2
ls -lR /* > ~/files2
(do a diff command here on files1 vs. files2)
(if different,
[EMAIL PROTECTED] wrote...
How would you set a cron to run every 30 seconds? Otherwise it could work
for me.
With a start every 30 seconds you run a high risk of an overrun.
Don't use cron; use a simple shell script with while true; and
sleep 30.
But believe me, this is a bad idea.
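For completeness, the suggested loop would look like this (the rsync invocation is a placeholder, and the caveat above stands):

```shell
#!/bin/sh
# Back-to-back runs with a fixed pause. Because each rsync finishes
# before the next sleep starts, runs never overlap - unlike cron
# firing on a fixed schedule regardless of how long a run takes.
while true; do
    rsync -a /path/to/src/ remotehost:/path/to/dst/
    sleep 30
done
```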
Julian Pace Ross wrote...
The idea is that data stored on the remote server would be unreadable to the
people on that side, but can be decrypted when rsyncing back to the local
server in case of data loss.
In that case encryption will have to take place before transmission
anyway. Else you do
Julian Pace Ross wrote...
I'm interested to hear feedback on this, since I was intending to backup a
mySQL database 'on the fly' daily...
If you can afford to shut down mysqld during the backup, rsync is not a
problem. I do a daily mysqldump(*) and back up these dump files only.
A completely
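A sketch of that dump-and-copy routine; database name, paths, host, and the mysqldump options are all assumptions, not what the footnote refers to:

```shell
#!/bin/sh
# Dump first (a consistent snapshot of the database), then let rsync
# move only the dump files. All names below are placeholders.
day=$(date +%F)
mysqldump --single-transaction exampledb > /var/backups/exampledb-"$day".sql
rsync -a /var/backups/ backuphost:/srv/backups/
```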
Christer Edwards wrote...
Is it safe to just rsync [remote]:/ [local]:/ ? Would the /dev or
other folders cause issues with this? Would it be safer to implement
a more detailed rsync script excluding certain areas?
If you're using Linux (you didn't show or tell) you should indeed
exclude
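The reply is cut off, but the pseudo-filesystems are the usual suspects. A sketch of such an exclusion list (the exact set depends on the distribution; the destination path is made up):

```shell
# Pseudo-filesystems and volatile data are recreated at boot,
# so they should not be mirrored.
rsync -a --delete \
    --exclude=/dev --exclude=/proc --exclude=/sys \
    --exclude=/tmp --exclude=/run \
    remote:/ /local/mirror/
# Alternatively, -x / --one-file-system stops rsync at mount boundaries.
```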
Quinn Snyder wrote...
I am aware of rsync
being able to run over SSH, however I need this to be totally automated (SSH
opening, passwords sent and synchronized, files copied, and connections
broken).
If I understand correctly your problem is the interactive ssh
authentication. You can
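The reply is truncated, but the usual route is public-key authentication without a passphrase. A sketch with made-up host and file names; in practice a dedicated, restricted key is advisable:

```shell
# One-time setup: create a key without a passphrase and install it remotely.
ssh-keygen -t ed25519 -N '' -f ~/.ssh/rsync_key
ssh-copy-id -i ~/.ssh/rsync_key.pub user@remotehost

# From then on rsync can open the connection unattended:
rsync -a -e 'ssh -i ~/.ssh/rsync_key' /path/to/src/ user@remotehost:/path/to/dst/
```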
The recent security update for sudo (DSA 946-1) introduced a significant
change: the environment is cleared much more thoroughly than before. As
one result, SSH_CONNECTION is not available any more.
Due to this my installations stopped working as they use rsh (as
unprivileged user) to log in, sudo
Phil Howard wrote...
I have some very large directories I'd like to synchronize. Total time to
scan through these millions of files is a substantial portion of an hour
or even exceeds it. It just goes slower if these large critical time blocks
have to be done sequentially.
Yeah, same here.
Dan Crosta wrote...
Is there any way to make this work?
What I'm doing: rsync (several options) --delete --backup \
--backup-dir=(absolute path out of $dest) \
$src $dest
and backup-dir gets archived afterwards. In your case you'd rename it
to represent the last days' changes.
Manuel López-Ibáñez wrote...
I vote for this feature. In du and df commands, this is invoked with:
-h, --human-readable
print sizes in human readable format (e.g., 1K 234M 2G)
--si likewise, but use powers of 1000 not 1024
Currently, in rsync, -h is the short form of --help.
Arkadiusz Miskiewicz wrote...
rsync 2.6.6 on both sides. Linux 2.6.10 on receiving side, 2.6.12.6 on
sending side.
Did you activate netfilter on either side, i.e. does iptables -vnL
show non-empty chains?
If yes, does the problem still exist if you clean _all_ chains, even in
the nat and
Hi,
doing a rather usual backup:
| rsync -av --delete /path/to/sender/ /path/to/receiver/
some items might be removed or overwritten on the receiving side.
I'd like to keep a copy of these files/links/whatever. Is there a way to
create a copy of them in another tree right before they are purged?
ches wrote...
What I really want is to tell rsync to desist the mirroring after a
certain amount of time. It's ok with me if it takes a few nights to
bring the mirrors back into alignment.
I can and have done this easily with a shell script that kills the rsync after
a given amount of
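Where coreutils is available, timeout(1) spares you the hand-written kill script. A sketch; the duration and the mirror path are made up:

```shell
# Stop the mirror run after four hours; combined with --partial, the
# next night's run picks up where this one left off.
timeout 4h rsync -a --partial rsync://mirror.example.org/module/ /local/mirror/
# timeout exits with status 124 when it had to kill the transfer
```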
Wayne Davison wrote...
On Thu, May 12, 2005 at 03:10:49PM -0300, C. P. wrote:
I think it may be useful to add an option like logrotate's prerotate and
postrotate, that triggers server side scripts before and after rsync.
It would help the administrator work.
I don't think I want to
Edwin Eefting wrote...
-What are the opinions of other people on this list?
Sounds like a great idea for me but I'm just an rsync user.
-Would it be easy to implement, or would it give too much trouble?
Without looking into the sources I think it should not be that difficult
to dump the
[EMAIL PROTECTED] wrote...
I'm trying to run rsync in server mode and it appears to start normally,
but it refuses all connections (refuses connection when I tried telnetting
in on localhost 873!).
Did the daemon actually start?
What does netstat -ln | grep :873 tell?
rsync --daemon
Hello,
As far as I can see there's no option to get a list of files/directories/
whatever that have been transferred. This is not exactly the same as the
--verbose output, for two reasons: first, the -v output has been
sanitized to avoid terminal confusion; second, this might be hard to
parse