On Thu, Jul 29, 2010 at 8:31 PM, Matt McCutchen m...@mattmccutchen.net wrote:
If you can arrange for the source directory to appear nonexistent rather
than empty in the event of a disk failure, rsync would exit with code 23
instead of cleaning out the destination. One hacky way to do that
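One such arrangement (a sketch of a common trick, not necessarily the one Matt meant; paths are hypothetical): keep the data on a mount and sync a subdirectory of it, so a dead or unmounted disk makes the source path nonexistent rather than empty.

```shell
# Hypothetical layout: /home is a mount point and /home/data holds the
# real tree. If the disk fails, /home is an empty directory and
# /home/data does not exist, so rsync fails with exit code 23 instead
# of mirroring an empty tree over the backup.
rsync -av --delete /home/data/ backup_server::home_mirror/
```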
I had a recent disaster scenario with rsync. I was wondering if there
are any suggestions to guard against it in the future:
I used to maintain mirror backups of the /home dir on our
production_server using rsync to a backup_server.
The primary server had an rsyncd daemon running and the
On Thu, Jul 29, 2010 at 7:57 PM, David Overton da...@overtons.id.au wrote:
Hi Rahul,
I have two suggestions:
1. Use --max-delete option to limit the amount of damage that can be
done when this happens.
That's a good option. Another way, of course, is for me to have some
kind of bash pre-script
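A pre-script along those lines might look like this (a sketch; the paths, module name, and --max-delete threshold are all hypothetical): refuse to mirror when the source looks empty, and cap deletions even when the sync does run.

```shell
# Guarded mirror: an empty or missing source usually means the disk
# dropped out, not that every user really deleted their files.
safe_mirror() {
    src=$1; dst=$2
    if [ -z "$(ls -A "$src" 2>/dev/null)" ]; then
        echo "source $src missing or empty, refusing to sync" >&2
        return 1
    fi
    # --max-delete bounds the damage even if the guard is fooled.
    rsync -av --delete --max-delete=1000 "$src/" "$dst"
}

# Example invocation (hypothetical destination):
# safe_mirror /home backup_server::home_mirror
```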
On Wed, Jul 28, 2010 at 3:31 AM, Paul Slootman paul+rs...@wurtel.net wrote:
You're off by 1000. It's 1.19MB/s (1.13MiB/s).
Check your calculator :-)
Ok, I'm signing up for math101 again. :) Sorry!
--
Rahul
--
Please use reply-all for most replies to avoid omitting the mailing list.
Is there a way to know from the rsync logs how long it took to do a backup?
The only timing info I see is this at the end:
sent 3067328 bytes received 7853035429 bytes 1187888.83 bytes/sec
total size is 1559866450336 speedup is 198.55
Can I use it to figure out how long the operation took?
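You can, assuming the printed rate is the byte totals divided by the elapsed wall time: elapsed = (sent + received) / rate. A quick sketch using the numbers from that summary line:

```shell
# Recover elapsed time from rsync's summary line:
# rate = (sent + received) / elapsed  =>  elapsed = (sent + received) / rate
sent=3067328
received=7853035429
rate=1187888.83
awk -v s="$sent" -v r="$received" -v b="$rate" \
    'BEGIN { t = (s + r) / b; printf "%.0f seconds (~%.1f hours)\n", t, t / 3600 }'
# prints roughly 6613 seconds (~1.8 hours)
```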
On Wed, Sep 30, 2009 at 4:41 AM, Paul Slootman paul+rs...@wurtel.net wrote:
On Tue 29 Sep 2009, Rahul Nabar wrote:
Is there a way to set rsync excludes by filesize? I already do
Did you look in the manpage?
man rsync, search for size and the 5th hit describes it.
Thanks Paul! I deserve
Is there a way to set rsync excludes by filesize? I already do
excludes by a wild card on the name but I needed something a bit more
sophisticated this time around.
I have files with the extension .nc (netcdf files) which tend to be
huge before the users post-process them. I want all files with a
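The option Paul points at is presumably --max-size; a sketch (the 100 MB threshold and paths are my own illustrative choices):

```shell
# Skip any file larger than the threshold while syncing the rest;
# rsync accepts size suffixes like k, m, g for --max-size.
rsync -av --max-size=100m /home/ backup_server::home_mirror/
```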
I am using rsync to keep a mirror of my 800GB /home (server1). The
backup machine is a separate server (server2). Currently I am running
rsync daemon on server2 and invoking rsync daily via cron on server2.
Are there design / performance considerations that influence which
machine is made the
While trying to find a strategy to avoid duplicating renamed files and
folders, I came across threads about a detect-renamed option. This
was a patch back in 2006 but I cannot find much about it later. Was
this aborted? Are there any other ways of smartly detecting renames?
On Tue, Jun 9, 2009 at 12:41 AM, Daniel.Li daniel...@usish.com wrote:
b) Monitor CPU and mem usage during backup;
Yes, you are right. CPU% as given by ps aux seems always very low. Only a few %.
I think you can test the system to find out what is the bottleneck.
Trying that now.
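One way to run that test while a backup is in flight (a sketch; iostat and vmstat come from the sysstat and procps packages on most Linux systems):

```shell
# Sample disk and CPU pressure a few times during the rsync run.
iostat -x 5 3   # %util near 100 on the backup disk => disk-bound
vmstat 5 3      # high "wa" (I/O wait) with low "us"/"sy" => not CPU-bound
```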
I run a
On Thu, Jun 4, 2009 at 12:09 AM, Daniel.Li daniel...@usish.com wrote:
I think multi-client can improve performance, but the limit is the same as
above.
I don't think I understand! Can I start more than one rsync --daemon
instances then?
What kind of performance are you concerned about,
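For what it's worth, multiple daemon instances can coexist if each gets its own config file, port, and pid file (the paths and second port below are hypothetical):

```shell
# Two independent rsync daemons on one host; each config file must set
# its own "pid file", "port" (or use --port), and module paths.
rsync --daemon --config=/etc/rsyncd-home.conf --port=873
rsync --daemon --config=/etc/rsyncd-scratch.conf --port=8873
```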
On Mon, Jun 8, 2009 at 2:55 PM, Leen Besselink l...@consolejunky.net wrote:
I think Daniel was trying to say that CPU might not be the bottleneck; it
could be the network or something else (I doubt it's memory; rsync 3 pretty
much solved that for most), and if you ask me it's probably disk.
Ah! I see.