I am in a situation where my destination has a different owner, file
creation time, and permissions, but the file content is exactly the
same. I am using --ignore-times and --size-only. It works. However, is
it possible to get rsync to change the ownership and time of the file
and even owner (I
bump
On Tue, Jan 11, 2011 at 8:52 PM, Mag Gam magaw...@gmail.com wrote:
Hello All,
I am trying to sync 2 directories.
src/year/month/day/fileA.csv
src/year/month/day/fileB.csv
src/year/month/day/fileC.csv
..
src/year/month/day/fileZ.csv
I would like to sync only file{B,D,T}.csv to my target directory so it
would look like this.
tgt/year/month/day/fileB.csv
Currently, I sync our Unix filesystem with hdfs with provided hdfs
tools. I was wondering if anyone used rsync to accomplish this.
TIA
--
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Is it possible to sleep 1 second after each file is rsynced?
Of course, I can put this in a for loop and do a sleep after each file
is done, I was wondering if there was anything native in rsync for
this type of operation.
TIA
not help, regarding i/o.
Care to elaborate?
It wasn't obvious whether Mag Gam was concerned about
bandwidth usage or process usage, and seeing that Benjamin
had already provided the --bwlimit hint, I thought I would add
a process-friendly clue as well.
The real I/O gets done outside the rsync
I know rsync can do many things but I was wondering if anyone is using
it for data deduplication on a large filesystem. I have a filesystem
which is about 2TB and I want to make sure I don't have the same data
in a different place of a filesystem. Is there an algorithm for that?
TIA
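rsync itself doesn't deduplicate within a filesystem, but the same idea (a strong hash per file, then grouping identical digests) can be sketched with standard GNU tools. On a 2TB filesystem this is I/O-bound and will take a while; the tree below is a toy placeholder:

```shell
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/fs/deep"
echo "payload" > "$tmp/fs/a.dat"
echo "payload" > "$tmp/fs/deep/b.dat"   # duplicate content, different path
echo "unique"  > "$tmp/fs/c.dat"
# Hash every file, sort by digest, and print only groups of lines whose
# first 64 characters (the sha256 hex digest) repeat.
find "$tmp/fs" -type f -exec sha256sum {} + | sort |
    uniq -w64 --all-repeated=separate > "$tmp/dups.txt"
cat "$tmp/dups.txt"
```

Each blank-line-separated group in dups.txt is one set of files with identical content.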
Thanks
On Tue, May 25, 2010 at 1:26 PM, Benjamin Watkins
ben-l...@constant-technologies.com wrote:
On 5/25/2010 6:41 AM, Mag Gam wrote:
I know rsync can do many things but I was wondering if anyone is using
it for data deduplication on a large filesystem. I have a filesystem
which is about
I am trying to rsync some files from ext3 to fat32 pen drive. What is
the correct way to do this?
I am currently using, --progress -av --no-o --no-g --exclude '*iso' /ext3 /fat32
Are there any other options I should consider?
TIA
Matt McCutchen m...@mattmccutchen.net wrote:
On Thu, 2009-08-27 at 22:57 -0400, Mag Gam wrote:
Is it possible to stream the content of a file using rsync to stdout
instead of placing it into a file?
No. Consider rdiff, which lets you call each of the three steps of the
delta-transfer algorithm from
Of course, but I really don't want to copy the file to the destination. I
would like to direct it to a buffer or a pipe. Is that possible?
On Fri, Aug 28, 2009 at 2:49 AM, Simon Powell si...@tranmeremail.org.uk wrote:
You could just cat it?
On 28 Aug 2009, at 03:57, Mag Gam wrote:
Is it possible
Is it possible to stream the content of a file using rsync to stdout
instead of placing it into a file?
Hello all,
Is it possible to compile rsync to be self-contained? Meaning, I want
to have one binary which has all of its libraries compiled into it,
and of course rsync itself. I want to have a self-contained version of
rsync.
TIA
Using inotify with rsync is a great idea.
If one has a job that runs daily to get differences on a very large
filesystem with very small files, one can do this (assuming the
initial copy is already completed):
inotify watch on the source filesystem (or tree)
record all the notifications in a text file
It works, but it takes hours. I was wondering if there was a faster way.
On Fri, Feb 27, 2009 at 8:04 AM, Paul Slootman paul+rs...@wurtel.net wrote:
On Fri 27 Feb 2009, Mag Gam wrote:
I have to rsync 200k files which range in size from 5kb to 800kb. Is
there an optimal way to do this using
hi,
I am trying to rsync a very large filesystem which is about 3TB, but
naturally I want to exclude a lot of things. However, I am really
struggling with excluding directories.
SRC=/dasd/december/2008 #Notice there is no trailing slash
TARGET=/backup/december/2008 #Notice there is no trailing slash
this.
On Sun, Jan 25, 2009 at 11:03 AM, Matt McCutchen m...@mattmccutchen.net wrote:
On Sun, 2009-01-25 at 10:29 -0500, Mag Gam wrote:
I am trying to rsync a very large filesystem which is about 3TB, but
naturally I want to exclude a lot of things. However, I am really
struggling with excluding
Hello All,
I have been using rsync to backup several filesystems by using Mike
Rubel's hard link method
(http://www.mikerubel.org/computers/rsync_snapshots/).
The problem is, I am backing up a lot of ASCII .log, .csv, and .txt
files. These files are large and can range anywhere from 1GB to 30GB.
Thanks all.
I figured this was the only solution available. Too bad I am using
Linux and don't think my RAID controller is supported under Solaris.
On Mon, Jan 19, 2009 at 10:41 AM, Kyle Lanclos lanc...@ucolick.org wrote:
You wrote:
The problem is, I am backing up a lot of ASCII .log, .csv,
yep.
ZFS on fuse is just too slow. I suppose I will wait for ZFS on Linux
(pipe dream) or try to switch to Solaris 10 on x86
On Mon, Jan 19, 2009 at 1:34 PM, Ryan Malayter malay...@gmail.com wrote:
On Mon, Jan 19, 2009 at 12:33 PM, Ryan Malayter malay...@gmail.com wrote:
You can switch to a
all it's cracked up to be... (no flame
intended).
Eitherway, thanks for everyone's time and replies.
TIA
On Mon, Jan 19, 2009 at 4:14 PM, Ryan Malayter malay...@gmail.com wrote:
On Mon, Jan 19, 2009 at 2:34 PM, Mag Gam magaw...@gmail.com wrote:
ZFS on fuse is just too slow. I suppose I
Thanks for the fast response Vitorio.
Do you happen to have a simple example? I have been trying to search
Google but have been unsuccessful.
On Wed, Nov 26, 2008 at 8:11 AM, Mac User FR [EMAIL PROTECTED] wrote:
Mag Gam wrote:
Is it possible to implement snapshots schema without NFS or filesystem
Thanks Matt. I suppose I could use rsync to find out how big a
directory is then... right?
rsync --progress -avzL -n /source /foo
That should give me the total number of bytes to transfer
On Wed, Oct 15, 2008 at 3:03 PM, Matt McCutchen [EMAIL PROTECTED] wrote:
On Tue, 2008-10-14 at 07:31 -0400, Mag
Thanks.
Does rsync use stat()? Does find use stat() when running with -printf?
I think the stat() call is the most expensive part.
On Tue, Oct 14, 2008 at 12:30 AM, Matt McCutchen [EMAIL PROTECTED] wrote:
On Tue, 2008-10-14 at 00:28 -0400, Mag Gam wrote:
Great. Thanks matt. I was using the find
Would it be more efficient to use rsync to get filestats instead of
using the 'find' command? I would like to know how big a directory is
on a filesystem, but this directory has millions of small files. I was
wondering if rsync would be more efficient than find when using -n
options.
TIA
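Both tools end up calling stat() (or a relative like lstat()) on every file; for a pure size total, find plus awk skips rsync's transfer setup entirely. A sketch over a toy directory:

```shell
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/dir"
printf '12345' > "$tmp/dir/a"    # 5 bytes
printf '123'   > "$tmp/dir/b"    # 3 bytes
# GNU find's -printf '%s\n' prints each file's size; awk sums them.
total=$(find "$tmp/dir" -type f -printf '%s\n' | awk '{s+=$1} END {print s}')
echo "$total"    # 8
```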
at 22:50 -0400, Mag Gam wrote:
Would it be more efficient to use rsync to get filestats instead of
using the 'find' command? I would like to know how big a directory is
on a filesystem, but this directory has millions of small files. I was
wondering if rsync would be more efficient than find when
At our lab we have storage with many small files. For example a
directory can contain over 15,000 files and each file averages about
75k. I would like to sync this to another filesystem on a different
server, but I am not sure if there is an rsync tuning flag I can use for
such an intensive job. I am