You only log that you would like to mount /backup, but the actual command
is missing. You should also log errors, so something like
/usr/bin/mount /backup >> /var/log/rsyncd.log 2>&1
would be adequate before your line to check what is mounted.
Hope this helps
Hardy
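Hardy's suggestion could be sketched as a small helper, assuming a Linux system where the mountpoint(1) utility (util-linux) is available. The mount point and log file are the ones from the thread; the verification step is an addition:

```shell
# Mount the backup volume, logging stdout and stderr, then verify the
# mount actually happened before the backup continues.
log_mount() {   # usage: log_mount /backup /var/log/rsyncd.log
    mount "$1" >>"$2" 2>&1
    if ! mountpoint -q "$1"; then
        echo "mount of $1 failed, aborting" >>"$2"
        return 1
    fi
}
```

If the helper returns non-zero, the backup script can bail out before rsync runs against an empty mount point.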
On 24.09.22 at 15:15, the following was written:
You aren't logging any stderr. That is where any error messages would
go. Add some 2>&1.
Also, mount has a -v
On 9/24/22 09:15, dotdeb--- via rsync wrote:
I've been using rsync for years to backup my machines both at work and
at home.
These days I faced a new "challenge": at work I connect
Yes, cpio -l can be useful since cpio can easily operate on the output
from the very capable find command.
On 9/4/21 8:34 PM, Dan Stromberg wrote:
>
> I was thinking --link-dest too.
>
> Sometimes this can be done with cpio too; check out the -pdlv options.
>
> On Sat, Sep 4, 2021 at 4:57 PM
I was thinking --link-dest too.
Sometimes this can be done with cpio too; check out the -pdlv options.
On Sat, Sep 4, 2021 at 4:57 PM Kevin Korb via rsync
wrote:
> Rsync does almost everything cp does but since it is designed to network
> it never got that feature. I was thinking maybe
Rsync does almost everything cp does but since it is designed to network
it never got that feature. I was thinking maybe --link-dest could be
tortured into doing it but if it can I can't figure out how. BTW, you
have some pointless dots in there.
On 9/4/21 6:41 PM, L A Walsh via rsync wrote:
>
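The hard-link copy that cp offers (and that, per the message above, rsync never grew) can be demonstrated in a temp directory; GNU cp and stat are assumed:

```shell
# Demo of a hard-link "copy": the new tree shares every file's data
# with the original, so it costs almost no extra space.
work=$(mktemp -d)
mkdir "$work/day1"
echo "photo data" > "$work/day1/img.jpg"
cp -al "$work/day1" "$work/day2"    # -a archive, -l hard-link instead of copying
stat -c %h "$work/day2/img.jpg"     # link count is now 2
```

This is the same effect the cpio -pdl pass-through gives: one set of data blocks, two directory trees.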
No problem
On Tue, Mar 10, 2020, 18:05 raf via rsync wrote:
> raf via rsync wrote:
>
> > T. Shandelman via rsync wrote:
> >
> > > Rsync is a remarkably handy tool that I use virtually every day.
> > >
> > > But there is one thing about rsync that drives me totally crazy.
> > >
> > > Under the
raf via rsync wrote:
> T. Shandelman via rsync wrote:
>
> > Rsync is a remarkably handy tool that I use virtually every day.
> >
> > But there is one thing about rsync that drives me totally crazy.
> >
> > Under the -n (dry run) flag, rsync seems to produce exactly the same output
> > as
T. Shandelman via rsync wrote:
> Rsync is a remarkably handy tool that I use virtually every day.
>
> But there is one thing about rsync that drives me totally crazy.
>
> Under the -n (dry run) flag, rsync seems to produce exactly the same output
> as without that flag.
>
> I cannot tell you
If you used -v then the very last line rsync outputs is:
total size is ### speedup is ### (DRY RUN)
--
Kevin Korb
Systems Administrator
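That (DRY RUN) marker can be checked in a self-contained way, assuming rsync is installed; everything happens in a temp directory:

```shell
# With -v, a dry run (-n) tags its final summary line, which is the
# easiest way to tell the two outputs apart.
d=$(mktemp -d)
mkdir "$d/src"
echo hello > "$d/src/f"
rsync -avn "$d/src/" "$d/dst/" | tail -n 1   # "total size is ... (DRY RUN)"
```

Without -n the same command prints the summary line with no tag, and the destination actually gets written.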
Yeah, I don't have other symlink.
But I'm thinking of changing my folder structure to reflect the data I
really need on the NAS. So, as a side effect, the special rsync is not
needed any more :)
Anyway, thanks for the answers!
Bye
*___Gionata Boccalini*
2015-06-09 13:25 GMT+02:00
Should be as long as you don't have other symlinks in the tree.
On Mon, Jun 8, 2015, 15:14 Gionata Boccalini gionata.boccal...@gmail.com
wrote:
OK, but then the solution with symlinks is equivalent, just with the
right options for rsync.
Make the link.
Sync + exclude.
Remove the link.
Hello,
I've been tasked with migrating a smallish (@90 mailboxes) company from
a linux/dovecot mail server to Office 365, and after experiencing a ton
of issues with Microsoft's native Imap syncing tool, I decided to use
Imapsync, and it is working perfectly.
It has the ability to add a simple
Thanks Joe for the reply:
1) why do you say to use fuzzy twice? Do you mean in both directions?
2) I have to mention that the remote system is a Synology NAS, which for
whatever reason (I can't think of one) doesn't support symlinks, even
within the same disk volume or share!
But I could make some
OK, but then the solution with symlinks is equivalent, just with the right
options for rsync.
Make the link.
Sync + exclude.
Remove the link.
Don't have to live with the folder on the source.
*___Gionata Boccalini*
2015-06-08 22:49 GMT+02:00 Michael Johnson - MJ m...@revmj.com:
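The three steps above can be sketched locally (the thread's real destination is a NAS; a second temp directory stands in for it here):

```shell
# 1. make the link, 2. sync following it, 3. remove the link
top=$(mktemp -d)
mkdir -p "$top/home/Musica" "$top/nas"
echo song > "$top/home/Musica/track.mp3"
ln -s "$top/home/Musica" "$top/home/music"                  # make the link
rsync -a --copy-links "$top/home/music/" "$top/nas/music/"  # sync through it
rm "$top/home/music"                                        # remove the link
```

The source folder keeps its original name; only the transfer ever sees the alias.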
I should describe the problem in more detail, but I believe this is off
topic for this list.
FolderA is named Musica (in Italian) because I like it that way,
and it is in my home folder.
PC # /home/gionata/Musica
FolderB MUST be named music, in my home folder on the NFS filesystem,
NAS #
The symlinks were mostly a shot in the dark. They're often useful when
you need synonyms.
The --fuzzy: I believe one handles different names and the second one
adds different locations.
I have thought about using it for issues I have reorganizing collections
of media files, but never got
I'm sure one of the experts will have a better answer, but two things
come to mind as options to explore:
1) Use --fuzzy twice so files which are the same but possibly with
different names and locations are synced
2) Use some sort of symlinks on the destination so the names actually
match
On Wed, Jul 16, 2014 at 6:40 PM, Don Cohen don-rs...@isis.cs3-inc.com
wrote:
An output line like asd\#002\#003zxc could either mean a file of that name
or asd^B\#003zxc or asd^B^Czxc or asd\#002^Czxc
Did you test that theory? Give it a try and you'll discover that \#
followed by 3 digits in
An output line like asd\#002\#003zxc could either mean a file of
that name or asd^B\#003zxc or asd^B^Czxc or asd\#002^Czxc
Did you test that theory? Give it a try and you'll discover that \#
followed by 3 digits in a filename always encodes the backslash, so
there is never an
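That property (a \# is always followed by exactly three octal digits) makes the encoding reversible. A hypothetical decoder, assuming a POSIX shell with sed and printf %b:

```shell
# Turn rsync's \#ooo escapes back into raw bytes: rewrite \#ooo as the
# \0ooo octal escape, then let printf %b expand it.
decode_rsync_name() {
    printf '%b' "$(printf '%s' "$1" | sed 's/\\#\([0-7][0-7][0-7]\)/\\0\1/g')"
}
decode_rsync_name 'asd\#011zxc'   # prints asd, a literal TAB, zxc
```

Since a literal backslash in a filename is itself encoded as \#134, the decoded result is unambiguous.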
Hi.
On Wed, 16 Jul 2014 23:24:45 -0700 Don Cohen wrote:
So another question/suggestion - if you save the output it would be
nice to be able to pipe it back into rsync as the list of files to
be transferred - which would be easier if there were a switch to do
the translation above. ...
Not
The solution you are missing is that rsync can archive files itself
using either --link-dest or --backup depending on whether you want a
complete tree in the archive or not.
On 07/16/2014 09:40 PM, Don Cohen wrote:
It seems to me that this output
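A minimal runnable sketch of the --link-dest variant (rsync assumed installed, GNU stat used to inspect link counts): the second run stores only changed files, and unchanged ones become hard links into the previous run.

```shell
b=$(mktemp -d)
mkdir "$b/src"
echo stable > "$b/src/keep"
rsync -a "$b/src/" "$b/back-1/"                         # full first run
rsync -a --link-dest="$b/back-1" "$b/src/" "$b/back-2/" # incremental run
stat -c %h "$b/back-2/keep"                             # 2: shared with back-1
```

Each run still looks like a complete tree, which is the "complete tree in the archive" case mentioned above; --backup covers the other case.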
On Fri, Jan 3, 2014 at 12:39 PM, Bill Dorrian dorrian.2...@comcast.net wrote:
The script that I'm running works - sort of - in that it syncs the files;
but it syncs their parent directories too, which I'm trying to avoid.
--files-from implies -R (--relative), which tells rsync to include the
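A runnable sketch of the point being made, assuming rsync is installed: --files-from switches on --relative, which is what recreates the parent directories, and --no-relative turns that back off.

```shell
w=$(mktemp -d)
mkdir -p "$w/src/deep/dir"
echo data > "$w/src/deep/dir/file"
printf 'deep/dir/file\n' > "$w/list"
rsync -a --files-from="$w/list" --no-relative "$w/src/" "$w/dst/"
ls "$w/dst"    # only "file"; the deep/dir/ parents are not recreated
```

Note that with --no-relative all listed files land directly in the destination, so identically named files from different directories would collide.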
I don't know of a simple solution that would work with both --delete
and with any number of files but here is an idea...
Make an additional folder and link all the mp3 files into it then
rsync that folder...
rm -rf /backup/Music.flat
mkdir
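A runnable sketch of the idea, with a tiny fake collection in a temp directory (the thread's real paths were /backup/Music and /backup/Music.flat):

```shell
m=$(mktemp -d)
mkdir -p "$m/Music/album" "$m/Music.flat"
echo audio > "$m/Music/album/song.mp3"
# hard-link every mp3 into the flat folder; costs no extra space
find "$m/Music" -name '*.mp3' -exec ln {} "$m/Music.flat/" \;
ls "$m/Music.flat"    # song.mp3, ready to be rsynced with --delete
```

One caveat: flattening assumes unique file names; two songs with the same name in different albums would collide in the flat folder.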
On 2014-01-01 2:02 PM, Wayne Davison way...@samba.org wrote:
On Tue, Dec 31, 2013 at 3:59 AM, Charles Marcus
cmar...@media-brokers.com wrote:
On the old server, dovecot is configured to just use
.../example.com/user for the
On Tue, Dec 31, 2013 at 3:59 AM, Charles Marcus
cmar...@media-brokers.com wrote:
On the old server, dovecot is configured to just use .../example.com/user for
the maildirs.
On the target server, I want to change this to
.../example.com/user/Maildir
One thing you can do is to add a symlink
On Thu, Feb 21, 2013 at 2:11 PM, Jason Keltz j...@cse.yorku.ca wrote:
As far as I understand, even though rsync is running on the client, the
server is trying to write the batch file locally?
No, the batch file is always output by whatever side is running the rsync
command. You either need
On Fri, Aug 10, 2012 at 9:03 AM, T.J. Crowder t...@crowdersoftware.com wrote:
1. Am I correct in inferring that when rsync sees data for a file in the
--partial-dir directory, it applies its delta transfer algorithm to the
partial file?
2. And that this is _instead of_ applying it to the real
On Sun, Aug 12, 2012 at 10:41 AM, Wayne Davison way...@samba.org wrote:
I have imagined making the code pretend that the partial file and any
destination file are concatenated together for the purpose of generating
checksums.
Actually, that could be bad if the destination and partial file
Hi,
Thanks for that!
On 12 August 2012 18:41, Wayne Davison way...@samba.org wrote:
I have imagined making the code pretend that the partial file and any
destination file are concatenated together for the purpose of generating
checksums. That would allow content references to both files,
On Tue, Nov 22, 2011 at 3:40 PM, Chris Adams chris.a.ad...@state.or.us wrote:
I would like to include the IP and/or hostname of the machine being
backed up
Since you are initiating the transfer of a remote machine, you can put
whatever you like into your log string option as literal
I do not understand the context of your question. However, a networked
rsync not using --whole-file will do a delta xfer if it sees a
difference in mtime or file size. The fact that the file was deleted
and recreated vs modified while rsync was not
Hi,
Thanks for the reply.
I don't understand why I sent so many messages; it may be my Outlook. Sorry
again.
It may seem strange, as I wrote before, but that is what happened to me.
I have a remote application that every 3 hours recreates the file x, which the
first time will be moved
Hi,
Thank you very much, it's working with --inplace --no-whole-file.
Now snapshots of a 15GB database backup only take a few kilobytes a day
instead of 15GB.
Mickaël
On Wed, 2011-06-22 at 18:02 +0100, jer...@jeremysanders.net wrote:
Mickaël CANÉVET wrote:
I was wondering if there is a
Mickaël CANÉVET wrote:
I was wondering if there is a way to tell rsync to replace only the
different blocks in the case of an in-place update.
When I rsync a huge binary file that changes often to a Copy-On-Write
filesystem for backing it up (ZFS in my case, but I suppose that btrfs
will act the
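A runnable sketch of what --inplace --no-whole-file buys on a copy-on-write filesystem, assuming rsync and GNU stat: the destination inode is preserved, so only the rewritten blocks are new to the filesystem.

```shell
t=$(mktemp -d)
printf 'AAAA\n' > "$t/src"
printf 'AAAB' > "$t/dst"
before=$(stat -c %i "$t/dst")
rsync --inplace --no-whole-file "$t/src" "$t/dst"   # delta update, same file
after=$(stat -c %i "$t/dst")
[ "$before" = "$after" ] && echo "same inode"
```

Without --inplace, rsync builds a temp file and renames it over the destination, which a COW snapshot sees as an entirely new file.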
On Mon, 2009-03-16 at 09:55 -0600, Paul E Condon wrote:
The --backup option in GNU mv, and GNU cp extend the behavior of the
-b option in a significant way, I believe. --backup allows
specification of versioned backups, especially numbered backups, e.g.
The old version of file, foo, becomes
On Thu, Dec 04, 2008 at 05:58:05PM -0600, Larry Hayes wrote:
I have tried several other combinations of '\'' and single and double
quoting the entire path or just the filename, with no luck.
There's no such thing as quoting in an include/exclude file. Anything
after an initial -/+ and a space
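A runnable illustration of that rule, with a hypothetical include file for a path containing spaces; the spaces are written literally, one pattern per line, no quotes:

```shell
f=$(mktemp)
cat > "$f" <<'EOF'
+ /My Documents/***
- *
EOF
cat "$f"   # the space in "My Documents" is part of the pattern itself
```

Passed via --include-from, the first rule keeps the directory and everything under it, and the second excludes the rest.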
Matt McCutchen wrote:
(It would have made my life easier if you had replied directly to that
message so I didn't have to search for it.)
Ahh yes - it was an old thread back from June - I kinda did, but the
question was a bit weird different and the thread ended.
Either way, sorry.
If I
On Wed, 2008-10-22 at 14:38 +0200, Michal Soltys wrote:
A good while ago I asked about difference between --delete-during/delay
and --delete-after, when per-directory files are updated (all is
perfectly clear for me here), but during the discussion there was a hint
made by Wayne, that the
I've removed the file from the destination machine and still received the
error. When I disabled the preallocate option it worked. I suspect there
is an issue in the cygwin preallocate option in 1.7. I'll report it to the
cygwin message board.
Rob
Thanks for the tip. The destination is on a fibre channel array. I'm able
to replicate the issue when trying to rsync locally and I get a read error.
I'm wondering if it is a hardware issue. I'm deleting the file and letting
rsync recreate it...then I'll see if the issue occurs again. You're
On Tue, 2008-09-30 at 14:20 -0600, Rob Bosch wrote:
2008/09/30 12:09:55 [12508] rsync: write failed on /EDrive/testfile.edb
(in Test.Backup):Resource temporarily unavailable (11)
That error is coming from the destination filesystem. What happens if
you copy the files to another place on the
Forgot to mention that this is my command syntax:
rsync -gloprtuvz -e ssh --delete --log-file=/var/log/rsync-transfer.log
--output-format=%i srcServer:/srcDir dstServer:/dstDir
Thanks,
Chris Bidwell, RHCT
Web Administrator
Geologic Hazards Team
US Geological Survey
email:
On Sun, Dec 16, 2007 at 07:46:56PM -0500, Ming Zhang wrote:
I had sent a memory leak fix in print_rsync_version() a while ago. Not
sure if that was considered? Or just left to OS cleanup?
I had decided that since the leak was in a function that is about to
exit that I didn't want to add
On Wed, 2007-12-12 at 14:13 +, Chris G wrote:
I was expecting that if I specified the --copy-unsafe-links option to
rsync that I'd then get no warnings about 'skipping non-regular file
bla/bla/bla' but it doesn't seem to work like that.
You have to additionally pass --links to make rsync
On Wed, Dec 12, 2007 at 09:55:36AM -0500, Matt McCutchen wrote:
On Wed, 2007-12-12 at 14:13 +, Chris G wrote:
I was expecting that if I specified the --copy-unsafe-links option to
rsync that I'd then get no warnings about 'skipping non-regular file
bla/bla/bla' but it doesn't seem to
On 3/8/07, Allan Gottlieb [EMAIL PROTECTED] wrote:
This may indeed be working correctly, but I noticed that no matter how
many -v I use (I tried up to 4) I could not get a confirmation that
local-0 was found to agree with the copy on the target, even though I
use --checksum.
I do see several
On Thu, Feb 01, 2007 at 10:06:48PM -0500, Matt McCutchen wrote:
While we're on the topic: I was dismayed to discover a while ago that
rsync doesn't allow different kinds of basis dirs in the same command
(e.g., --compare-dest=foo --link-dest=bar).
I'm trying to imagine how that would be useful
On 2/16/07, Wayne Davison [EMAIL PROTECTED] wrote:
I'm trying to imagine how that would be useful because one of the things
that the options do is to control how the destination hierarchy is
populated, and there's only one destination hierarchy. About the only
useful combination I can come up
On 1/30/07, Wayne Davison [EMAIL PROTECTED] wrote:
You're right. That means that the multi-option version of compare-dest
is not working as it should. I need to change the code so that rsync
creates a new version anytime the most recent version of the file
differs from the sender's version
On Mon 29 Jan 2007, Matt McCutchen wrote:
On 1/29/07, Wayne Davison [EMAIL PROTECTED] wrote:
If you
want to store the new, changed files, use one or more --compare-dest
options (one pointing at an old full backup, and an extra option for any
intervening incrementals).
This approach won't
On Mon, Jan 29, 2007 at 04:56:16PM -0500, Matt McCutchen wrote:
This approach won't work because rsync will skip a file if it is in
the same state now as in any of the backups, not just the most recent
one. Thus, if I change a file and change it back, the fact that I
changed it back would not
On Mon 29 Jan 2007, Blake Carver wrote:
I current do some rsync backups with a command like so every day
rsync -az -e ssh --stats --delete --exclude stuff /
[EMAIL PROTECTED]:/home/user/
What I want to do is have some incremental backups in there in
subdirectories. So, for example,
On Mon, Jan 29, 2007 at 10:34:39AM -0500, Blake Carver wrote:
I thought the --backup --backup-dir switches were used to store just
the files that had changed in separate directories, am I wrong on
that?
It stores the old files that are being updated or deleted, moving (or
copying) them before
On 11/27/06, Ben Anderson [EMAIL PROTECTED] wrote:
I'm using cwrsync (with rsync 2.6.9) via ssh
Careful: when we say rsync via ssh, we usually mean that the client
rsync invokes a second instance of rsync on the server as the ssh
remote command. Your setup counts as talking directly to an
100gb of 4-40MB files sounds like my home PC full of digital photos I've
taken. It backs up to a linux PC right beside it with rsync. I don't
really call it that big a project for rsync. Big things for rsync are
millions of files. At 100mbps, it takes a few seconds to build the list.
I use the
jp wrote:
100gb of 4-40MB files sounds like my home PC full of digital photos I've
taken. It backs up to a linux PC right beside it with rsync. I don't
really call it that big a project for rsync. Big things for rsync are
millions of files. At 100mbps, it takes a few seconds to build the
Jamie Lokier wrote:
Hmm. My home directory, on my laptop (a mere 60GB disk), does contain
millions of files, and it takes about 20 minutes to build the list on
a good day. 100Mbps network, but it's I/O bound not network bound.
It looks a lot like the number of files is more significant than
On Mon, Mar 06, 2006 at 07:18:45PM +0200, Shachar Shemesh wrote:
In fact, I know of at least one place where they don't use rsync because
they don't have enough RAM+SWAP to hold the list of files in memory.
As far as future directions for rsync, I think this is the major place
where rsync
Shachar Shemesh wrote:
Hmm. My home directory, on my laptop (a mere 60GB disk), does contain
millions of files, and it takes about 20 minutes to build the list on
a good day. 100Mbps network, but it's I/O bound not network bound.
It looks a lot like the number of files is more significant
Wayne Davison wrote:
On Mon, Mar 06, 2006 at 07:18:45PM +0200, Shachar Shemesh wrote:
In fact, I know of at least one place where they don't use rsync because
they don't have enough RAM+SWAP to hold the list of files in memory.
As far as future directions for rsync, I think this is the
Jamie Lokier wrote:
While you're there, one little trick I've found that speeds up
scanning large directory hierarchies is to stat() or open() entries in
inode-number order. For some filesystems it makes no difference, but
for others it reduces the average disk seek time as on many common
Shachar Shemesh wrote:
While you're there, one little trick I've found that speeds up
scanning large directory hierarchies is to stat() or open() entries in
inode-number order. For some filesystems it makes no difference, but
for others it reduces the average disk seek time as on many common
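The trick reads roughly like this in shell (GNU find assumed; the directory is a throwaway stand-in): print each entry with its inode number, sort numerically, and process in that order.

```shell
s=$(mktemp -d)
touch "$s/a" "$s/b" "$s/c"
# inode-ordered listing: one "inode path" pair per line, sorted by inode
find "$s" -mindepth 1 -printf '%i %p\n' | sort -n | cut -d' ' -f2-
```

Feeding the resulting list to stat() or open() in that order approximates the on-disk layout of the inode table, which is where the seek savings come from.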
Subject: Re: Question about rsync and BIG mirror
Thanks for all your answers and advice. My problem seems to be on the side of
the 2MB line once the whole 190GB of data is synchronised. I will keep
in touch and give some feedback.
[EMAIL PROTECTED] wrote:
Hello,
So: each night, from 0:00am to at most 7:00am, the server will have to
check the 100GB of files and see which files have been modified, then
upload them to the clients. Each file is around 4MB to 40MB on average.
Are the clients what you call the mirror?
On Fri, 2006-03-03 08:02:55 +0100, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
// I wonder if this message has been posted, so I sent it again //
It was, but nobody answered yet.
I'm preparing a plan for a production setup in my company: we need to
mirror around 100GB of data through a special
Flames invited if I'm wrong on any of this, but:
Some (long overdue) backups indicate that network speed
should be much more important than cpu speed.
Your results will depend heavily on your exact mix
and I cannot think of any reasonable way to quantify it.
That said, this may help give you a
[mailto:[EMAIL PROTECTED]
Sent: Saturday, 1 October 2005 17:52
To: NGUYEN, Laurent (ext.)
Cc: rsync@lists.samba.org
Subject: Re: question about librsync: patch function
NGUYEN, Laurent (ext.) wrote:
About librsync, does anyone know how to patch the delta without
creating a new file?
While
Mario Tambos wrote:
the sum of the files' transferred bytes is 48542663. It doesn't
match the received bytes (about 8MB less)
That's because the summary total includes data that was sent outside of
the file transfers, such as the data for the file list (which is
probably the majority
On Tue, Sep 27, 2005 at 02:48:39PM -0500, Max Kipness wrote:
This works fine, however when trying to use cp -al to make incremental
copies, each copy always ends up being 53Gb in size.
How are you measuring that? If you use du on individual directory
hierarchies, it will always report the full
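The measurement pitfall can be reproduced in a temp directory (GNU du and cp assumed): hard-linked data is charged only to the first tree du sees within a single invocation.

```shell
u=$(mktemp -d)
mkdir "$u/day1"
head -c 100000 /dev/zero > "$u/day1/big"
cp -al "$u/day1" "$u/day2"          # hard-linked "incremental" copy
du -sk "$u/day2"                    # alone: the full ~100K
du -sk "$u/day1" "$u/day2"          # together: day2 shows almost nothing
```

So measuring each day's directory with a separate du run makes every incremental copy look like a full copy.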
On Thu, 2005-08-18 04:45:21 -0500, Evan Harris [EMAIL PROTECTED] wrote:
Is there any way to disable the checksum block search in rsync, or to
somehow optimize it for systems that are processor-bound in addition to
being network bound?
By design, rsync trades CPU power for bandwidth.
Option
On Thu, 18 Aug 2005, Jan-Benedict Glaw wrote:
By design, rsync trades CPU power for bandwidth.
True. But just because that is its main focus doesn't mean we can't also
provide a facility for hinting at the types of files being transferred to
lessen the impact of that tradeoff for systems that
On Thu, Aug 18, 2005 at 04:45:21AM -0500, Evan Harris wrote:
Is there any way to disable the checksum block search in rsync, or to
somehow optimize it for systems that are processor-bound in addition
to being network bound?
The --whole-file option (-W) disables the rsync algorithm entirely,
On Thu, 18 Aug 2005, Wayne Davison wrote:
The --whole-file option (-W) disables the rsync algorithm entirely, but
not the full-file checksum to verify that the file was transferred
correctly.
Unfortunately, for these huge files, I don't want to retransfer the part
that has already been
On Thu, Aug 18, 2005 at 01:48:08PM -0500, Evan Harris wrote:
Will that be going into the upcoming 2.6.7 version?
Yes.
One question: does it also do a rudimentary check to make sure that
the last block that is still present still matches on the sender and
receiver, so it can catch files and
On Tue, Jun 28, 2005 at 02:22:22PM +0200, [EMAIL PROTECTED] wrote:
Rsync works fine for me (the rules are reported below) except for one point:
rsync creates an empty folder structure that I don't want.
The only way to get rsync to not create directory hierarchies that don't
contain *.txt files is to
On Fri, Sep 24, 2004 at 03:47:03PM +0200, Paul Slootman wrote:
If I then run it again, I get the following [a different hashed file]
I didn't see that in my just-run test. I did notice a problem with the
code not removing an existing destination file prior to trying to hard
link a hashed file
One thing that the link-by-hash patch needs is an additional close();
without that, I quickly ran into too many open files.
--- hashlink.c.old 2004-09-24 10:59:12.0 +0200
+++ hashlink.c 2004-09-24 10:59:20.0 +0200
@@ -280,6 +280,7 @@
}
The --link-by-hash patch is a bit defective, I think.
If I run the following command:
rsync --link-by-hash=/tmp/hash 192.168.1.1::mirrors/ps1 /tmp
I get the following output:
(1) linkname = /tmp/hash/0fb9ca1a/3cc6ec7f5a2de3a0235b585f/0
link-by-hash (new): /tmp/ps1 -
On Wed 22 Sep 2004, Erik Jan Tromp wrote:
On Wed, 22 Sep 2004 13:21:31 +0200
Paul Slootman [EMAIL PROTECTED] wrote:
I had hoped to use it both for my rotating backups for my (unofficial)
slackware mirror.
Hmmm... For a slackware mirror I expect that it would be fine.
To my eyes,
On Wed 22 Sep 2004, Wayne Davison wrote:
On Wed, Sep 22, 2004 at 04:54:32AM -0400, Erik Jan Tromp wrote:
Are there plans to make --link-by-hash pay attention to file externals?
The issue has come up before:
http://lists.samba.org/archive/rsync/2004-February/008630.html
I don't know of
On Thu, Sep 23, 2004 at 04:14:27PM +0200, Paul Slootman wrote:
On Wed 22 Sep 2004, Erik Jan Tromp wrote:
rsync://rsync.samba.org/ftp/unpacked/rsync/patches/link-by-hash.diff
Unfortunately that seems to have tabs expanded, and at one point a
line was wrapped.
The unpacked files are taken
On Wed 22 Sep 2004, Erik Jan Tromp wrote:
I had noticed the --link-by-hash patch a short while back and decided it was time to
experiment with it. Sadly, its behaviour is considerably different from what I
expected, to the point that I find it unusable in its current form. I had hoped to
On Wed, 22 Sep 2004 13:21:31 +0200
Paul Slootman [EMAIL PROTECTED] wrote:
I had hoped to use it both for my rotating backups for my (unofficial)
slackware mirror.
Hmmm... For a slackware mirror I expect that it would be fine.
To my eyes, a mirror implies a duplicate fileset
On Wed, Sep 22, 2004 at 04:54:32AM -0400, Erik Jan Tromp wrote:
Are there plans to make --link-by-hash pay attention to file externals?
The issue has come up before:
http://lists.samba.org/archive/rsync/2004-February/008630.html
I don't know of any plans for changing the --link-by-hash patch,
On Fri 10 Sep 2004, Wayne Davison wrote:
As indicated in the rsyncd.conf man page, the command should be this:
rsync stream tcp nowait publish /usr/bin/rsync rsyncd --daemon
Ah, I searched the rsync man page for 'inetd' and didn't find
anything... As it's about usage of rsync,
On Sat, Sep 11, 2004 at 12:21:43PM +0200, Paul Slootman wrote:
Ah, I searched the rsync man page for 'inetd' and didn't find
anything...
The --daemon option mentions inetd, and its text tells you to read the
rsyncd.conf manpage for more details. I think having the daemon-mode
specific details
On Fri 10 Sep 2004, Kick Claus wrote:
we would like to use rsync (2.6.2, manually patched and recompiled) in daemon
mode spawned by inetd (Solaris 5.8 environment).
Hmm, I don't know whether this is supported...
rsync stream tcp nowait publish /usr/bin/rsync rsyncd --daemon --port 1234 .
Date: Fri, 10 Sep 2004 13:35:30 +0200
From: Paul Slootman [EMAIL PROTECTED]
Hello Paul,
we would like to use rsync (2.6.2, manually patched and recompiled) in
daemon mode spawned by inetd (Solaris 5.8 environment).
Hmm, I don't know whether this is supported...
Hm, then let's simply wait for
On Fri, Sep 10, 2004 at 12:05:37PM +0200, Kick Claus wrote:
rsync stream tcp nowait publish /usr/bin/rsync rsyncd --daemon --port 1234 .
As indicated in the rsyncd.conf man page, the command should be this:
rsync stream tcp nowait publish /usr/bin/rsync rsyncd --daemon
(I changed
In particular, each additional file means rsync needs more memory.
If you find that you don't have enough memory (or that you start
to swap!!) the easy solution is to split the rsync job into two
or more pieces. If you've got any sort of directory hierarchy
this should be simple.
On Sun, Aug
To be short: yes, we have transferred larger quantities of files using rsync. Just make
sure you have enough memory on the machines that run it, at least 512MB free for
rsync's use, and it'll sync a million-file partition.
So if you're sure you only have around 100K files, you'll be fine (does not
On Mon, Aug 02, 2004 at 11:01:39AM -0600, Brashers, Bart -- MFG, Inc. wrote:
If it's easy, maybe this would be a good addition to rsync --stats. List
the size of the files deleted, and perhaps the net change in disk usage.
Yes, an addition like that sounds like a good idea to me. It will
On Mon, Jun 14, 2004 at 09:22:45AM -0400, Wallace Matthews wrote:
Could it be simply using rsync and using an older rsync that
happens to be installed and in the path??
That's the normal case if you don't use the --rsync-path=/PATH/rsync
option to tell rsync what program to run. It is easy to
[EMAIL PROTECTED] wrote:
hello
I looked in Google, the FAQ, etc. but didn't find an answer.
Sorry if I have overlooked something.
How can I authenticate via a host key without making a config in ~/.ssh?
Normally ssh supports this with ssh -i /keyfile. Is there any way to combine
it with rsync, like:
rsync -e ssh -i key ..etc
I had this problem trying to script an unattended backup (rsync 2.6.1
on cygwin).
I found that if you need to pass command line arguments to ssh you need
to quote the whole thing:
rsync --rsh='ssh -i key'
Using -e without the quotes, if I remember correctly, just tries to execute a
command called ssh -i key which, obviously,
rsync -e ssh -i key ..etc etc does not work
The best thing you can do is to use -vv to see what command rsync is
running and then try a similar command (e.g. use rsync --help instead
of the server command) to see what is going wrong with your ssh setup.
Also, avoid a path that requires
hello
a.) ssh with the key alone works fine (ssh -i key host command)
b.) I also tried the method from Braunsdorf (writing a shell script with the
ssh command, then rsync -e script)
maybe I'm just too stupid for the rsync commands; here is what I need:
rsync -e scriptname (content of script)
in the script:
ssh