On Wed, Jan 24, 2001 at 12:05:02AM -0800, Harry Putnam wrote:
Anyone here know if redhat linux updates can be rsynced?
If so, is it necessary to have rsh installed?
I guess what I really need is to see the commands necessary to connect
to a redhat `updates' ftp site with rsync. If it is
On Wed, Jan 24, 2001 at 11:11:32AM +1100, Kevin Saenz wrote:
Ok I have just inherited this system.
Please forgive my lack of understanding.
I believe that rsync is running in --daemon mode
the version of rsync we are using is 2.4.6
also if this helps we are running rsync using the
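For anyone else inheriting a daemon-mode setup: the daemon is normally driven by an /etc/rsyncd.conf. The sketch below is a minimal, hypothetical example (the module name, path, and comment are invented for illustration, not taken from Kevin's system):

```
# /etc/rsyncd.conf -- minimal sketch; module name and path are examples
pid file = /var/run/rsyncd.pid
log file = /var/log/rsyncd.log

[backup]
    path = /export/backup
    comment = nightly sync area
    read only = no
```

Clients would then reach the module as `rsync host::backup`, with no rsh involved.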
On Wed, Jan 24, 2001 at 11:59:18AM -0500, John Stoffel wrote:
Hi all,
This is a followup to bug report 2779 on the rsync bug tracking web
site, I'm also seeing the hang in waitid() on the master process when
trying to do an rsync on a single host.
Basically, I've got a server with
Anyone here know if redhat linux updates can be rsynced?
If so, is it necessary to have rsh installed?
I guess what I really need is to see the commands necessary to connect
to a redhat `updates' ftp site with rsync. If it is even possible.
I have a script that I run with cron
On Wed, Jan 24, 2001 at 11:59:18AM -0500, John Stoffel wrote:
Hi all,
This is a followup to bug report 2779 on the rsync bug tracking web
site, I'm also seeing the hang in waitid() on the master process when
trying to do an rsync on a single host.
snip
Thanks,
John
John Stoffel
Dave Other people have reported similar experiences but nobody has
Dave pointed to a problem in rsync; the problem is more likely to be
Dave in NFS on the NetApp or Solaris machines. I believe most NFS
Dave traffic goes over UDP, but do you happen to know if it is using TCP?
Dave We have seen many
Hi Listers,
I hope this posting qualifies for your acceptance.
I'm working on a Korn shell script that uses rsync to synchronize several Sun
hosts running Solaris 2.7.
Below is the error message that I get. I'm not sure if there is a log file
that can provide more information, but I checked
Quick follow to my previous posting today.
I'm now using the -W flag on 2.4.6 again and it just hung. Luckily a
'kill -HUP' to the process it's waiting on will do the trick in terms
of killing rsync cleanly. But it also means I have to sit and watch
the damn thing do its work and wait
"Cameron" == Cameron Simpson [EMAIL PROTECTED] writes:
Cameron The other day I was moving a lot of data from one spot to
Cameron another. About 12G in several 2G files. Anyway, I
Cameron interrupted the transfer because I'd chosen a fairly slow
Cameron way of doing it, and
On Wed, Jan 24, 2001 at 03:48:06PM -0500, John Stoffel wrote:
...
or it doesn't have a good heuristic that says:
if I don't get *any* info after X seconds, just die
where X would be something like 900 or 1200 seconds, which seems like
a reasonable number.
Have you tried --timeout?
-
Kevin Saenz [[EMAIL PROTECTED]] writes:
I guess that might be the case, but there is one question left to ask:
the total number of files that we rsync has not changed. Why would this task
cause problems all of a sudden?
If it's not the per-file overhead adding up, have you suddenly picked
up a huge file
Cameron Simpson [[EMAIL PROTECTED]] writes:
| Cameron The other day I was moving a lot of data from one spot to
| Cameron another. About 12G in several 2G files. [...]
| Cameron so I used rsync so that its checksumming could speed past
| Cameron the partially copied file. It