Joe Ruby wrote:
I'm trying to do a simple rsync:
rsync -av [EMAIL PROTECTED]:/backup .
But a number of files in /backup are readable only by
root, and hence rsync gives these errors:
rsync: send_files failed to open
"/backup/etc/mail/virtusertable.db": Permission denied (13)
Since root login i
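A common workaround here (a sketch, assuming the remote account has passwordless sudo rights for rsync; "user@backuphost" stands in for the redacted address above) is to elevate only the remote rsync process with --rsync-path:

```shell
# Run the remote side of the transfer under sudo so it can read
# root-only files, without logging in as root over ssh.
rsync -av --rsync-path="sudo rsync" user@backuphost:/backup .
```

This needs a sudoers entry on the remote machine allowing the account to run rsync without a password prompt.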
I have included the rsync maillist address in the reply as you are more likely
to get useful responses by posting to the list.
[EMAIL PROTECTED] wrote:
Linus,
started the massive transfer based on the historical timings, but now I am facing lots of problems in the transfer... I am getting lot
Paul Slootman wrote:
On Wed 22 Mar 2006, Linus Hicks wrote:
Paul Slootman wrote:
I'd recommend doing --inplace, as chances are that data won't move
within a file with oracle data files (so it's not useful to try to find
moved data), and copying the 4TB to temp. files every time
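The recommendation above can be sketched as follows (paths and host are placeholders, not from the thread):

```shell
# --inplace rewrites changed blocks directly in the destination file
# instead of building a full-size temporary copy of each multi-GB
# datafile and renaming it into place.
rsync -av --inplace /u01/oradata/ user@standby:/u01/oradata/
```

Note that --inplace leaves the destination file inconsistent while a transfer is running, so nothing should read it mid-sync.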
Paul Slootman wrote:
On Tue 21 Mar 2006, lsk wrote:
I don't know how it would work if we do rsync with the --files-from option?
I'm not sure how rsync behaves when confronted with a network problem
during a session, so I won't give an answer to that.
However, doing individual files sounds rea
Carson Gaspar wrote:
--On Friday, March 03, 2006 9:21 AM -0500 Linus Hicks
Please configure your email client to not quote email addresses...
wrote:
This is certainly not true for the source machine. It typically has 70gb
free (it's still running a 32-bit Oracle database server)
Wayne Davison wrote:
On Fri, Mar 03, 2006 at 09:21:25AM -0500, Linus Hicks wrote:
I'm transferring one file, which is obvious from my command line. Is the
FAQ incorrect?
The FAQ is incomplete in how the size of the file can affect the sender's
memory. If the destination file alre
Linus Hicks wrote:
Wayne Davison wrote:
On Thu, Mar 02, 2006 at 02:07:14PM -0500, Linus Hicks wrote:
I do not understand the exceedingly long times shown in the last two
runs.
Since the user/sys CPU time didn't also mushroom, I would suggest that
you check to see if your system ran out of
Wayne Davison wrote:
On Thu, Mar 02, 2006 at 02:07:14PM -0500, Linus Hicks wrote:
I do not understand the exceedingly long times shown in the last two
runs.
Since the user/sys CPU time didn't also mushroom, I would suggest that
you check to see if your system ran out of free memory and st
Here's my contribution to information on performance. There are two different
cases. The first is a 1.6gb file that has a low volume of updates. The second
case is a 4gb file that has a high volume. All are non-local transfers. I do not
understand the exceedingly long times shown in the last two r
Matt McCutchen wrote:
On Mon, 2006-02-27 at 06:58 -0800, lsk wrote:
Could you give an example with syntax for rsync using file
option "--files-from=FILE".
If my-list in the current directory contains
a
b
b/c
b/d
b/d
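Putting a list like the one above to use might look like this (source and destination paths are placeholders):

```shell
# my-list holds one path per line, relative to the source root.
printf '%s\n' a b b/c b/d > my-list

# --files-from reads that list; implied parent directories (like b
# for b/c) are created as needed, and -a's recursion is disabled so
# only the listed paths are transferred.
rsync -av --files-from=my-list /source/root/ user@host:/dest/root/
```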
Matt McCutchen wrote:
On Fri, 2006-02-24 at 18:40 -0500, Linus Hicks wrote:
I did something similar to what lsk is doing a few months back, I believe using
rsync 2.6.5. I wrote a script to query the database for all the datafiles and
rsync'ed them individually by specifying the full pa
Matt McCutchen wrote:
On Fri, 2006-02-24 at 11:08 -0800, lsk wrote:
/// lsk:- Thanks for the clarification Wayne, in my case no one
would be allowed to use the destination file until the process is
complete. As soon as my destination server is upgraded to the newer
version of rsync whic
Matt McCutchen wrote:
On Wed, 2006-02-22 at 11:43 -0800, lsk wrote:
Currently I use "rsync -czv" (-c for checksum).
If each data file's first few bytes ("header information") change
between rsync transfers, then --checksum buys you nothing. Normally
rsync will skip transferring a file if the
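In other words, rsync's default quick check (size plus modification time) already decides whether a file is transferred; -c only adds a full read of both copies first. A sketch with placeholder paths:

```shell
# Default quick check: a file is skipped only if size AND mtime match,
# so a changed header already forces a transfer without -c.
rsync -zv /source/file.dbf user@host:/dest/

# --checksum instead reads every byte of both copies before deciding:
# far more I/O, and no savings when the files are known to differ.
rsync -czv /source/file.dbf user@host:/dest/
```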
John Gallagher wrote:
If you want to handle errors from rsync in your shell script,
then remove the "-e" and test for errors after your call to rsync.
Linus
Let me start by saying my shell scripting skills are very weak, as if that
were not already apparent.
I understand the -e will exit wh
René Rebe wrote:
Hi,
On Wednesday 11 January 2006 04:28, John Gallagher wrote:
The problem is that I have no clue what to do with this and or how to make
it work with my script.
If you want hide errors, remove the -e from your sh invocation or add || true
at the end of the rsync invocation
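Both suggestions can be sketched together (paths are placeholders; the `$?` and `|| true` behavior is standard POSIX shell, not specific to rsync):

```shell
#!/bin/sh
# Option 1: no "set -e"; inspect rsync's exit status explicitly.
rsync -av /source/ /dest/
status=$?
if [ "$status" -ne 0 ]; then
    echo "rsync exited with code $status" >&2
    # retry, log, or continue as appropriate
fi

# Option 2: keep "set -e" but neutralize a nonzero exit on this
# one command, so a failed transfer does not kill the script.
rsync -av /source/ /dest/ || true
```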
Harish wrote:
Hi everyone,
I want to back up my database using log files.
1. Is it possible to backup database using rsync?
2. Can it copy redo log file which are open?
3. It has any special feature to handle redo log files of database while
copying?
If you are talking about for instance, a
Ronan Guilfoyle wrote:
I'm going a little further than that;
The script backs up all open files on the server, but I sync them
directly to a standby server with Exchange installed (services are shut
down to allow me to write to the databases).
The same scripts also transfer SQL databases and
Sameer Kamat wrote:
Hello,
I am synchronizing one ~15GB file over the network. This file from
the previous day exists on the destination and I synchronized today's
file over. This is the output.
Number of files: 1
Number of files transferred: 1
Total file size: 15919685632 bytes
Total
Stefan Nehlsen wrote:
On Mon, Sep 12, 2005 at 09:36:27AM -0600, Kevin Stussman wrote:
rsync will have a second try if this happens and I think it will warn.
This seems like a waste of resources to me. Why not query V$ARCHIVE_LOG?
From the manual:
This view displays archived log information
Poe, David wrote:
We have nearly 200 GB of data in a production Oracle database broken up
into about 100 files of 2 GB each. The database incurs a 5% change per week
in the form of new data, with no modifications or deletions. I need to copy
this data from one mount point to another then bring up the
Wayne Davison wrote:
On Mon, Aug 29, 2005 at 02:24:08PM -0400, Linus Hicks wrote:
Mainly, it was apparently defaulting to using whole-file mode
If you're doing a local copy, --whole-file mode is *much* faster. Using
--no-whole-file doubles your disk I/O, which is only a good thing if
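The point about local copies, as a sketch (paths are placeholders):

```shell
# For same-machine copies rsync enables --whole-file by default:
# the delta algorithm would read the destination file as well as
# write it, doubling disk I/O with no network traffic to save.
rsync -av --whole-file /u01/oradata/ /u02/oradata/

# Forcing the delta algorithm locally is almost always slower:
rsync -av --no-whole-file /u01/oradata/ /u02/oradata/
```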
We used rsync 2.6.3 on a couple of Solaris 8 machines to update an Oracle
database from one machine to another. Here is the procedure I used:
The source database was up and running so this operation was similar to doing a
hot backup. I queried the source database for a list of tablespace names,