You can do a hot backup by simply setting all the tablespaces in backup
mode (alter tablespace 'BLAA' begin backup).
Then you can rsync the database files and do an 'end backup' on each
tablespace after.
This will give you files that are consistent and the database will be
recoverable from tho
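The sequence above can be sketched roughly as follows. The tablespace names, datafile path, and "dest" host are placeholders I made up, not anything from the original post; on a real system you would feed the generated SQL to sqlplus.

```shell
# Hedged sketch of the hot-backup sequence: BEGIN BACKUP on each
# tablespace, rsync the datafiles, then END BACKUP. All names below
# are hypothetical placeholders.
TABLESPACES="SYSTEM USERS DATA"

backup_sql() {                     # $1 is BEGIN or END
  for ts in $TABLESPACES; do
    echo "ALTER TABLESPACE $ts $1 BACKUP;"
  done
}

backup_sql BEGIN                   # e.g. backup_sql BEGIN | sqlplus -s "/ as sysdba"
# rsync -av /u01/oradata/ dest:/backup/oradata/   # copy datafiles while in backup mode
backup_sql END
```

Remember to back up the archived redo logs as well; they are what makes the copied datafiles recoverable.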
I do a daily sync to about 20 locations of 2.4 million files. I think rsync is
very much 'professional' grade. I have done this sync of 2.4 million files both
as a single rsync of the tree as well as splitting the tree up into multiple
rsyncs. (essentially a 'for dir in *; do rsync $dir dest:blaa;
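A cleaned-up sketch of that per-directory split might look like this. SRC, DEST, and the leading echo safety net are my additions for illustration, not the original command:

```shell
# Hedged sketch: run one rsync per top-level directory instead of a
# single rsync over the whole tree. SRC and DEST are placeholders.
SRC=/data/tree
DEST='dest:/data/tree'

split_rsync() {
  for dir in "$SRC"/*/; do
    name=$(basename "$dir")
    echo rsync -a "$dir" "$DEST/$name/"    # drop the leading "echo" to really run it
  done
}

split_rsync
```

Splitting keeps each rsync's file list (and therefore its memory use) small, at the cost of one connection per directory.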
Wayne Davison wrote:
>
> On Tue, Apr 27, 2004 at 11:52:11AM -0600, Eric Whiting wrote:
> > ... but here is some representative data:
>
> Thanks for the confirming stats. Some questions/comments:
>
> You didn't mention what command you ran, so I'm curious if th
[EMAIL PROTECTED] wrote:
>
> Rsync version 2.6.1 has been released. It is primarily a performance
> release that requires less memory to run, makes fewer write calls to
> the socket (lowering the system CPU time), does less string copying
> (lowering the user CPU time), and also reduces the amoun
If you are moving data across a network then you need a network transport
mechanism. If you specify a [EMAIL PROTECTED]:dir in the source or destination then you
need a transport. ssh is the default transport in 2.5.7. If you want to use rsh
then you must specify -e rsh. (or another mechanism)
I
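The transport rules above, spelled out as commands; host and paths are placeholders, and none of these lines come from the original post:

```shell
# Choosing the remote transport; host/paths are hypothetical.
rsync -av src/ user@host:dir/                   # remote target: ssh is the default in 2.5.7
rsync -av -e rsh src/ user@host:dir/            # ask for rsh explicitly
rsync -av -e 'ssh -p 2222' src/ user@host:dir/  # any remote-shell command works with -e
rsync -av src/ host::module/                    # double colon: rsync daemon, no remote shell
```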
Wayne Davison wrote:
>
> On Mon, Mar 22, 2004 at 04:49:15PM -0700, Eric Whiting wrote:
> > rsync (2.5.[67]) --delete fails on dirs with the w bit cleared. (example below)
> > Rsync will sync a dir with w bit clear, but will not remove it with --delete.
>
> It'
I still get the same error with --force --delete.
There needs to be a chmod on dir before the files and dir can be deleted.
eric
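The workaround can be demonstrated locally; the directory and file names below are made up for illustration:

```shell
# Reproduce the situation: a directory with the w bit cleared cannot
# have its contents removed, so rsync --delete leaves it orphaned.
mkdir -p dest/ro_dir
touch dest/ro_dir/old_file
chmod a-w dest/ro_dir       # removing dest/ro_dir/old_file would now fail (as non-root)

# The manual workaround: restore the write bit first, then delete.
chmod u+w dest/ro_dir
rm dest/ro_dir/old_file
rmdir dest/ro_dir
```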
Tim Conway wrote:
>
> --force force deletion of directories even if
> not empty
>
> SunOS 5.8 Last change: 26 Jan 2003
rsync (2.5.[67]) --delete fails on dirs with the w bit cleared. (example below)
Rsync will sync a dir with w bit clear, but will not remove it with --delete.
This is not a big problem, but it will create situations where there are
'orphaned' files.
Has anyone else had this problem?
It looks li
2 things to do that will fix things...
1. Read man ssh and create public/private ssh keys with an empty passphrase.
This will let the rsync run without a password -- you can cron it and it will
just work... You can also use .rhosts and sync over rsh, but ssh with the keys
is a better solution.
2.
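Step 1 above might look like this. The key file name, destination host, and cron schedule are placeholders, and I generate a dedicated key rather than reuse the default one:

```shell
# Hedged sketch: a key pair with an empty passphrase (-N "") so cron
# jobs can ssh without prompting. Names below are hypothetical.
mkdir -p "$HOME/.ssh"
[ -f "$HOME/.ssh/id_rsa_rsync" ] || \
    ssh-keygen -q -t rsa -N "" -f "$HOME/.ssh/id_rsa_rsync"

# Then install the public key on the destination, e.g.:
#   ssh-copy-id -i ~/.ssh/id_rsa_rsync.pub user@dest
# ...and cron the sync (crontab -e), e.g. nightly at 2am:
#   0 2 * * * rsync -az -e 'ssh -i ~/.ssh/id_rsa_rsync' /data/ user@dest:/data/
```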
Kenny Gorman wrote:
>
> >>I am rsyncing 1tb of data each day. I am finding in my testing that
> >>actually removing the target files each day then rsyncing is faster than
> >>doing a compare of the source->target files then rsyncing over the delta
> >>blocks. This is because we have a fast link
jw schultz wrote:
>
> I was thinking more in terms of no block relocation at all.
> Checksums only match if at the same offset. The receiver simply
> discards (or never gets) info about blocks that are
> unchanged. It would just lseek and write with a possible
> truncate at the end.
This would
I've learned some good things from this discussion. Thanks.
Kenny, I have one concern/idea -- The original post says the 'disk is
fairly slow'. That is one bottleneck that should probably be examined a
little more. How fast are your disks? How fast is your network? An IDE
disk with DMA disabled mi
sync time but hurt in terms of network loading.
I think some have suggested different -B options for larger files as
well -- but I'm not sure about what might work best with oracle
datafiles -- probably a -B that is the same size as the db_block_size.
eric
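For Oracle datafiles that idea would look something like the line below; 8192 is just a common db_block_size, not a recommendation, so check your own instance first:

```shell
# Hypothetical example: match rsync's block size (-B) to the database
# block size so changed db blocks line up with rsync's checksum blocks.
# Paths and the 8192 value are placeholders; query db_block_size on the
# real instance (e.g. "show parameter db_block_size" in sqlplus).
rsync -av -B 8192 /u01/oradata/ dest:/backup/oradata/
```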
Eric Whiting wrote:
I think the
I think the -W option might do what you would have described here.
eric
Kenny Gorman wrote:
I am rsyncing 1tb of data each day. I am finding in my testing that
actually removing the target files each day then rsyncing is faster
than doing a compare of the source->target files then rsyncing
I have observed this same problem. Are you running --delete-after? I
assumed it might be related to that option.
eric
David Garamond wrote:
>
> our daily backup is done using the rdiff-backup tool, which in turn
> utilizes rsync/librsync to do the actual mirroring work.
>
> a few days ago
I noticed this behavior earlier. Thanks for the patch.
eric
Ruediger Oertel wrote:
>
> Hi,
>
> we ran into a little problem with rsync-2.5.5.
>
> Setup: you run rsync-2.5.5 as normal rsync over ssh
> (ie. not connecting to a rsync server). If you start
> such a rsync but interrupt the pullin
Trevor Marshall wrote:
>
> Lets see, oh... /proc/version
> "Linux version 2.4.10-4GB ([EMAIL PROTECTED]) (gcc version 2.95.3
> 20010315 (SuSE)) #1"
We had a lot of bad luck with that kernel (suse 7.3 right?) -- but it
was mainly reiserfs/nfs issues (links garbled, nfs hangs, filesystem
issues).
I still see 2.5.5 hangs where the destination rsync exits (for whatever
reason -- I don't see any errors) and the source waits on something
forever. The interesting thing about it is when I strace the child rsync
pid on the source side then rsync exits as it should. Strace must
introduce a timing
Dave North wrote:
>
> We have an interesting quandry here. When I'm rsync'ing my directory
> tree (100 directories each containing 1000 files) I see some strange
> results:
>
> All of these machines are on their own network segment (100t) in our QA
> lab
>
> Solaris->Solaris - time taken: 11m3
Some new data on my rsync hangs:
I run about 1500 rsync sessions over ssh daily. In the last 8 days that
adds up to about 12k rsync sessions. Of those 12k sessions, 10 are right
now sitting in a hung state. The rsync process on the destination has
exited, but both rsync processes on the source
Yes you should be concerned about this problem.
I suggest these things:
1. Try running without -z. Some versions of rsync (2.5.4?) had a libz
bug.
2. Better yet, upgrade both sides to 2.5.5 and retry.
3. Make sure your solaris box has the latest NFS patches.
eric
Michael Lachmann wrote:
>
I think I was one of the 3 people to see this problem.
Our setup was a Solaris 8 client writing to a Network Appliance NFS
server. We do our rsyncs direct to the box with the disk on it when we
can, but in the case of the Netapp storage we have to use NFS.
I guess I should be sure that the aut
nt away when he
> used another operating system. We speculated it may have been that the
> Sunos4 NFS implementation wasn't returning the proper error code.
>
> I haven't seen anybody else report problems with rsync producing files of
> nulls, but it's pretty disconcertin
"Granzow, Doug (NCI)" wrote:
>
> From what I've observed by running top while rsync is running, its memory
> usage appears to grow gradually, not exponentially. A rsync may take
> several hours to complete. (I have one running now that started over four
> hours ago. The filesystem contains 236
Martin Pool wrote:
> Has anybody tried 2.5.5? Did it work well?
I ran it last night on about 400 trees. It seemed to timeout improperly.
I'll have more data on this tomorrow. Timeout was set at 1000 and it
seemed to timeout more than I've seen before (about 50 of the trees
reported timeouts).
I'm syncing from a linux box (NAS disk) to a sun (NAS disk).
I just found a file on the destination sun with zeros from bytes 8192 to
32767. (the source file had lots of 'good' random bytes). The rest of
the file compares properly. Repeatedly running rsync to send the file
didn't fix it. I ran a
ng: (near initialization for `long_options[71].val')
gcc -I. -I. -g -O2 -DHAVE_CONFIG_H -Wall -W -c zlib/inftrees.c -o
zlib/inftrees.o
In file included from zlib/inftrees.c:395:
zlib/inffixed.h:13: warning: missing braces around initializer
zlib/inffixed.h:13: warning: (near initialization for
`fixed
ng?
>
> 2.5.2 has some serious problems, Eric. Try the latest development
> snapshot at
> rsync://rsync.samba.org/ftp/unpacked/rsync/
> or
> ftp://rsync.samba.org/pub/unpacked/rsync/
>
> - Dave Dykstra
>
>
> On Wed, Feb 06, 2002 at 11:33:43AM -0700,
Make that 2 of us who need to specify a large timeout.
I have found that I have to set the timeout to a large value to
get the rsyncs to run successfully. Leaving it at the default seemed to
cause timeout/hang problems. Of course I'm still running a 2.4.6dev
version. I had troubles with 2.
It seems similar to Ed Santiago's problem.
http://lists.samba.org/pipermail/rsync/2001-December/005511.html
This also seems similar to what I reported last month.
http://lists.samba.org/pipermail/rsync/2001-December/005628.html
I had to go back to 2.4.6dev+waynenohang. That works for me.
I run
Rsync is great. (What is cp? What is tar? Since I started using rsync I
forgot how to use those tools.) I sync over 1,000,000 files (50G) to
multiple destinations every day. Over medium speed links it takes about
3 hrs.
The default mode of operation creates a hidden (.file) on the
destination w
I have two rsync 2.5.1pre3 sessions hung right now. In this case it
appears that the processes at the destination have exited and the src is
still waiting for something.
Here are some more details:
SRC: solaris 2.7 (2G RAM 2 CPU)
DST: linux (one is 2.4.8, the other is 2.2.18)
Transport: ssh
I'm running 2.5.1pre3 and seeing lots of hangs as well. Under
2.4.6+Waynes_nohang, I didn't have trouble this bad before.
SRC: solaris 2.7, netapps nfs tree
DST: solaris 2.8, linux 2.[2,4].*
TRANSPORT: ssh
This setup has worked well for months before the upgrade to 2.5.1.pre3.
I have not tried th
Martin Pool wrote:
>
> On 30 Nov 2001, Thomas J Pinkl <[EMAIL PROTECTED]> wrote:
> > I'm seeing:
> >
> > bit length overflow
> > code 4 bits 6->7
> >
> > in the output of rsync 2.5.0 between two Red Hat Linux systems.
> > One is RH 6.1 (kernel 2.2.19-6.2.1, glibc 2.1.3-22), the other
> > is R
2.5.0 -- thanks for doing the new release with all the fixes. It appears
to be working fine.
I'm seeing a 'bit length overflow' warning to STDERR. (with -vv) It
doesn't appear to be an error -- zlib/trees.c seems to indicate that
this is a situation that is properly handled. -vv in older version
Dave Dykstra wrote:
>
> Note that his case is rather pathological because he's got over a million
> files in only 400 directories, so he must have an average of over 2500
> files per directory, which are very large directories. He's got about 65%
> of the files explicitly listed in his --include-
What is holding up 2.4.7?
2.4.6 frequently has these hang problems that are fixed in the cvs tree.
But linux distros keep shipping 2.4.6 and users keep having troubles. I
think we need to release a rsync 2.4.7.
eric
Jessica Koeppel wrote:
>
> If I use -v or -vv, the rsyncs will hang forever
I have a home box that I was worried about it perhaps having a rootkit
on it.
I used a few rsync commands to compare a known good system to the
suspect one. I think rsync was a good tool for this. I still have to do
other checking for a potential hack problem, but rsync got me off to a
good start
Ben Ricker wrote:
>
> I am using Rsync between a Redhat Linux box and an AIX RS/6000. We have
> a about 30gb of database we need to sync to a backup server. Sounds
> good, right? The problem is that Rsync is so slow when we do the
> initial dump. We have files that are 1 - 5gb. It takes around 14-2
I'd like to have that feature available.
eric
Martin Pool wrote:
>
> > i didn't see that there was a way to tell rsync to change ownership and
> > group assignment on all files transferred to a specific user. meaning
> > "transfer all these files to this remote machine and then chown them to
Search the mailing list for references to Wayne's no-hang patches -- or
get pre1 out of the cvs server and the hangs should go away.
eric
Charles Sprickman wrote:
>
> Hi,
>
> I've just started playing around with rsync (latest - 2.4.6) over ssh
> (openssh 2.9p2) on FreeBSD to sync up home dir
Michelle/Tim,
I occasionally get this message under 2.4.7pre1 -- I just call it a
network error and ignore it, but perhaps there is still a rsync protocol
problem. TCP is pretty good about retry/resend under bad network
conditions. Network hiccups shouldn't really be hitting us this often or
seve
There are 2 patches on this mailing list that fix the 2.4.6 hangs for
me. Check the archives and you should be able to find them.
I hope to see a 2.4.7 out soon that includes some or all of those hang
fixes.
eric
Steve Ladendorf wrote:
>
> I see there is quite a bit discussion on the list abo
Jurgen describes the behavior of some of my hangs -- as well as the
strace 'solution' to make it finish.
Wayne's patch fixed the hangs for me. Will this patch be incorporated
into the main tree?
eric
Wayne Davison wrote:
>
> On Wed, 13 Jun 2001, Jurgen Botz wrote:
> > I'm seeing the followin
John,
Wayne's fixes solved 2.4.6 hangs I had been seeing for a long time.
Actually I've been running 2.3.2 because the 2.4.6 hangs were killing
me... Now I'm slowly moving all my apps up to the 2.4.6+wayne patch
version of rsync. I hope his patch (or something similar) gets
included in the offici
I still have been unable to move to 2.4.6 for a similar reason --
hangs. I haven't ever detected the #files in a dir issue, but I do
still see hang problems in 2.4.6. (both solaris and linux) I see this
on localhost to localhost rsync's as well as rsyncs over ssh.
eric
[EMAIL PROTECTED] wrot
I have a similar setup and goal as Tim.
As another approach to the problem I've been pushing the sync's out to
the destinations rather than having the destinations pull the data. I
have 2G RAM on the source box (Solaris with a netapps disk) and I push
the data via rsync/ssh to destinations in par
I see the same problem on a linux box running without any network
involved. rsync -av dir1 dir2. (where dir[12] are both on local ide
disks)
I posted a few weeks ago regarding my 2.4.6 hang -- (not running
--deamon)
http://lists.samba.org/pipermail/rsync/2001-January/003552.html.
I found two '
You should be able to find 3 rsync processes. 1 on the source box and 2 on
the destination.
Do a truss on the pid of the 'hung' processes and report back what state
they are in.
Also a netstat -na showing the Queues on the socket (port 22) might be
helpful.
What version of rsync? What version of
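The debugging recipe above, spelled out; 12345 stands in for the real pid, and the tools shown are the Solaris variants mentioned in the thread:

```shell
# Hedged sketch of inspecting a hung rsync on Solaris; 12345 is a
# placeholder pid taken from the ps output.
ps -ef | grep '[r]sync'        # locate all three rsync processes
truss -p 12345                 # which syscall is the hung process stuck in?
/usr/proc/bin/pstack 12345     # user-level stack trace of the process
netstat -na | grep '\.22 '     # send/recv queues on the ssh connections
```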
I have a setup that is showing very predictable rsync hangs.
SETUP
-
System: Linux 2.4.0+, rsync 2.4.6, 1Ghz T-bird (No OC), 256M, 45G IDE, 10G
IDE
Rsyncing an old 3.6G vfat partition from the 10G disk to the 45G disk. The
sync runs great until after it finishes a certain file. Same file ev
On Wed, Jan 24, 2001 at 11:59:18AM -0500, John Stoffel wrote:
>
>
> Hi all,
>
> This is a followup to bug report 2779 on the rsync bug tracking web
> site, I'm also seeing the hang in waitid() on the master process when
> trying to do an rsync on a single host.
> Thanks,
> John
>John St
It is time to recompile rsync with -g and wait for another hang.
eric
(gdb) attach 17407
Attaching to process 17407
(no debugging symbols found)...done.
0xff215d08 in _poll ()
(gdb) n
Single stepping until exit from function _poll,
which has no line number information.
0xff1ce6dc in select ()
(
More info/questions: (hung rsync is 17407)
# /usr/proc/bin/pstack 17407
17407: /usr/local/bin/rsync -rptgoD --partial --delete-after -vv
--delete -e
ff215d08 poll (ffbefad0, 0, 14)
ff1ce6d4 select (ffbefad0, ff2369f4, ff2369f4, ff2369f4, ff2369f4,
0) + 298
0001b568 msleep (14, ffbef
> Eventually, though, writefd_unbuffered() will be called. The last
> recv_generator() log message will vary depending on how many files
> need updates (i.e. how many bytes are written to the buffer). The
> call to writefd_unbuffered() will trigger a call to select() with both
> a write mask (wi
This might not be the info needed for proper debug, but it is a start.
I changed back to 2.4.6 last night and got a hang on a Solaris/Solaris
rsync. This doesn't look like a hang as much as it looks like the
receiving end just died. I usually run with a --timeout 3600 to keep
the rsync moving alon
I like your debug detail and methodology.
I have the same problem with 2.4.6. I'm running 2.3.2 now and it
doesn't have this 'hang' problem. I think there is something still
not quite right in 2.4.6.
eric
Simon Lai wrote:
>
> Hi,
>
> I'm using rsync to keep two local 30G disk partitio
I understand the log-format option for the rsync command:
--log-format=FORMAT log file transfers using specified format
I understand the log option for the rsyncd daemon.
log file
What I want to do is log from the rsync command (not the daemon) to a
file. So far all I can get is my log fo
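One workaround for client-side logging with rsync of this vintage (before the client grew a log-file option) is to combine --log-format with shell redirection; the format string and paths below are placeholders:

```shell
# Hedged workaround: send per-file --log-format output to a file via
# redirection. %o = operation, %f = filename, %l = file length.
rsync -av --log-format='%o %f %l' src/ dest:dir/ >> /var/log/rsync.log 2>&1
```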