Re: Mini-HOWTO: Fixing rsync on Tiger (Mac OS X 10.4.x)

2006-03-22 Thread David Powell
It is just a performance issue.  However, it is unacceptable because of 
the number of files I am syncing.. millions.. sending all of these 
resource forks is just asking for trouble.  Using RsyncX I only 
transfer a few thousand files each session, but this bug transfers 
resource forks 24 hours a day.. and this is insane.


I would gladly keep using RsyncX except it has a tendency to get Stuck 
on Stupid and worry over a missing file or some such nonsense rather 
than moving on.


I have found another project that patches rsync; it still suffers from 
the same problem, but the author has implemented some novel workarounds. 
Still, the binary at onthenet is easy to install and seems to work 
pretty well.


http://www.onthenet.com.au/~q/rsync/

Q. wrote to me..

I know in your notes you say this should only occur when the last 
change time is more recent than the file's modification date, but 
the net effect is that every resource fork is copied on every sync.


Is this what you are talking about when you reference getattrlist? 
This seems more like Spotlight data.. but perhaps I am wrong.


Unfortunately, yes. When you use the unix stat() function call it will 
return an access time, a change time and a modification time. When you 
modify the extended attributes of a file, it changes the change 
time without touching the modification time. Unfortunately, the unix 
function call for setting these times doesn't allow you to set the 
change time, and will instead set it to the current time, hence the 
problem.
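
A quick way to see this from the shell (this assumes the BSD stat(1) that 
ships with OS X; the file name is arbitrary):

  touch testfile
  stat -f 'mtime=%m ctime=%c' testfile    # both timestamps start out equal
  touch -t 200601010000 testfile          # utimes() can set the mtime back...
  stat -f 'mtime=%m ctime=%c' testfile    # ...but the ctime is forced to "now"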


It appears that setattrlist() will allow setting a file's change 
time on an HFS+ volume, which would eliminate this issue; however, its 
syntax is somewhat complex, and I have not yet had a chance to work out 
how to use it, as my powerbook is currently in for repairs.  So it might 
be a little while before I have this problem fixed.


---

Apparently the RsyncX people use Apple-specific Carbon library calls 
to read and write data, which is how they avoid re-copying the resource 
forks.


David


J.D. Bakker wrote:


At 09:54 -0500 01-12-2005, David Powell wrote:

... ._resource forks being synced over and over despite the target 
file itself not moving..



Is this Problem #4 mentioned in http://www.lartmaker.nl/rsync/ , or 
something else ?



Missing fact...

I am attempting a link-dest based multiple backup that creates 
hard links for files that are unchanged and only transfers the changed 
files.  This implementation of rsync is not noticing the existing 
resource fork associated with the previous copy of the file, so it 
transfers the resource fork again.


I hope this makes sense.



It would help to know if you're doing a local copy or a networked one. 
Could you show us the full rsync command used ?


Are you experiencing data loss, or is this simply a performance issue ?

Regards,

JDB.



--
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Open file excluded in rsync backup

2006-03-22 Thread IMCC
hi  every one


i don't know whether rsync excludes open files while taking a backup or
not.

My problem is how to handle backing up open files (files which are open by
any process) with rsync.

Thanks 
Harshal
SPsoft


-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: Open file excluded in rsync backup

2006-03-22 Thread Tevfik Karagülle

A recipe for Windows 2003 systems (contributed by Rob Bosch) can be found at

http://www.itefix.no/phpws/index.php?module=faq&FAQ_op=view&FAQ_id=81

Rgrds Tev

 

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On 
 Behalf Of IMCC
 Sent: 22 March 2006 13:27
 To: rsync@samba.org
 Subject: Open file excluded in rsync backup
 
 hi  every one
 
 
 i dont know whether rsync excludes the open file  while 
 taking backup or not.?
 
 My problem is how to handle open file backup(the files which 
 are open by any process ) with rsync .?
 
 Thanks
 Harshal
 SPsoft
 
 
 -- 
 


-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


exclude open file in backup with rsync

2006-03-22 Thread IMCC
hi,
   I am working on a backup application,
   so I am studying rsync 2.6.6.

   I have to take a backup of the files on a drive, but
   I have to exclude those files on the drive which are open.

   I want to know whether rsync excludes open files
   while taking a backup,

   and how rsync identifies whether a file is open or not,
   i.e. what test rsync performs on a file.

  sunil
 SPSOFT

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Rsync 4TB datafiles...?

2006-03-22 Thread lsk


lsk wrote:

But I have tried various options including --inplace,--no-whole-file etc.,
for last few weeks but all the results show me removing the destination
server oracle datafiles and after that doing an rsync -vz from source is
faster than copying(rsyncing) over the old files that are present in
destination.
  

Please do try applying the patch in patches/dynamic_hash.diff to both
sides (well, it's probably only necessary for the sending machine, but
no matter) and making this check again. This patch is meant to address
precisely your predicament.

  Shachar
-- 
/// lsk  Hi, which issue does this patch address? Is it for --inplace
or for --no-whole-file transfer of oracle datafiles?

Where can I get this patch?

Thanks,
lsk.
--
View this message in context: 
http://www.nabble.com/Rsync-4TB-datafiles...--t1318624.html#a3534475
Sent from the Samba - rsync forum at Nabble.com.

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Rsync 4TB datafiles...?

2006-03-22 Thread lsk

Also, I use rsync version 2.6.5, protocol version 29. Does this
version include this patch (dynamic_hash.diff) or do we need to
install it separately?

At the destination server I use rsync version 2.6.6, protocol version 29;
anyhow, you said that doesn't matter.

Thanks,
lsk.
--
View this message in context: 
http://www.nabble.com/Rsync-4TB-datafiles...--t1318624.html#a3534654
Sent from the Samba - rsync forum at Nabble.com.

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Multiplexing overflow

2006-03-22 Thread Wayne Davison
On Fri, Mar 17, 2006 at 07:16:19PM +0300, Косов Евгений wrote:
 multiplexing overflow 101:7104843 [receiver]

This indicates that the protocol got corrupted somehow because that is
saying that the receiver got a MSG_DELETED message with a 7104843-byte
buffer attached (which rsync is telling you is too big), but you're not
even deleting files.  What version of rsync are you using?

..wayne..
-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Multiplexing overflow

2006-03-22 Thread Косов Евгений

I'm using rsync v.2.6.6 (with protocol version 29).

Wayne Davison wrote:

On Fri, Mar 17, 2006 at 07:16:19PM +0300, Косов Евгений wrote:

multiplexing overflow 101:7104843 [receiver]


This indicates that the protocol got corrupted somehow because that is
saying that the receiver got a MSG_DELETED message with a 7104843-byte
buffer attached (which rsync is telling you is too big), but you're not
even deleting files.  What version of rsync are you using?

...wayne..

--
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Improved diagnostics patch

2006-03-22 Thread Wayne Davison
On Mon, Feb 20, 2006 at 08:03:30PM +0700, David Favro wrote:
 Here's a small patch that gives better diagnostics on the daemon side
 if it fails to start up due to inability to create or bind the socket.

Thanks for the patch.  I've considered it and your misgivings, and have
come up with the attached revised version that only outputs the extra
socket()/bind() failure messages in the case of a total failure, or when
the daemon was run with at least -vv.  See what you think.

..wayne..
--- socket.c	7 Mar 2006 08:46:29 -0000	1.118
+++ socket.c	22 Mar 2006 17:47:03 -0000
@@ -333,9 +333,9 @@ static int *open_socket_in(int type, int
 			   int af_hint)
 {
 	int one = 1;
-	int s, *socks, maxs, i;
+	int s, *socks, maxs, i, ecnt;
 	struct addrinfo hints, *all_ai, *resp;
-	char portbuf[10];
+	char portbuf[10], **errmsgs;
 	int error;
 
 	memset(&hints, 0, sizeof hints);
@@ -353,17 +353,25 @@ static int *open_socket_in(int type, int
 	/* Count max number of sockets we might open. */
 	for (maxs = 0, resp = all_ai; resp; resp = resp->ai_next, maxs++) {}
 
-	if (!(socks = new_array(int, maxs + 1)))
+	socks = new_array(int, maxs + 1);
+	errmsgs = new_array(char *, maxs);
+	if (!socks || !errmsgs)
 		out_of_memory("open_socket_in");
 
 	/* We may not be able to create the socket, if for example the
 	 * machine knows about IPv6 in the C library, but not in the
 	 * kernel. */
-	for (resp = all_ai, i = 0; resp; resp = resp->ai_next) {
+	for (resp = all_ai, i = ecnt = 0; resp; resp = resp->ai_next) {
 		s = socket(resp->ai_family, resp->ai_socktype,
 			   resp->ai_protocol);
 
 		if (s == -1) {
+			int r = asprintf(&errmsgs[ecnt++],
+				"socket(%d,%d,%d) failed: %s\n",
+				(int)resp->ai_family, (int)resp->ai_socktype,
+				(int)resp->ai_protocol, strerror(errno));
+			if (r < 0)
+				out_of_memory("open_socket_in");
 			/* See if there's another address that will work... */
 			continue;
 		}
@@ -385,6 +393,10 @@ static int *open_socket_in(int type, int
 		/* Now we've got a socket - we need to bind it. */
 		if (bind(s, resp->ai_addr, resp->ai_addrlen) < 0) {
 			/* Nope, try another */
+			int r = asprintf(&errmsgs[ecnt++],
+				"bind() failed: %s\n", strerror(errno));
+			if (r < 0)
+				out_of_memory("open_socket_in");
 			close(s);
 			continue;
 		}
@@ -396,6 +408,15 @@ static int *open_socket_in(int type, int
 	if (all_ai)
 		freeaddrinfo(all_ai);
 
+	/* Only output the socket()/bind() messages if we were totally
+	 * unsuccessful, or if the daemon is being run with -vv. */
+	for (s = 0; s < ecnt; s++) {
+		if (!i || verbose > 1)
+			rwrite(FLOG, errmsgs[s], strlen(errmsgs[s]));
+		free(errmsgs[s]);
+	}
+	free(errmsgs);
+
 	if (!i) {
 		rprintf(FERROR,
 			"unable to bind any inbound sockets on port %d\n",
-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html

Re: Rsync 4TB datafiles...?

2006-03-22 Thread Linus Hicks

Paul Slootman wrote:

On Tue 21 Mar 2006, lsk wrote:


I don't know how it would work if we do rsync with the --files-from option?


I'm not sure how rsync behaves when confronted with a network problem
during a session, so I won't give an answer to that.
However, doing individual files sounds reasonable, so make it a loop:

 while read filename; do rsync -vz $filename destser:$filename; done < dbf-list


Also, rsync gurus, would you suggest which is the fastest way to transfer
this 4 TB of data? Any suggestions would be of great help.


I'd recommend doing --inplace, as chances are that data won't move
within a file with oracle data files (so it's not useful to try to find
moved data), and copying the 4TB to temp. files every time could become
a big timewaster. Also the -t option could be handy, not all files
change all the time IIRC.
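
Putting that advice together, the invocation would look something like this
(hostname and paths are examples only):

  rsync -vzt --inplace --no-whole-file /u01/oradata/ destserver:/u01/oradata/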


The above remark about not being useful to try to find moved data provoked an 
idea. But my understanding of --inplace is apparently different from yours. I 
thought --inplace only meant that the destination file would be directly 
overwritten, not that it would turn off any of the optimizations for finding 
moved data.


It would be useful (I think) on a fast network to be able to turn off those 
optimizations, and only compare blocks located at the same offset in source and 
destination. If that is not how --inplace works, I wonder if that would be a 
performance win.
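
As far as I know there is no option that restricts matching to same-offset 
blocks, but -W (--whole-file) already lets you turn the delta algorithm off 
entirely, which is often the faster choice when the network is faster than 
the disks (hostname and paths are examples only):

  rsync -avW /u01/oradata/ destserver:/u01/oradata/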


Linus
--
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Rsync 4TB datafiles...?

2006-03-22 Thread Shachar Shemesh
lsk wrote:

Also I use the rsync version rsync  version 2.6.5  protocol version 29 does
this version include this patch dynamic_hash.diff or do we need to
install it seperately.
  

Sorry. You will need to get the 2.6.7 sources, and then apply the patch
yourself and compile rsync.
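
For the record, the usual sequence is something like the following (this
assumes the standard tarball layout; adjust the paths and version as needed):

  tar xzf rsync-2.6.7.tar.gz
  cd rsync-2.6.7
  patch -p1 < patches/dynamic_hash.diff
  ./configure && make && sudo make install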

Please do report back here your results. This patch is a result of a lot
of theoretical work, but we never got any actual feedback on it.

   Shachar
-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


connection unexpectedly closed while transfering a 133M file

2006-03-22 Thread Eugene Kosov

Hi, All!

I've got some errors while trying to sync 2 directory trees.



[EMAIL PROTECTED] ~] rsync -azv way:/www/sin/ /www/sin/
receiving file list ... done
./
[snip]
docs/members/popol/pop0207.con
rsync: connection unexpectedly closed (275727980 bytes received so far) 
[receiver]

rsync error: error in rsync protocol data stream (code 12) at io.c(443)


[EMAIL PROTECTED] ~] rsync --version
rsync  version 2.6.6  protocol version 29
[snip]
Capabilities: 64-bit files, socketpairs, hard links, symlinks, batchfiles,
  inplace, IPv6, 32-bit system inums, 64-bit internal inums

[EMAIL PROTECTED] ~] uname -smr
FreeBSD 4.11-STABLE i386

[EMAIL PROTECTED] ~] ssh way du -h -d 0 /www/sin/
838M    /www/sin/


The size of docs/members/popol/pop0207.con is about 133M. I think this 
should not be a problem for rsync. I've tried to google the error message, 
but with no luck...


Can anyone tell me what's wrong?

Any help will be greatly appreciated.

--
Regards,
Eugene Kosov
--
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: installing/updating rsync 2.6.7 on os x

2006-03-22 Thread Banana Flex

Don't replace rsync on a Mac OS X system:
the stock rsync-2.6.7 release built from source does not have the
-E (--extended-attributes) option.
This is important; without it you have no way to copy a file's
metadata if you use any other release of rsync.


The version of rsync included in the system has been modified by Apple.
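
For example, with the Apple-supplied rsync on Tiger something like this 
carries the resource forks and other metadata along (paths are examples only):

  /usr/bin/rsync -aE /path/to/source/ /path/to/dest/

A stock rsync built from the samba.org sources has no option that copies 
this Mac-specific metadata.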

if you want a fresh copy of rsync, use this configuration:

banana:~ banana$ ./configure --srcdir=. --prefix=/usr --exec-prefix=/usr \
    --bindir=/usr/bin --datadir=/usr/share --sysconfdir=/etc \
    --localstatedir=/var --libdir=/usr/lib --includedir=/usr/include \
    --mandir=/usr/man --build=powerpc-apple-darwin8 \
    --host=powerpc-apple-darwin8 --target=powerpc-apple-darwin8


banana:~ banana$ make
banana:~ banana$ sudo make install

good luck

banana


On Mar 15, 2006, at 6:58 PM, Simon Margulies wrote:


I'm trying to update my rsync on os x 10.4.5.

downloaded the latest rsync-2.6.7.tar.gz
as described in the configure --help, I ran ./configure and the
output tells me that everything works out fine.


then, running rsync (in /usr/local/bin) in bash, the output still
tells me it is: rsync  version 2.6.0  protocol version 27



what am I doing wrong?




thanks


Simon
--
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


--
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: exclude open file in backup with rsync

2006-03-22 Thread Matt McCutchen
On Wed, 2006-03-22 at 19:27 +0530, IMCC wrote:
I have to take a backup of files on a drive But 
I  have to exclude those file in a drive which are open. 
 
I want to know  whether rsync exclude open files
while taking backup ? .

Rsync sends all files without checking whether they are open.  If a file
is being read concurrently, rsync will send it correctly.  If a file is
being written concurrently, rsync will transfer each piece of the file
as it existed at the moment rsync read that piece.  That means you can
potentially get a combination of new and old data.  _If_ the concurrent
writing continues after the second during which rsync stats the source
file, the source file will be newer than the destination file and rsync
will fix the destination file when you next run it.

You might be able to get your operating system to enforce mandatory
locking on the files in question, in which case if the writer takes a
write lock, rsync's reading will be delayed until the writing finishes.
To simply exclude open files, you could run fuser on all the files and
translate the results into exclude rules, but files might open and close
after rsync starts.
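
If you do want to try the fuser route, here is a rough sketch (this assumes 
the Linux/psmisc fuser; /data and backuphost are placeholders, and files can 
still be opened or closed after the list is built):

  : > /tmp/open-files.txt
  find /data -type f | while read -r f; do
      fuser -s "$f" 2>/dev/null && printf '%s\n' "${f#/data}" >> /tmp/open-files.txt
  done
  rsync -a --exclude-from=/tmp/open-files.txt /data/ backuphost:/backup/data/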
-- 
Matt McCutchen
[EMAIL PROTECTED]
http://hashproduct.metaesthetics.net/

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Rsync help needed...

2006-03-22 Thread lsk


 And a performance question: would it be faster to pass the complete list
 of 
 datafiles to rsync in one fell swoop, for instance using --files-from
 rather 
 than running rsync individually on each one?

It would be somewhat faster to pass the entire list because you incur
the overhead of setting up the rsync process triangle once, not for
every file.  Furthermore, the rsync protocol is pipelined.  If you have
a network with high bandwidth but considerable latency, calling rsync
once will take advantage of the pipelining while calling it for each
file will wait for several network round trips per file.
-- 
Matt McCutchen
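
For concreteness, the single-invocation form being described is something 
like this (hostname and list path are examples only):

  rsync -az --files-from=/tmp/dbf-list / destserver:/backup/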

// lsk - I would like to do rsync at the disk level by using the
--files-from=FILE option. But the problem is what happens if the network
connection fails: the whole rsync will fail, right? One advantage of doing
rsync file by file is that, if the network connection fails and comes back
alive, the loop will continue from the next file in the list, even though
the files attempted while the connection was down will have failed.

Matt, do you know how rsync would behave when I do rsync at the disk level,
transferring all files using the --files-from option, and the network
connection fails? I need to transfer files that are on 40 drives (about
16000 files). I am trying to find the best possible solution to transfer
them as fast as I can without any issues.
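
One simple way to keep a single --files-from run and still cope with a
dropped connection is to just re-run rsync until it exits cleanly; files
that already made it across are skipped by the quick check on the next
pass (hostname and paths are examples only):

  until rsync -az --files-from=/tmp/dbf-list / destserver:/backup/; do
      echo "rsync exited with status $?; retrying in 60 seconds" >&2
      sleep 60
  done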

Thanks,
lsk.


--
View this message in context: 
http://www.nabble.com/Rsync-help-needed...-t1170765.html#a3544207
Sent from the Samba - rsync forum at Nabble.com.

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


error in protocol stream

2006-03-22 Thread David Bear
I am trying to use the syntax:

rsync -av -e "ssh -l ssh-user" [EMAIL PROTECTED]::module /dest

found at http://rsync.samba.org/ftp/rsync/rsync.html

but am getting the following:

rsync -av -e "ssh -l sshuser" rhost.asu.edu::home uc-sirc1/home/
rsync: connection unexpectedly closed (0 bytes received so far)
[receiver]
rsync error: error in rsync protocol data stream (code 12) at
io.c(434)

There is an rsync daemon running on rhost.asu.edu and it has a
module named home. This is a file server box with only two interactive
logins allowed, both administrative, so there is no rsync username
required to connect to the module.

I have public key auth set up with rhost as well -- so ssh works
without any password prompting.

I have also tried the syntax:

rsync -av --rsh=ssh rhost.asu.edu::home uc-sirc1/home/
rsync: connection unexpectedly closed (0 bytes received so far)
[receiver]
rsync error: error in rsync protocol data stream (code 12) at
io.c(434)

but you see the result is the same.

I have tried using the tunnel method described at
http://www.samba.org/rsync/firewall.html

using syntax like this:
ssh -fN -l middle_user -L 8873:target:873 middle

except that I don't have a middle system. My script does this:

/usr/bin/ssh -fN -L 8730:localhost:873 [EMAIL PROTECTED]
/usr/bin/rsync -av rsync://localhost:8730/home uc-sirc1/home/
/bin/kill -9 `/usr/bin/pgrep -lf -u natjohn1 ssh | grep [EMAIL PROTECTED] | cut -f1 -d' '`

This works... except that for some reason the rsync daemon on the
remote host dies --- occasionally... 

-- 
David Bear
phone:  480-965-8257
fax:480-965-9189
College of Public Programs/ASU
Wilson Hall 232
Tempe, AZ 85287-0803
 Beware the IP portfolio, everyone will be suspect of trespassing
-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


feature request: generate symlink shadow tree

2006-03-22 Thread russell
I was wondering if anyone has used rsync to generate forests of
symlinks which point
to a specified hierarchy.

for background, see:

http://groups.google.com/group/comp.lang.perl.misc/browse_thread/thread/91598a037e22ce8f/d655d9b6bfe36beb?lnk=st&q=spectral+forest+symlinks&rnum=1#d655d9b6bfe36beb

It would be nice to see this feature in rsync, specifically I'd like
to use it with a --delete so stale symlinks would be pruned in the
generated forest.
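
For what it's worth, a crude approximation is already possible outside
rsync with GNU cp, although it has nothing like --delete to prune stale
links (the source path must be absolute; paths are examples only):

  cp -Rs /absolute/path/to/source /path/to/shadow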

regards,
Russell.
--
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Does rsync implement a file locking mechanism

2006-03-22 Thread imcc


hi,

 I am currently working on a backup application,
 so I am studying rsync 2.6.6.

 Since rsync takes a backup of all files, whether they are open or not,
 I need to know whether rsync implements a file locking mechanism,
 that is, shared and exclusive locks, when opening files.

Thanks,

 Sunil

 Spsoft

-- 
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html

CVS update: rsync/patches

2006-03-22 Thread Wayne Davison

Date:   Wed Mar 22 09:21:46 2006
Author: wayned

Update of /data/cvs/rsync/patches
In directory dp.samba.org:/tmp/cvs-serv26927

Modified Files:
acls.diff 
Log Message:
Improved the opening comments.


Revisions:
acls.diff   1.114 => 1.115

http://www.samba.org/cgi-bin/cvsweb/rsync/patches/acls.diff?r1=1.114&r2=1.115
___
rsync-cvs mailing list
rsync-cvs@lists.samba.org
https://lists.samba.org/mailman/listinfo/rsync-cvs


CVS update: rsync/patches

2006-03-22 Thread Wayne Davison

Date:   Wed Mar 22 09:24:41 2006
Author: wayned

Update of /data/cvs/rsync/patches
In directory dp.samba.org:/tmp/cvs-serv28788

Modified Files:
adaptec_acl_mods.diff 
Log Message:
Fixed a failing hunk.


Revisions:
adaptec_acl_mods.diff   1.4 => 1.5

http://www.samba.org/cgi-bin/cvsweb/rsync/patches/adaptec_acl_mods.diff?r1=1.4&r2=1.5
___
rsync-cvs mailing list
rsync-cvs@lists.samba.org
https://lists.samba.org/mailman/listinfo/rsync-cvs


CVS update: rsync

2006-03-22 Thread Wayne Davison

Date:   Wed Mar 22 17:48:59 2006
Author: wayned

Update of /data/cvs/rsync
In directory dp.samba.org:/tmp/cvs-serv8518

Modified Files:
socket.c 
Log Message:
If open_socket_in() fails, we now log the reasons why.


Revisions:
socket.c    1.118 => 1.119
http://www.samba.org/cgi-bin/cvsweb/rsync/socket.c?r1=1.118&r2=1.119
___
rsync-cvs mailing list
rsync-cvs@lists.samba.org
https://lists.samba.org/mailman/listinfo/rsync-cvs