Re: Long time needed for "Building file list" Any suggestions ?

2004-03-22 Thread Jim Salter
No.  Rsync has to build a list of *every single file in the filesystem*, 
along with - at a minimum - last modified datestamp.  Rsync needs to be 
able to propagate deletions if necessary, and to do that, you have to 
compare the entire list of all files (not specifically excluded) in the 
specified section on both volumes, not just the changed ones.

Also keep in mind that while the find / -ctime trick lets YOU assume 
that the other end is a mirror and that you know exactly when it was 
last synchronized, rsync knows no such thing.  Rsync *produces* a 
mirror whether or not it started out with one.

Jim Salter

But isn't building this exact file list what an ordinary call to rsync 
is supposed to do (when not forcing checksum calculation)? So why is 
rsync so much slower than find?

/Greger

Tim Conway wrote:

Good idea
find / -ctime -1h |rsync -a --files-from=- / destination
No perl needed.  You might want mtime instead, though.
 

 

--
To unsubscribe or change options: http://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


Re: Long time needed for "Building file list" Any suggestions ?

2004-03-22 Thread Greger Cronquist
But isn't building this exact file list what an ordinary call to rsync 
is supposed to do (when not forcing checksum calculation)? So why is 
rsync so much slower than find?

/Greger

Tim Conway wrote:

Good idea
find / -ctime -1h |rsync -a --files-from=- / destination
No perl needed.  You might want mtime instead, though.
 

 



Re: Why does my cwrsync try to load ssh?

2004-03-22 Thread Jim Salter
It's trying to fire up ssh because your target has a single colon in it.

server:/path/to/stuff means "fire up ssh or rsh and make me a tunnel to 
server"

server::module/path/to/stuff means "try to access an rsync daemon on 
server and access path/to/stuff on module"

/path/to/stuff means "access /path/to/stuff on the local machine"

Hope that helps.

Jim Salter

Just installed cwrsync 1.2.1 and I am getting this:

C:\cwrsync>rsync -n -v -r /cygdrive/c/robj/pickmeup 
speedball3:/cygdrive/d/robj/pickmeup
Failed to exec ssh : No such file or directory
rsync error: error in IPC code (code 14) at pipe.c(81)
rsync: connection unexpectedly closed (0 bytes read so far)
rsync error: error in rsync protocol data stream (code 12) at io.c(189)

Why in heaven's name is rsync trying to exec ssh when I am asking it to 
do nothing of the kind?

I've googled and searched the list archives and have found no mention of 
this particular problem :-(  I have removed all other traces of cygwin 
from this system.  At one point I think I did experiment with OpenSSH 
for Windows on this machine but I can find no remaining trace of that 
either.

All help appreciated and sorry for the newbie idiocy :-(
Cheers
Rob


Why does my cwrsync try to load ssh?

2004-03-22 Thread Rob Jellinghaus
Just installed cwrsync 1.2.1 and I am getting this:

C:\cwrsync>rsync -n -v -r /cygdrive/c/robj/pickmeup 
speedball3:/cygdrive/d/robj/pickmeup
Failed to exec ssh : No such file or directory
rsync error: error in IPC code (code 14) at pipe.c(81)
rsync: connection unexpectedly closed (0 bytes read so far)
rsync error: error in rsync protocol data stream (code 12) at io.c(189)

Why in heaven's name is rsync trying to exec ssh when I am asking it to do 
nothing of the kind?

I've googled and searched the list archives and have found no mention of 
this particular problem :-(  I have removed all other traces of cygwin from 
this system.  At one point I think I did experiment with OpenSSH for 
Windows on this machine but I can find no remaining trace of that either.

All help appreciated and sorry for the newbie idiocy :-(
Cheers
Rob


Re: orphan dirs and files with --delete

2004-03-22 Thread Eric Whiting
I still get the same error with --force --delete.

There needs to be a chmod on the directory before the files and the directory can be deleted.

eric




Tim Conway wrote:
> 
>   --force    force deletion of directories even if not empty
> 
> (rsync(1) man page, SunOS 5.8, last change: 26 Jan 2003)
> 
> That should do it.
> 
> Tim Conway
> Unix System Administration
> Contractor - IBM Global Services
> desk:3032734776
> [EMAIL PROTECTED]
> 
> rsync (2.5.[67]) --delete fails on dirs with the w bit cleared. (example
> below)
> Rsync will sync a dir with w bit clear, but will not remove it with
> --delete.
> 
> This is not a big problem, but it will create situations where there are
> 'orphaned' files.
> 
> Has anyone else had this problem?
> 
> It looks like a change would be needed in robust_unlink (util.c). This
> function
> would have to do a chmod on dirs that are locked down before it does the
> unlink.
> (syncing as user root doesn't have this problem)
> 


Re: Long time needed for "Building file list" Any suggestions ?

2004-03-22 Thread Clint Byrum
On Mon, 2004-03-22 at 17:16, Jim Salter wrote:
> But why would you want to use rsync if you've already built your file 
> list?  Seems kinda pointless... I mean if it got touched, you definitely 
> want to copy it, so, yeah. =)
> 

And so it seems we've come full circle back to just use tar. ;-)




Re: Long time needed for "Building file list" Any suggestions ?

2004-03-22 Thread Jim Salter
But why would you want to use rsync if you've already built your file 
list?  Seems kinda pointless... I mean if it got touched, you definitely 
want to copy it, so, yeah. =)

Jim Salter

Good idea
find / -ctime -1h |rsync -a --files-from=- / destination
No perl needed.  You might want mtime instead, though.
Tim Conway
Unix System Administration
Contractor - IBM Global Services
desk:3032734776
[EMAIL PROTECTED]


Jim Salter <[EMAIL PROTECTED]> wrote on 03/22/2004 04:19 PM:

This does bring up one point though. Is there any way to optimize file
list building? It seems like that turns into a huge bottleneck in the
"lots of files" situation.


If you already know you're working with a mirror on the other end, and 
you know when your last sync was, and you're a moderately decent Perl 
hacker, you can pretty easily hack together a script that will take the 
output of something like

find / -ctime -1h





Re: Long time needed for "Building file list" Any suggestions ?

2004-03-22 Thread Tim Conway
Good idea
find / -ctime -1h |rsync -a --files-from=- / destination
No perl needed.  You might want mtime instead, though.

Tim Conway
Unix System Administration
Contractor - IBM Global Services
desk:3032734776
[EMAIL PROTECTED]




Jim Salter <[EMAIL PROTECTED]> wrote on 03/22/2004 04:19 PM:

> This does bring up one point though. Is there any way to optimize file
> list building? It seems like that turns into a huge bottleneck in the
> "lots of files" situation.

If you already know you're working with a mirror on the other end, and 
you know when your last sync was, and you're a moderately decent Perl 
hacker, you can pretty easily hack together a script that will take the 
output of something like

find / -ctime -1h





Re: orphan dirs and files with --delete

2004-03-22 Thread Tim Conway
  --force    force deletion of directories even if not empty

(rsync(1) man page, SunOS 5.8, last change: 26 Jan 2003)

That should do it.

Tim Conway
Unix System Administration
Contractor - IBM Global Services
desk:3032734776
[EMAIL PROTECTED]





rsync (2.5.[67]) --delete fails on dirs with the w bit cleared. (example 
below)
Rsync will sync a dir with w bit clear, but will not remove it with 
--delete. 

This is not a big problem, but it will create situations where there are
'orphaned' files.

Has anyone else had this problem? 

It looks like a change would be needed in robust_unlink (util.c). This 
function
would have to do a chmod on dirs that are locked down before it does the 
unlink.
(syncing as user root doesn't have this problem)






orphan dirs and files with --delete

2004-03-22 Thread Eric Whiting
rsync (2.5.[67]) --delete fails on dirs with the w bit cleared. (example below)
Rsync will sync a dir with w bit clear, but will not remove it with --delete. 

This is not a big problem, but it will create situations where there are
'orphaned' files.

Has anyone else had this problem? 

It looks like a change would be needed in robust_unlink (util.c). This function
would have to do a chmod on dirs that are locked down before it does the unlink.
(syncing as user root doesn't have this problem)

The CHECK_RO macro in syscall.c only checks for file being RO. It doesn't check
for the dir being RO.

eric

here is an example:


COMMANDS:
--
cd /tmp

# cleanup 
chmod -R a+w source dest
rm -rf source dest

# create a dir and subdir and chmod
mkdir source
cd source
touch file1 file2
mkdir dir1;touch dir1/file3 dir1/file4
chmod a-w dir1

# rsync to dest
mkdir /tmp/dest
rsync --delete -av /tmp/source/ /tmp/dest

# clean up source a little bit
chmod a+w dir1
rm -rf dir1

# attempt to clean up dest with rsync (this --delete will fail)
rsync --delete -av /tmp/source/ /tmp/dest
cd /tmp

OUTPUT (of final rsync)
---
/tmp/source> rsync --delete -av /tmp/source/ /tmp/dest
building file list ... done
delete_one: unlink dir1/file4: Permission denied
delete_one: unlink dir1/file3: Permission denied
./
wrote 102 bytes  read 20 bytes  244.00 bytes/sec
total size is 0  speedup is 0.00
rsync error: some files could not be transferred (code 23) at main.c(620)
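Until rsync does that chmod itself, a workaround sketch along the same lines (assumes rsync is on the PATH; the demo exits silently without it): restore the write bit on the destination tree before the --delete pass, so the unlink can succeed.

```shell
command -v rsync >/dev/null 2>&1 || exit 0
set -e
# cleanup from any previous run (dirs may still be read-only)
chmod -R u+w /tmp/ro_src /tmp/ro_dst 2>/dev/null || true
rm -rf /tmp/ro_src /tmp/ro_dst
mkdir -p /tmp/ro_src/dir1 /tmp/ro_dst
touch /tmp/ro_src/dir1/file3
rsync -a /tmp/ro_src/ /tmp/ro_dst/   # mirror the tree
chmod a-w /tmp/ro_dst/dir1           # simulate the read-only dir on the mirror
rm -rf /tmp/ro_src/dir1              # source side cleans up
chmod -R u+w /tmp/ro_dst             # the workaround: writable before --delete
rsync --delete -a /tmp/ro_src/ /tmp/ro_dst/
ls /tmp/ro_dst                       # dir1 and its files are gone
```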


Re: Long time needed for "Building file list" Any suggestions ?

2004-03-22 Thread Jim Salter
This does bring up one point though. Is there any way to optimize file
list building? It seems like that turns into a huge bottleneck in the
"lots of files" situation.
If you already know you're working with a mirror on the other end, and 
you know when your last sync was, and you're a moderately decent Perl 
hacker, you can pretty easily hack together a script that will take the 
output of something like

find / -ctime -1h

and use it to just do a straight copyover of all files that have been 
modified on the primary machine since the last synchronization.

For reference, on several servers I admin with anywhere from 60GB to 
200GB worth of data on them, it takes less than 5 seconds to generate a 
list of changed files using the find command as shown above, under most 
server load conditions.  (Also for reference this is with various 
versions of FreeBSD from 4.9 to 5.1.)

What that *won't* do is get rid of any files that have been deleted 
since the last time you sync'ed.  So, for instance, I sometimes use a 
little Perl hack like that to bounce the major changes frequently 
during the day, then use rsync once daily during downtime around 1AM 
to catch anything my "bounces" missed (like deleted files).

Jim Salter


Re: Long time needed for "Building file list" Any suggestions ?

2004-03-22 Thread Mark Thornton
Clint Byrum wrote:

This does bring up one point though. Is there any way to optimize file
list building? It seems like that turns into a huge bottleneck in the
"lots of files" situation.
 

Only by having a process which continuously monitors the relevant 
directory trees to maintain a list of the changed files.
In the case of Windows perhaps using the ReadDirectoryChangesW method.
http://msdn.microsoft.com/library/en-us/fileio/base/readdirectorychangesw.asp?frame=true

Otherwise scanning a large directory tree for changes is unavoidably 
slow. (At least it is only quick if you have very recently scanned the 
same tree and thus have all the directory data cached in memory.)

Mark Thornton



rsync returned 23

2004-03-22 Thread Ian Clancy
Hello Rsync users,

I'm using rsnapshot, an incremental backup tool based on rsync, to back
up data from remote servers using ssh. The data mainly consists of
Windows shares (Samba & Windows NT4 Server). Most of the shares back up
OK, but when backing up some shares the following error is recorded in
/var/log/rsnapshot:

[20/Mar/2004:01:17:45] ERROR: /usr/local/bin/rsync returned 23

Once this error has been received the backup stops, the data having been
partially backed up. I can't understand this because at other times the
backup works correctly. It does seem to happen more often on NT4 Server,
though.

I am using the following parameters with rsync:

-rtlD --delete --numeric-ids

When googling for a solution I found the following link:

http://lists.samba.org/archive/rsync/2003-August/006812.html

which states that "rsync returns 23 (RERR_PARTIAL) when a file has been
deleted after the list has been created". This does not make much sense
to me, but maybe someone else can make sense of it.

Any help/comments, please?
Thank you.

Ian

Legal Disclaimer: Any views expressed by the sender of this message are
not necessarily those of Connaught Electronics Ltd. Information in this 
e-mail may be confidential and is for the use of the intended recipient
only, no mistake in transmission is intended to waive or compromise such 
privilege. Please advise the sender if you receive this e-mail by mistake.





rsync via ssh script

2004-03-22 Thread Tim Nonnast
Hi,

does anybody know what a bash shell script looks like that automatically 
enters the ssh password?

the rsync call should be:
rsync -avz -e ssh /home/johndoe/data/repository 
[EMAIL PROTECTED]:/home/johndoe/junk

The call prompts for a password.

Is it possible to automate that via cron? If yes, how should the script 
look to enter the password automatically?

ThanX for an answer.

Cheers,
Tim


Re: Long time needed for "Building file list" Any suggestions ?

2004-03-22 Thread Clint Byrum
On Mon, 2004-03-22 at 04:42, Hergaarden, Marcel wrote:
> We're running rsync 2.5.7 on a Windows2000 server, in combination with
> cygwin/ssh. The server who receives the data is a Linux server.
> 
> The amount of data from the Windows server is about 100 Gb. Represented
> by 532.000 files of different nature. Mostly doc, ppt and xls files.
> 
> It takes about 2 hours to create only the file list. 
> 
> Is the amount of data/files too big? Should I segment the backup files,
> or is something else the cause of this long duration?
> 

2 hours for 530,000+ files sounds about right to me. Two things:

1) how much does the data change per day?
2) how fast is the network link between the two boxes?

We had a situation recently where a backup via rsync that used to take
1 hour total suddenly ballooned to 3.5 hours. This wasn't acceptable, as
it was loading the server down. We had recently upgraded to gigabit
ethernet, so we were a bit perplexed. Then we realized that the number
of files being rsynced had gone up by a factor of 5.

We switched to just doing a simple tar backup of all the files. It only
takes an hour again. rsync is great (REALLY GREAT), but remember, it's
mostly about maximizing bandwidth by only sending what's changed. If 80%
of your data changes every day and you have a gigabit link... rsync
isn't really meant for that anyway.

This does bring up one point though. Is there any way to optimize file
list building? It seems like that turns into a huge bottleneck in the
"lots of files" situation.



Can tar be piped through rsync?

2004-03-22 Thread prosolutions
I am trying to work out a backup solution whereby an entire Linux drive can
be rsynced to a mounted external firewire/usb hard drive that has a vfat
filesystem on it.

To preserve ownership and permissions it is necessary to do something like 
tar -cf dir1.tar dir1/ | rsync /win/rsync 

however this command does not work.  Does anyone know if it is even possible
to do such a thing?  Also, if it is possible, will it be possible to use
rsync again to incrementally update dir1.tar on the vfat filesystem?





--
Daniel




Re: Odd behavior with rsync/ssh/--delete

2004-03-22 Thread Wayne Davison
On Sun, Mar 21, 2004 at 09:51:18PM -0700, Peter Wargo wrote:
> However, my syncs are much bigger then they should be.  I'm getting a
> bunch more than I expect - files that haven't changed in a long time
> are being deleted and re-sync'd.

Does the destination system have any mounted filesystems inside the
single mount from the source system?  Try dropping the -x and using one
or more --exclude directives for the mount point(s) that should be
skipped.

Can you reproduce the problem with a smaller file set?  Try transferring
just one home dir that had a problem and see if that still fails.  Use
extra verbosity so you can see the file list from both sides of the transfer
(which is why I recommend trying this with a smaller file set -- it
generates a lot of output).  There will be two dumps of the file list
from the receiving side.  The first is the list it received from the
sender, and the second is the list that it created to look for
deletions.  Look for any unexpected differences between those two lists.
Rsync will delete anything that is extra in the second list, even if it
is just out of order (which it shouldn't be).

..wayne..


remote site deletes need to propagate back to master site.

2004-03-22 Thread John Sharp
Hi, I've just started with rsync and have it running in
daemon mode on my remote site.  I would like the remote
site to be able to delete directories and have these
deletes "reflected" back to the "master" site. 

Reading thru the docs, it says this is possible, but I can't
figure out how to set this up.

My remote site is running with 
./rsync --config=rsyncd.conf -vvv --port=8090 --daemon
##
max connections = 25
motd file = /tmp/rsync/log/rsyncd.log
log file = /tmp/rsync/log/rsyncd.log
pid file = /tmp/rsync/log/rsyncd.pid
lock file = /tmp/rsync/log/rsync.lock
use chroot = no

[ftp]
comment = ftp area
path = /CDISK1/TEMP/FUSION
read only = no
list = yes
hosts allow = inverness.ti.com
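(A sketch of the likely setup, not from the thread: --delete propagates deletions from the *source* side to the destination, so to reflect remote deletions back, the master would pull with the remote daemon module as the source, along the lines of `rsync -av --delete --port=8090 remotehost::ftp/ /master/copy/` with a hypothetical hostname. A local demo of the direction rule, skipped if rsync is absent:)

```shell
command -v rsync >/dev/null 2>&1 || exit 0
rm -rf /tmp/del_remote /tmp/del_master
mkdir -p /tmp/del_remote /tmp/del_master
touch /tmp/del_remote/keep.txt /tmp/del_master/stale.txt
# stale.txt exists only on the destination, so --delete removes it:
rsync -a --delete /tmp/del_remote/ /tmp/del_master/
ls /tmp/del_master   # only keep.txt remains
```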



-- 
 Texas Instruments Inc  Product Developmenttel:214 480 4253
 PO BOX 660199  MS: 8645 FORE/A-3101   fax:214 480 4401
 12500 TI Boulevard Dallas, TX 75266-0199   
  Generally, carts are easier to steer once they are moving.


Long time needed for "Building file list" Any suggestions ?

2004-03-22 Thread Hergaarden, Marcel


We're running rsync 2.5.7 on a Windows 2000 server, in combination with
cygwin/ssh. The server that receives the data is a Linux server.

The amount of data on the Windows server is about 100 GB, represented
by 532,000 files of different kinds -- mostly doc, ppt and xls files.

It takes about 2 hours just to create the file list.

Is the amount of data/files too big? Should I segment the backup files,
or is something else the cause of this long duration?

Looking forward to your answers.

Marcel Hergaarden
