Re: rsync to a destination > 8TB problem...

2004-08-10 Thread Matt Miller
Actually... a cp -r has the same issues... anyone seen this?

BTW, I'm on SuSE 9.1 (personal) with the 2.6 kernel, lvm2, reiserfs, 2G RAM, 2G swap

matt

On Aug 10, 2004, at 4:41 PM, Matt Miller wrote:

> I am trying to use a large (10TB) reiserfs filesystem as an rsync target.  The filesystem is on top of lvm2 (pretty sure this doesn't matter, but just in case).  I get the following error when trying to sync a modest set of files to that 10TB target (just syncing /etc for now):
>
> rsync: writefd_unbuffered failed to write 4 bytes: phase "unknown": Broken pipe
> rsync error: error in rsync protocol data stream (code 12) at io.c(836)
>
> When I encountered the error, I suspected a problem with the filesystem size, so I started testing with 2TB, 4TB, and 8TB filesystems.  (I started with 2TB, then grew the LV and resized the reiserfs each time.)  Just under 8TB (7.45TB) everything worked fine.  After going just over 8TB (8.45TB) I got the error again.
>
> Just for kicks, I tried syncing a single new file, and that worked.  I then tried syncing a single new directory, and that worked as well.  Below you can see these steps, as well as the directory the original /etc sync failed on.  I have listed the contents of that directory and the following directory in case they are pertinent here.
>
> Is anyone else using > 8TB targets for rsync with success?  I have kept this to a local rsync to eliminate variables with ssh/rsyncd/network.
>
> thanks in advance,
>
> matt
>
> Matt Miller
> IT Infrastructure
> Duke - Fuqua School of Business
Matt Miller
IT Infrastructure
Duke - Fuqua School of Business

Re: rsync erroring out when syncing a large tree

2004-08-10 Thread Wayne Davison
On Tue, Aug 10, 2004 at 11:28:04AM +0100, Mark Watts wrote:
> rsync: connection unexpectedly closed (289336107 bytes read so far)
> rsync error: error in rsync protocol data stream (code 12) at io.c(189)

This error just means that the other side went away for some reason.
What you need to figure out is why.  The version of rsync in CVS (and in
the "nightly" tar files) tries to grab a dying error from the other
side, if possible, but if it crashed there is no such error to return.

So, I'd recommend trying the CVS version of rsync to see if it reports
the actual error behind the failure.  Alternately (or additionally),
follow the directions on the issues page to get a core dump and/or a
system-call trace from the remote side for extra information on what
has gone wrong:

http://rsync.samba.org/issues.html
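
For example, when the remote side is reached over ssh/rsh, one way to get
a system-call trace from it is to wrap the remote rsync via --rsync-path
(a rough sketch; the host, paths, and trace file are assumptions, and
strace must be installed on the remote machine):

  rsync -av --rsync-path='strace -f -o /tmp/rsync-remote.trace rsync' \
      src/ user@remotehost:/dest/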

..wayne..


Re: out of memory in receive_file_entry rsync-2.6.2

2004-08-10 Thread Eberhard Moenkeberg
Hi,

On Tue, 10 Aug 2004, James Bagley Jr wrote:

> I've had some problems using rsync to transfer directories with more than
> 3 million files.  Here's the error message from rsync:
> 
> 
> ERROR: out of memory in receive_file_entry
> rsync error: error allocating core memory buffers (code 22) at util.c(116)
> rsync: connection unexpectedly closed (63453387 bytes read so far)
> rsync error: error in rsync protocol data stream (code 12) at io.c(342)
> 
> 
> I'm doing a pull on a Linux system from the HP-UX system that actually
> houses the data.  Both are using rsync-2.6.2.  The one solution I've come
> up with isn't pretty, but seems to work.

How is your RAM/swap situation on the Linux side?
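
A quick way to check on the Linux side (standard procps tools; exact
flags may vary by distribution):

  free -m                          # total/used RAM and swap, in MB
  ps -C rsync -o pid,rss,vsz,cmd   # memory footprint of the running rsync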

I am rsyncing the 4 million files of ftp.gwdg.de to a backup server each 
night, one-shot, no problems. It just needs 7 or more hours...

rsync[28526] (receiver) heap statistics:
  arena: 233472   (bytes from sbrk)
  ordblks:9   (chunks not in use)
  smblks: 3
  hblks:   1282   (chunks from mmap)
  hblkhd: 357650432   (bytes from mmap)
  allmem: 357883904   (bytes from sbrk + mmap)
  usmblks:0
  fsmblks:   96
  uordblks:   71072   (bytes used)
  fordblks:  162400   (bytes free)
  keepcost:  135048   (bytes in releasable chunk)

Number of files: 4099094
Number of files transferred: 27160
Total file size: 1568423288614 bytes
Total transferred file size: 16916178416 bytes
Literal data: 10758972422 bytes
Matched data: 6158385953 bytes
File list size: 119674315
Total bytes written: 11990441
Total bytes read: 10885842335

wrote 11990441 bytes  read 10885842335 bytes  410271.35 bytes/sec
total size is 1568423288614  speedup is 143.92

rsync[26637] (generator) heap statistics:
  arena: 233472   (bytes from sbrk)
  ordblks:8   (chunks not in use)
  smblks: 3
  hblks:   1285   (chunks from mmap)
  hblkhd: 357924864   (bytes from mmap)
  allmem: 358158336   (bytes from sbrk + mmap)
  usmblks:0
  fsmblks:   96
  uordblks:   90760   (bytes used)
  fordblks:  142712   (bytes free)
  keepcost:  135048   (bytes in releasable chunk)
 end    RC=0  040810.0100 040810.0822


Cheers -e
-- 
Eberhard Moenkeberg ([EMAIL PROTECTED], [EMAIL PROTECTED])


rsync to a destination > 8TB problem...

2004-08-10 Thread Matt Miller
I am trying to use a large (10TB) reiserfs filesystem as an rsync target.  The filesystem is on top of lvm2 (pretty sure this doesn't matter, but just in case).  I get the following error when trying to sync a modest set of files to that 10TB target (just syncing /etc for now):

rsync: writefd_unbuffered failed to write 4 bytes: phase "unknown": Broken pipe
rsync error: error in rsync protocol data stream (code 12) at io.c(836)

When I encountered the error, I suspected a problem with the filesystem size, so I started testing with 2TB, 4TB, and 8TB filesystems.  (I started with 2TB, then grew the LV and resized the reiserfs each time.)  Just under 8TB (7.45TB) everything worked fine.  After going just over 8TB (8.45TB) I got the error again.
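
The grow-and-retest loop looked roughly like this (a sketch; the VG/LV
names and mount point are placeholders, not the ones I actually used):

  lvextend -L +1T /dev/vg0/backuplv    # grow the logical volume
  resize_reiserfs /dev/vg0/backuplv    # grow reiserfs to fill the LV
  rsync -av /etc /mnt/backup           # local-only test sync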

Just for kicks, I tried syncing a single new file, and that worked.  I then tried syncing a single new directory, and that worked as well.  Below you can see these steps, as well as the directory the original /etc sync failed on.  I have listed the contents of that directory and the following directory in case they are pertinent here.

Is anyone else using > 8TB targets for rsync with success?  I have kept this to a local rsync to eliminate variables with ssh/rsyncd/network.

thanks in advance,

matt



Matt Miller
IT Infrastructure
Duke - Fuqua School of Business

Re: out of memory in receive_file_entry rsync-2.6.2

2004-08-10 Thread James Bagley Jr
On Tue, 10 Aug 2004, Widyono wrote:

> I've rsync'ed 7413719 files successfully (2.6.2 on RH7.2 server, Fermi
> SL3.0.1 client), 680GB.  When I do the top level directory all at
> once, there are several points where it locks up the server for
> minutes at a time (directories with large #'s of files, it seems, and
> I suppose it's an ext3 issue).

Odd.  I must be seeing an rsync-on-HP-UX issue then.  I'm only transferring
110G and 3276154 files.  But they are stored on an HP-UX 11.11i server
with a VxFS file system.

> Better ideas?  No.  However, my suggestion would be to run a nightly
> script on the *server* side (if you have access) which counts files,
> and puts tallies in selected higher-level directories.  So,
> e.g. /.filecount would have # of files in /tmp, /usr, /var, etc.
> /usr/local/src/.filecount would have # of files in all its subdirs.
> This prevents you from ssh'ing in and find'ing so many times.
> Depending on how dynamic your disk utilization is, you could just make
> this a weekly or monthly analysis.

That could work out well.  Currently the find processes take only 5-10
minutes total, which is a very small percentage of the window of time I
have to run my rsyncs.
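
A minimal sketch of that tally script (the directory list, output file
name, and scheduling are assumptions):

  #!/bin/sh
  # Nightly: write a .filecount tally into selected top-level directories
  # so the client can read the counts instead of find'ing over ssh.
  for dir in /tmp /usr /var /usr/local/src; do
      [ -d "$dir" ] || continue
      find "$dir" | wc -l > "$dir/.filecount"
  done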

--
James Bagley|   CDI Innovantage
[EMAIL PROTECTED]   | Technical Computing UNIX Admin Support
   DON'T PANIC  |   Agilent Technologies IT
Phone: (541) 738-3340   |  Corvallis, Oregon
--


Re: out of memory in receive_file_entry rsync-2.6.2

2004-08-10 Thread Widyono
On Tue, Aug 10, 2004 at 11:36:26AM -0700, James Bagley Jr wrote:
> Hello,
> 
> I've had some problems using rsync to transfer directories with more than
> 3 million files.  Here's the error message from rsync:
>
> ERROR: out of memory in receive_file_entry
> rsync error: error allocating core memory buffers (code 22) at util.c(116)


I've rsync'ed 7413719 files successfully (2.6.2 on RH7.2 server, Fermi
SL3.0.1 client), 680GB.  When I do the top level directory all at
once, there are several points where it locks up the server for
minutes at a time (directories with large #'s of files, it seems, and
I suppose it's an ext3 issue).

The server side hit 1GB of memory near its peak, and the client side
hit 540MB.  Ick.  At least when I upgraded to 2.6.2 it was possible to
do this at all (compared with the version provided by Red Hat's RPMs).

For future sanity, I'm subdividing the top-level directory into
several discrete rsyncs on subdirectories.  I like your idea in
general (though I agree it's ugly) for dynamically addressing this
issue, but for now I can afford the luxury of manually subdividing the
tree.

Better ideas?  No.  However, my suggestion would be to run a nightly
script on the *server* side (if you have access) which counts files,
and puts tallies in selected higher-level directories.  So,
e.g. /.filecount would have # of files in /tmp, /usr, /var, etc.
/usr/local/src/.filecount would have # of files in all its subdirs.
This prevents you from ssh'ing in and find'ing so many times.
Depending on how dynamic your disk utilization is, you could just make
this a weekly or monthly analysis.

Regards,
Dan W.


out of memory in receive_file_entry rsync-2.6.2

2004-08-10 Thread James Bagley Jr
Hello,

I've had some problems using rsync to transfer directories with more than
3 million files.  Here's the error message from rsync:


ERROR: out of memory in receive_file_entry
rsync error: error allocating core memory buffers (code 22) at util.c(116)
rsync: connection unexpectedly closed (63453387 bytes read so far)
rsync error: error in rsync protocol data stream (code 12) at io.c(342)


I'm doing a pull on a Linux system from the HP-UX system that actually
houses the data.  Both are using rsync-2.6.2.  The one solution I've come
up with isn't pretty, but seems to work.

Basically, I wrote a shell function that runs an rsync process for each
subdirectory if necessary.  I'm using "find | wc -l" to count the number
of files in the source path and then calling the function again for each
subdirectory if the number is more than 2 million.  Perhaps recursion is a
bad idea?  It's the only way I could think of to catch the case where all
the files exist in a single directory several levels below the top.
Anyway, I'll take the function out of my script and paste it here, but it
may not work right taken out of context.  YMMV.


function DC {
# "Divide and Conquer"
#  evil hack to get around rsync's 3mil file limit

# Accepts two arguments as the srcpath (host:dir) and dstpath.  Will not
# work if srcpath is local.  Assumes $rsync_opts is set by the caller.

    srchost=`echo "$1" | awk -F: '{ print $1 }'`
    srcdir=`echo "$1" | awk -F: '{ print $2 }'`

    # Count everything under the source path on the remote side.
    num_files=`ssh "$srchost" "find $srcdir | wc -l"`

    if [ "$num_files" -gt 2000000 ]
    then
        echo "WARNING!  file count greater than 2mil, recursing into subdirs."

        for file in `ssh "$srchost" "ls $srcdir"`
        do
            DC "$1/$file" "$2/$file"
        done
    else
        rsync $rsync_opts "$1" "$2"
    fi
}
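
For illustration, it gets invoked like the rsync call it wraps, e.g.
(host and paths hypothetical; $rsync_opts must be set first):

  rsync_opts="-a"
  DC remotehost:/data/hugetree /backup/hugetree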


Comments?  Better ideas?

--
James Bagley|   CDI Innovantage
[EMAIL PROTECTED]   | Technical Computing UNIX Admin Support
   DON'T PANIC  |   Agilent Technologies IT
--


Re: bash: /usr/local/bin/rsync: Argument list too long

2004-08-10 Thread Wayne Davison
On Tue, Aug 10, 2004 at 04:17:59PM +0300, victor wrote:
> I will try the first sugestion [to run multiple rsync commands].

Don't do that.  Read Tim Conway's reply instead.  You probably just want
to drop the '*' from your command (and leave the trailing '/').
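
In other words, something like this (a sketch based on the command from
the original post; the user/host is redacted in the archive, shown here
as user@host):

  /usr/local/bin/rsync -rsh=/usr/bin/rsh -r --delete --perms --owner \
      --group /mail/spool/imap/user/wex/ user@host:/mail/spool/imap/user/wex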

..wayne..


Re: bash: /usr/local/bin/rsync: Argument list too long

2004-08-10 Thread Tim Conway
You want everything in wex into wex on the remote.  Gotcha.

Let's take a simple case: wex contains a b c d e.

"/usr/local/bin/rsync -rsh=/usr/bin/rsh -r --delete --perms --owner
--group /mail/spool/imap/user/wex/*
[EMAIL PROTECTED]:/mail/spool/imap/user/wex" on the command line becomes
"/usr/local/bin/rsync -rsh=/usr/bin/rsh -r --delete --perms --owner
--group /mail/spool/imap/user/wex/a /mail/spool/imap/user/wex/b
/mail/spool/imap/user/wex/c /mail/spool/imap/user/wex/d
/mail/spool/imap/user/wex/e
[EMAIL PROTECTED]:/mail/spool/imap/user/wex".  The parameter ending in
"*" is replaced by as many entries as there are in the directory, which
can be quite a lot, and that is your problem.  rsync is perfectly happy
to handle freaking enormous numbers of files, but it has to find out
about them itself.

Unless there's some valid reason why you want to avoid applying
perms/owner/group to the wex directory itself, "/usr/local/bin/rsync
-rsh=/usr/bin/rsh -r --delete --perms --owner --group
/mail/spool/imap/user/wex/.
[EMAIL PROTECTED]:/mail/spool/imap/user/wex/." will work nicely, or
perhaps "/usr/local/bin/rsync -rsh=/usr/bin/rsh -r --delete --perms
--owner --group /mail/spool/imap/user/wex
[EMAIL PROTECTED]:/mail/spool/imap/user".  These let rsync build the
file list itself, avoiding the problems (and plain old inefficiencies)
of argument passing.

Speaking of inefficiencies: unless you want to avoid maintaining
symlinks, devices (not likely to be there anyway), and times, you can
improve readability by changing your command line to
"/usr/local/bin/rsync -rsh=/usr/bin/rsh -a --delete
/mail/spool/imap/user/wex/.
[EMAIL PROTECTED]:/mail/spool/imap/user/wex/.".  Letting it keep times
synced lets it use them to optimize future syncs by not checksumming
files that match in name/timestamp/size.  "-a" is also a lot faster to
type than "--owner --group --perms --times --links --recursive --devices".

Tim Conway
Unix System Administration
Contractor - IBM Global Services
desk:3032734776
[EMAIL PROTECTED]



> I get this error when I try to copy a directory with a lot of files:
> "bash: /usr/local/bin/rsync: Argument list too long"
>
> The exact command is: "/usr/local/bin/rsync -rsh=/usr/bin/rsh -r
> --delete --perms --owner --group /mail/spool/imap/user/wex/*
> [EMAIL PROTECTED]:/mail/spool/imap/user/wex".
>
> BUT, if I try this command it works: "/usr/local/bin/rsync
> -rsh=/usr/bin/rsh -r --delete --perms --owner --group
> /mail/spool/imap/user/* [EMAIL PROTECTED]:/mail/spool/imap/user/wex".




Re: bash: /usr/local/bin/rsync: Argument list too long

2004-08-10 Thread Jan-Benedict Glaw
On Tue, 2004-08-10 16:17:59 +0300, victor <[EMAIL PROTECTED]>
wrote in message <[EMAIL PROTECTED]>:
> However, can you explain to me why this command works?
> /usr/local/bin/rsync -r --delete --perms --owner --group
> /mail/spool/imap/user [EMAIL PROTECTED]:/mail/spool/imap
> In /mail/spool/imap/user I have a lot of subdirectories with >1000 files.
>
> What I mean is that rsync can make such a transfer (no matter the
> kernel), but for some reason it does not.

It's purely a matter of this question's answer:

"Is a veeery long list of file names supplied at the command
line, or is rsync on its own figuring out all the names
(possibly recursing through subdirectories with gazillions of
files)?"

If you supply "*" as a file name, rsync will _never ever_ see this
little star.  Instead, the shell interpreter (where you've typed your
command) will substitute it with the looong list of names.  If this list
exceeds a certain size (IIRC 128KB on PeeCees), the shell interpreter
will fail to start the rsync program (since the kernel cannot copy all
the data).

If you supply "." as a _directory_ name containing thousands of files,
it'll work just fine since "." isn't expanded by the shell interpreter,
but recursively read by rsync itself.
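
You can check the limit on a given box with getconf (the value shown is
typical for Linux with 4KB pages; other systems will differ):

  $ getconf ARG_MAX
  131072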

MfG, JBG

-- 
Jan-Benedict Glaw   [EMAIL PROTECTED]. +49-172-7608481 _ O _
"Eine Freie Meinung in  einem Freien Kopf| Gegen Zensur | Gegen Krieg  _ _ O
 fuer einen Freien Staat voll Freier Bürger" | im Internet! |   im Irak!   O O O
ret = do_actions((curr | FREE_SPEECH) & ~(NEW_COPYRIGHT_LAW | DRM | TCPA));



bash: /usr/local/bin/rsync: Argument list too long

2004-08-10 Thread victor
Thank you.
I will try the first suggestion.
However, can you explain to me why this command works?
/usr/local/bin/rsync -r --delete --perms --owner --group
/mail/spool/imap/user [EMAIL PROTECTED]:/mail/spool/imap
In /mail/spool/imap/user I have a lot of subdirectories with >1000 files.

What I mean is that rsync can make such a transfer (no matter the
kernel), but for some reason it does not.

Jan-Benedict Glaw wrote:
> On Tue, 2004-08-10 14:41:54 +0300, victor <[EMAIL PROTECTED]>
> wrote in message <[EMAIL PROTECTED]>:
> > I get this error when I try to copy a directory with a lot of files:
> > "bash: /usr/local/bin/rsync: Argument list too long"
> >
> > The exact command is: "/usr/local/bin/rsync -rsh=/usr/bin/rsh -r
> > --delete --perms --owner --group /mail/spool/imap/user/wex/*
> > [EMAIL PROTECTED]:/mail/spool/imap/user/wex".
>
> The "*" you supply is expanded by the shell to a lot of filenames.  Your
> shell uses some internal buffer (which may grow to several megabytes),
> but upon exec*(), the kernel cannot copy the whole list to the (child's)
> argv[] buffer, thus refusing to exec() at all.
>
> So this isn't actually a limitation of your shell, but of your operating
> system.  IIRC Linux will grant you some 128 KB on systems using 4KB pages
> (that is, the famous PeeCee).
>
> You've got several ways to work around that:
>
> - Split your single rsync call into several:
>
>     rsync a*
>     rsync b*
>     rsync c*
>     ...
>
> - Hack your operating system's kernel to allow a larger buffer
>   for argv[].  (For Linux, you'll need to edit
>   ./include/linux/binfmts.h; change MAX_ARG_PAGES to whatever
>   you like better.)
>
> - Try to use xargs, but that may be tricky...
>
> MfG, JBG



Re: bash: /usr/local/bin/rsync: Argument list too long

2004-08-10 Thread Jan-Benedict Glaw
On Tue, 2004-08-10 14:41:54 +0300, victor <[EMAIL PROTECTED]>
wrote in message <[EMAIL PROTECTED]>:
> I get this error when I try to copy a directory with a lot of files: 
> "bash: /usr/local/bin/rsync: Argument list too long"
> 
> The exact command is: "/usr/local/bin/rsync -rsh=/usr/bin/rsh -r 
> --delete --perms --owner --group /mail/spool/imap/user/wex/* 
> [EMAIL PROTECTED]:/mail/spool/imap/user/wex".

The "*" you supply is expanded by the shell to a lot of filenames. Your
shell uses some internal buffer (which may grow to several megabytes),
but upon exec*(), the kernel cannot copy all the list to the (child's)
argv[] buffer, thus rejecting to exec() at all.

So this isn't actually a limitation of your shell, but of your operating
system. IIRC Linux will grant you some 128 KB on systems using 4KB pages
(that is, the famous PeeCee).

You've got several ways to work around that:

- Split your single rsync call into several:

rsync a*
rsync b*
rsync c*
...

- Hack your operating system's kernel to allow a larger buffer
  for argv[]. (For Linux, you'll need to edit
  ./include/linux/binfmts.h; change MAX_ARG_PAGES to whatever
  you like better)

- Try to use xargs, but that may be tricky...
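
  For completeness, a hedged sketch of the xargs route (user/host and
  paths are placeholders; --delete is deliberately dropped, since each
  batched rsync sees only part of the tree, and file names containing
  whitespace would break the ls | xargs pipeline):

      cd /mail/spool/imap/user/wex &&
      ls | xargs -n 500 sh -c 'exec rsync -rsh=/usr/bin/rsh -r --perms --owner --group "$@" user@host:/mail/spool/imap/user/wex/' rsync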

MfG, JBG

-- 
Jan-Benedict Glaw   [EMAIL PROTECTED]. +49-172-7608481 _ O _
"Eine Freie Meinung in  einem Freien Kopf| Gegen Zensur | Gegen Krieg  _ _ O
 fuer einen Freien Staat voll Freier Bürger" | im Internet! |   im Irak!   O O O
ret = do_actions((curr | FREE_SPEECH) & ~(NEW_COPYRIGHT_LAW | DRM | TCPA));



Re: bash: /usr/local/bin/rsync: Argument list too long

2004-08-10 Thread Mark Watts


> I get this error when I try to copy a directory with a lot of files:
> "bash: /usr/local/bin/rsync: Argument list too long"
>
> The exact command is: "/usr/local/bin/rsync -rsh=/usr/bin/rsh -r
> --delete --perms --owner --group /mail/spool/imap/user/wex/*
> [EMAIL PROTECTED]:/mail/spool/imap/user/wex".
>
> BUT, if I try this command it works: "/usr/local/bin/rsync
> -rsh=/usr/bin/rsh -r --delete --perms --owner --group
> /mail/spool/imap/user/* [EMAIL PROTECTED]:/mail/spool/imap/user/wex".
>
> So I see that rsync does not copy the files if the root of the
> source directory has too many files.  But if there are too many files in
> any other directory it works.
>
> I need the first command to work.
> How can I do this?
>
> Thank you.

Your shell is expanding the * to create an argument list that is too long.
It's an issue with your shell, not rsync.

-- 
Mark Watts
Senior Systems Engineer
QinetiQ Trusted Information Management
Trusted Solutions and Services group
GPG Public Key ID: 455420ED



Re: bash: /usr/local/bin/rsync: Argument list too long

2004-08-10 Thread YOSHIFUJI Hideaki / 吉藤英明
In article <[EMAIL PROTECTED]> (at Tue, 10 Aug 2004 14:41:54 +0300), victor <[EMAIL PROTECTED]> says:

> I get this error when I try to copy a directory with a lot of files: 
> "bash: /usr/local/bin/rsync: Argument list too long"
> 
> The exact command is: "/usr/local/bin/rsync -rsh=/usr/bin/rsh -r
> --delete --perms --owner --group /mail/spool/imap/user/wex/*
> [EMAIL PROTECTED]:/mail/spool/imap/user/wex".
:
> So I see that rsync does not copy the files if the root of the
> source directory has too many files.  But if there are too many files in
> any other directory it works.
> 
> I need the first command to work.
> How can I do this?

This has nothing to do with rsync; it's your shell.
Anyway, RTFM, or you may want to try xargs(1).

--yoshfuji


bash: /usr/local/bin/rsync: Argument list too long

2004-08-10 Thread victor
I get this error when I try to copy a directory with a lot of files: 
"bash: /usr/local/bin/rsync: Argument list too long"

The exact command is: "/usr/local/bin/rsync -rsh=/usr/bin/rsh -r 
--delete --perms --owner --group /mail/spool/imap/user/wex/* 
[EMAIL PROTECTED]:/mail/spool/imap/user/wex".

BUT, if I try this command it works: "/usr/local/bin/rsync
-rsh=/usr/bin/rsh -r --delete --perms --owner --group
/mail/spool/imap/user/* [EMAIL PROTECTED]:/mail/spool/imap/user/wex".

So I see that rsync does not copy the files if the root of the
source directory has too many files.  But if there are too many files in
any other directory it works.

I need the first command to work.
How can I do this?
Thank you.


rsync erroring out when syncing a large tree

2004-08-10 Thread Mark Watts


I'm trying to sync a mandrakelinux tree (~120GB) but it bombs out after a 
while with this error:

rsync: connection unexpectedly closed (289336107 bytes read so far)
rsync error: error in rsync protocol data stream (code 12) at io.c(189)
rsync: writefd_unbuffered failed to write 4092 bytes: phase "unknown": Broken pipe
rsync error: error in rsync protocol data stream (code 12) at io.c(666)

The exact command used is:

rsync -av --stats --progress --partial --delete-after --bwlimit=2000 
rsync://mirrors.usc.edu/mandrakelinux/ /export/ftp/mandrakelinux

$RSYNC_PROXY is set to a squid proxy, which is very reliable for other
forms of transfer (ftp/http).
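
For reference, it's wired up like this (the proxy host/port are
placeholders; rsync tunnels through the proxy with HTTP CONNECT, so
squid has to permit CONNECT to the rsync port, 873):

  export RSYNC_PROXY=proxy.example.com:3128
  rsync -av --stats --progress --partial --delete-after --bwlimit=2000 \
      rsync://mirrors.usc.edu/mandrakelinux/ /export/ftp/mandrakelinux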

It's syncing to a 1.6TB RAID array, which still has over 1TB free, so it's
not a space issue.

Anyone know a) what causes this error and b) how I can get round it?

Cheers,

Mark.

-- 
Mark Watts
Senior Systems Engineer
QinetiQ Trusted Information Management
Trusted Solutions and Services group
GPG Public Key ID: 455420ED
