I'm trying to rsync a 210 GB filesystem with approximately 1,500,000 files.
Rsync always dies after about 29 GB without any error messages.
I'm using rsync version 2.5.5, protocol version 26.
Does anyone have an idea?
Thanks, Clemens
On Thursday 2002-05-16 15:55, Dave Dykstra wrote:
| I'm afraid we've got too much history behind the current way to change
| that now. Undoubtedly there's lots of scripts around that expect the
| current behavior. The --log-format option is intended for situations
| like yours. Try
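The archived message is cut off right after "Try", so Dave's exact suggestion
is not preserved. A plausible invocation in that spirit, assuming the goal is
one log line per transferred file (the escapes are standard rsync 2.5.x
log-format codes; paths and host are hypothetical), would be:

    rsync -av --log-format="%o %f %l" /src/dir/ remotehost:/dest/dir/

Here %o is the operation (send or recv), %f the filename, and %l the file
length.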
Combined reply:
Mark - Point taken. But even if it worked correctly everywhere, to me there
seems to be something aesthetically wrong about just letting sockets close
themselves. Kind of like a malloc() without a free().
Wayne - Wouldn't the atexit solution require that we keep a list of fds to
He's on two different cells so I don't think he would be able to
do what you're stating. I think he would have to have a db server
in his environment that's part of the source cell. But I'm not sure
about that.
sri
On Fri, May 17, 2002 at 11:34:45AM -0600, [EMAIL PROTECTED] wrote:
| Doesn't AFS
Hi,
Thanks for the info.
Let me clarify what I want to achieve with this.
Right now first1 (/afs/tr/software) is running as the master, and after a
period of time second1 (/afs/ddc/software) will take over; after that,
first1 will no longer exist. I am in the middle of this transition work.
Currently first1 is owned by
Allen, John L. wrote:
| In my humble opinion, this problem with rsync growing a huge memory
| footprint when large numbers of files are involved should be #1 on
| the list of things to fix. It seems that every fifth post is a
| complaint about this problem! Sorry if this sounds like ungrateful
| In my humble opinion, this problem with rsync growing a huge memory
| footprint when large numbers of files are involved should be #1 on
| the list of things to fix.
I think many would agree. If it were trivial, it'd probably be
done by now.
Fix #1 (what most people do):
Split the
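The excerpt ends mid-sentence, but the fix it begins to describe is the
usual workaround: split one huge transfer into several smaller rsync runs,
so each run builds a much smaller in-memory file list. A minimal sketch,
assuming a tree of top-level subdirectories under /data (all paths and the
host are hypothetical):

    # One rsync run per top-level directory keeps each file list small.
    for d in /data/*/; do
        rsync -a "$d" backuphost:/backup/data/"$(basename "$d")"/
    done

Peak memory then scales with the largest subtree rather than with the whole
filesystem.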
On Fri, 17 May 2002, Allen, John L. wrote:
| In my humble opinion, this problem with rsync growing a huge memory
| footprint when large numbers of files are involved should be #1 on
| the list of things to fix.
I have certainly been interested in working on this issue. I think it
might be time to
OK, but I'm not exactly sure what I'm looking for...
I don't think the link error is caused by my data (I have no symlinks).
For whatever reason, it appears that a blank is leading the file list,
and the 'stat' on NULL is what is causing the link_stat error.
write(1, "building fi
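That line looks like strace output captured while rsync printed "building
file list". A hedged way to reproduce such a trace (standard strace options;
paths are hypothetical):

    # Trace rsync's system calls, following forked children, keeping
    # up to 128 bytes of each string argument, into a log file.
    strace -f -s 128 -o /tmp/rsync.trace rsync -av /src/ /dst/

Searching /tmp/rsync.trace for the failing stat()/lstat() call should then
show exactly which (possibly empty) filename triggers the link_stat error.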
Wayne: If anybody can make that work, I'd bet you could. The basic rsync
algorithm is in place, so as you say, it would mostly be a matter of list
generation. You'd have to hold on to any files with more than 1 link, in a
separate list, to find all the linkage relationships, which could grow a
bit,
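To illustrate the bookkeeping being discussed: the "linkage relationships"
are files that share an inode. A hedged sketch using GNU find (the path is
hypothetical) shows the kind of list such a generator would have to build:

    # List files with more than one hard link, sorted by inode number,
    # so paths that are links to the same file end up adjacent.
    find /src -type f -links +1 -printf '%i %n %p\n' | sort -n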