> From: Dan Phoenix <[EMAIL PROTECTED]>
> Date: Thu, 28 Sep 2000 13:31:25 -0700 (PDT)
>
>
> Yeah, that is one major problem also...
> For now I guess this is the solution,
> but if I used perl and forked a separate rsync process
> for each web server, the pushes would run concurrently and take
> less time to update all the webservers, with less broken-link time.
There was a recent thread with the subject line "rdist vs. rsync" which
discussed how the sending side has large memory requirements: pushing in
parallel this way requires *huge* amounts of memory on the one sender, while
pulling in parallel (which I've done) isn't unreasonable.
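The fork-per-host approach Dan describes can be sketched in Python (perl's
fork would work the same way); the host names and paths below are made up:

```python
import subprocess

def run_parallel(commands):
    """Start every command at once, then wait for all; returns exit codes."""
    procs = [subprocess.Popen(cmd) for cmd in commands]
    return [p.wait() for p in procs]

# Illustrative push: one rsync sender per web server (hosts/paths invented).
# Every child is a full rsync sender, so memory on this box scales with
# the number of hosts -- which is exactly the problem with pushing.
hosts = ["web1", "web2", "web3"]
push_cmds = [["rsync", "-az", "/var/www/", h + ":/var/www/"] for h in hosts]
# run_parallel(push_cmds)
```

Pulling flips this around: each web server runs one rsync client against the
master, so the per-host memory cost lands on the many receivers instead of
the single sender.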
In short, you may need to buy more memory for the pushing server to do it
this way effectively. I'd watch the memory footprint of a single push and
then multiply by the number of simultaneous pushes.
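To do that multiplication you need the single-push number first; on Linux
you can read it from /proc while one rsync is running. This helper is a
Linux-specific sketch, not anything from the thread:

```python
import os

def vmrss_kb(pid):
    """Resident set size in kB for a process, read from /proc (Linux only)."""
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                # Line looks like "VmRSS:  1234 kB"
                return int(line.split()[1])
    return None

# Sanity check on ourselves; a real measurement would target the rsync pid,
# then estimate: total_kb = vmrss_kb(rsync_pid) * number_of_parallel_pushes
my_rss = vmrss_kb(os.getpid())
```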
Chris
--
Chris Garrigues http://www.DeepEddy.Com/~cwg/
virCIO http://www.virCIO.Com
4314 Avenue C
Austin, TX 78751-3709 +1 512 374 0500
My email address is an experiment in SPAM elimination. For an
explanation of what we're doing, see http://www.DeepEddy.Com/tms.html
Nobody ever got fired for buying Microsoft,
but they could get fired for relying on Microsoft.