If it processes the whole directory and only shows the output at the end, it could seem slow.

I wrote a Perl wrapper that backs up a whole directory and then prints the output. It seems slower than running rsync manually because there is no output until the end.
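For what it's worth, a minimal sketch of how such a wrapper could stream rsync's output line by line instead of buffering it all until the end. This assumes the wrapper just shells out to rsync; the paths and flags below are placeholders, not the actual wrapper:

    #!/usr/bin/perl
    use strict;
    use warnings;

    $| = 1;  # autoflush so lines appear as soon as they are printed

    # Hypothetical paths and flags; substitute whatever the wrapper uses.
    my @cmd = ('rsync', '-av', '--info=progress2', '/home/user/', '/backup/user/');

    # Pipe rsync's stdout into this script so each line can be printed
    # as it arrives, instead of collecting everything and printing at the end.
    open(my $rsync, '-|', @cmd) or die "cannot start rsync: $!";
    print while <$rsync>;
    close($rsync);
    warn "rsync exited with status ", $? >> 8, "\n" if $?;

The list form of open avoids the shell entirely, and --info=progress2 (rsync 3.1+) gives ongoing progress so a long run doesn't look hung.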

On January 6, 2019 8:10:13 PM Peter Sjöberg <[email protected]> wrote:

On 2019-01-06 5:49 p.m., J C Nash wrote:
The title says it. I have rsync set up to run from the Double Commander two-pane file manager. If I select parallel directories of modest size, rsync runs fine. (There are
very few or no updates; I'm just trying to ensure everything is up to date.)

However, if I choose a high-level directory on each side, it goes for a while and then seems to
just hang.

I'm wondering if I've overflowed some sort of index or buffer. Suggestions?
Copying huge directories has never been a problem for me as long as space
is available. I routinely copy between two NAS servers (old/current and
2BE), where it can be several million files and >2TB of data to copy.
This works fine both for the initial copy and for subsequent
refreshes/updates; no problem there. Mind you, this is done under CentOS 6/7 on
servers with 16/32GB of memory, so if you try something similar on a
512MB system you might have an issue.

One thing that causes hangs for me (and has been an itch for a long time) is
when I copy something and the destination runs out of space. Then rsync
just hangs, and the easiest fix is to kill it on both sides, free up space on the
destination, and start over.
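One rough way to sidestep that hang is a preflight check: compare the source size against the free space on the destination before starting. A sketch, assuming du and df are available and using placeholder paths; note the du figure is only an upper bound, since a refresh transfers far less than a fresh copy:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Placeholder paths; adjust for the real source and destination.
    my $src = '/data/source';
    my $dst = '/mnt/backup';

    # Approximate size of the source tree in KB ("du -sk" prints "SIZE\tPATH").
    my ($need) = split /\s+/, `du -sk $src`;

    # Free KB on the destination filesystem; with "df -Pk" the 4th column
    # of the last line is "Available".
    my $free = (split /\s+/, (`df -Pk $dst`)[-1])[3];

    die "not enough space: need ${need}K, have ${free}K free\n"
        if $need > $free;

    system('rsync', '-av', "$src/", "$dst/") == 0
        or die "rsync failed (status ", $? >> 8, ")\n";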


/ps



