On 20.11.2012 at 22:31, Bowie Bailey wrote:
> You're right. I wasn't considering possible characters existing between
> c and d. And your suggestion appears to be a good workaround.
Allow me to jump back to my original 25-million-files "problem". I came up with another strategy: I created a shell script that runs a "find -type d"-like command on the client, takes the output (all relevant directories on that client), and generates the BackupPC config file for that client on the fly. Every directory is inserted as a share into the config file, with the sub-directories directly below it listed under BackupFilesExclude (see the rough sketch appended at the end of this mail). This results in a separate rsync process being spawned for each directory. The script was supposed to be run via cron once a day during the BlackoutPeriods (I also tried to execute it via DumpPreUserCmd, but unfortunately BackupPC does not re-read the configuration file after DumpPreUserCmd has run). This way I would always have all directories covered and would not have to worry about missed directories. I was hoping to eliminate the rsync memory/slowness problem with this strategy.

However, once I ran the initial full backup, I realized that for every completed rsync run on a share, two "defunct" processes remain:

[BackupPC_dump] <defunct>
[ssh] <defunct>

And after I had about 750 defunct processes (after about 375 rsync runs on 375 different shares/directories), the whole backup aborted with:

Out of memory: Kill process 27787 (BackupPC_dump) score 647 or sacrifice child

Then I tried one of the initially discussed approaches, and now I have about 10 different shares in that client's configuration. And I observed that the defunct processes apparently appear as a matter of course while the whole backup is still going on. After 6 successful rsync runs on 6 shares I have 12 defunct processes:

bash-4.1$ ps auxww|grep defunct|grep -v grep|sed 's/\s\+/ /g'
backuppc 1672 0.0 0.0 0 0 ? Z Nov24 0:55 [ssh] <defunct>
backuppc 1673 0.0 0.0 0 0 ? Z Nov24 0:56 [BackupPC_dump] <defunct>
backuppc 1721 0.0 0.0 0 0 ? Z Nov24 0:04 [ssh] <defunct>
backuppc 1722 0.0 0.0 0 0 ? Z Nov24 0:35 [BackupPC_dump] <defunct>
backuppc 1870 2.2 0.0 0 0 ? Z Nov24 51:40 [ssh] <defunct>
backuppc 1892 4.2 0.0 0 0 ? Z Nov24 97:18 [BackupPC_dump] <defunct>
backuppc 3501 0.0 0.0 0 0 ? Z Nov24 0:01 [ssh] <defunct>
backuppc 3502 0.0 0.0 0 0 ? Z Nov24 0:02 [BackupPC_dump] <defunct>
backuppc 3505 10.6 0.0 0 0 ? Z Nov24 203:08 [ssh] <defunct>
backuppc 3510 24.5 0.0 0 0 ? Z Nov24 470:09 [BackupPC_dump] <defunct>
backuppc 6232 0.5 0.0 0 0 ? Z 01:24 4:48 [ssh] <defunct>
backuppc 6233 0.6 0.0 0 0 ? Z 01:25 6:28 [BackupPC_dump] <defunct>

My question this time: how come these processes still appear to be consuming CPU (third column) even though the rsync runs on those shares have already completed? I suppose they are also still consuming memory (see the out-of-memory abort above). Are these defunct processes really OK, and can I get rid of them so that I can try my find-script strategy without running out of memory?

Thank you!
Markus
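P.S. To make the idea above more concrete, here is a rough sketch of what such a config-generating script could look like. This is only an illustration, not my exact script: the client name "myclient", the starting directory /data, and the per-client config path /etc/BackupPC/pc/myclient.pl are placeholders that would need to be adapted to your installation.

#!/bin/bash
# Sketch only: every directory on the client becomes its own rsync share,
# and its immediate subdirectories are excluded so each one is picked up
# only by its own share. Placeholder names below; does not handle exotic
# paths (e.g. newlines in directory names).

CLIENT=myclient                          # placeholder host name
TOP=/data                                # placeholder starting directory
CONF=/etc/BackupPC/pc/$CLIENT.pl         # placeholder per-client config path

# One ssh call to collect all directories on the client
dirs=$(ssh "$CLIENT" find "$TOP" -xdev -type d | sort)

{
    echo '$Conf{RsyncShareName} = ['
    printf '%s\n' "$dirs" | sed "s/.*/    '&',/"
    echo '];'

    echo '$Conf{BackupFilesExclude} = {'
    printf '%s\n' "$dirs" | while read -r d; do
        echo "    '$d' => ["
        # immediate subdirectories of $d, written relative to the share root
        printf '%s\n' "$dirs" | awk -v p="$d" -v q="'" '
            substr($0, 1, length(p) + 1) == (p "/") &&
            index(substr($0, length(p) + 2), "/") == 0 {
                print "        " q substr($0, length(p) + 1) q ","
            }'
        echo '    ],'
    done
    echo '};'
} > "$CONF"

The directory list is fetched with a single ssh call and the exclude lists are computed locally, so the script does not open one ssh connection per directory.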