Hi,

backu...@kosowsky.org wrote on 2013-03-11 09:39:52 -0400 [Re: [BackupPC-users] BackupPC_dump memory usage]:
> Les Mikesell wrote at about 08:08:15 -0500 on Monday, March 11, 2013:
> > On Mon, Mar 11, 2013 at 4:59 AM, ashka <shellgrat...@gmail.com> wrote:
> > >
> > > I do have millions of files, I can't really get them out of the backup though.
> > > Those files are really important, and I excluded them temporarily just to
> > > see how it was working out.
> >
> > It would be sort-of interesting [...]
... to get some answers to the questions we are asking. It's not out of curiosity. I don't really care what your server looks like. I just don't like wasting my time with guesses when you can't be bothered to give more than a vague description of what you are seeing.

> > to see how much difference it would
> > make to run a 32-bit perl on the server.

With a 32-bit executable, you've got a maximum of 3 GB of total available user address space, I believe. Even if all data structures were only half as large (and that should be true only for integers and pointers, not for strings), you'd still need 5 GB, so you should run out of memory earlier. By all means, try it out. You'd be using a different build of File::RsyncP, so that might make a difference. It shouldn't, really, but it might.

> > But the solution is
> > probably [...]

Right. Probably. This has been discussed often enough that without specific details we might just as well say "search the archives".

> It would also be interesting to see how much memory a "naked" native
> rsync of the same server takes up...

Yes, that's simple enough to try. Does it complete? What is its maximum memory usage? How long does it take?

> [...] other than the needs of File::RsyncP, the
> BackupPC_dump process itself shouldn't be consuming memory that scales
> with number of files.

That is also my understanding. So far, we haven't really established that it is in fact the number of files and not something completely different going wrong (some sort of loop, perhaps, for whatever reason). The original post is a good example of ignoring the basics: we don't know anything about the setup except that it's BackupPC 3.2.1. We don't even have explicit confirmation that rsync is being used as the XferMethod. For all we know, he might have ported File::RsyncP to <some obscure architecture> and introduced bugs in the process.
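For the "naked" rsync test, one way to capture both the peak memory and the run time in a single pass is GNU time's verbose mode. A minimal sketch -- the host, share path, and rsync options here are placeholders; substitute the client and share from the failing backup and mirror its RsyncArgs as closely as possible:

```shell
# Placeholder host/path: replace root@clienthost:/data/ with the actual
# client and share from the failing BackupPC backup.
# GNU time's -v report includes "Maximum resident set size" (peak RSS)
# and "Elapsed (wall clock) time"; grep pulls out just those two lines.
/usr/bin/time -v rsync -a --dry-run --stats \
    root@clienthost:/data/ /tmp/rsync-memtest/ 2>&1 |
    grep -E 'Maximum resident set size|Elapsed \(wall clock\)'
```

Note that --dry-run still builds the complete file list, so it exercises the per-file memory scaling without actually transferring data.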
> The only potential memory scaling of BackupPC
> itself would be with the size of an individual file that needs to be
> decompressed/compressed/compared but even so it should be mostly
> buffered...

Completely buffered, I believe. There's the 1 MB buffer for determining the pool file name, and possibly several smaller buffers for chain comparisons. RStmp is on-disk for a reason, though I can't say I completely understand how rsync transfers work in BackupPC. Files larger than 1 MB should be fairly common, though ;-).

> So, I think it behooves you to figure out where these 10GB are being
> consumed...

Yep. Presuming they are actually being consumed, and you're not just reading the wrong column of your htop output.

Regards,
Holger

_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/