Strange. My rsync was definitely using swap. I only have 3GB of RAM in that
server, and each rsync process (both the client and the server) would climb
above 3GB of memory usage. I know this wasn't shared memory, since my swap
partition was well in use (around 3.5GB; I can't remember the exact number,
but quite a few other processes were taking up a lot of virtual memory
without having much resident memory).
From what I understand, swap (as most people refer to it, aka paged memory)
is managed by the operating system, not the application. In fact, to keep
memory from being swapped out to disk, you have to go out of your way and
lock it. GnuPG does this to prevent people from stealing keys out of the
swap file. There are some applications (I encounter quite a few on the
cluster I run) that manage their own swap file manually (they had to code
for it explicitly, treating it more like a database than memory), but that
is generally because they were written to run with large data sets in a
32-bit environment. I think that is the functionality you, and the
developers on the mailing list, are referring to. It is pretty nasty to
code for, and I can understand why the rsync developers wouldn't want to do
it when the 64-bit address space is right around the corner.
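Since swapping is the kernel's decision, you can check which processes are
actually paged out rather than guessing from `top`. A quick sketch (it
assumes a Linux kernel new enough to expose the VmSwap field in
/proc/PID/status):

```shell
#!/bin/sh
# List per-process swap usage, largest first, by reading VmSwap from /proc.
# Assumes a Linux kernel that exposes the VmSwap field in /proc/PID/status.
for status in /proc/[0-9]*/status; do
    # Pull the process name and its swapped-out size in kB, if any.
    name=$(awk '/^Name:/ {print $2}' "$status" 2>/dev/null)
    swap=$(awk '/^VmSwap:/ {print $2}' "$status" 2>/dev/null)
    [ -n "$swap" ] && [ "$swap" -gt 0 ] && echo "$swap kB  $name"
done | sort -rn
```

On a box with a busy swap partition this would have shown whether the rsync
processes themselves were paged out or only their competitors were.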
Anyway, not that it matters. I'm using the method you suggested earlier of
rsyncing each directory separately.
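For the record, the per-directory approach looks roughly like this (a
sketch only; the source and destination paths and the `nas2` host name are
placeholders for your own setup):

```shell
#!/bin/sh
# Transfer each host's backups separately so rsync's in-RAM file list
# stays small. SRC, DEST, and the nas2 host name are placeholders.
SRC=/var/lib/backuppc/pc
DEST=nas2:/var/lib/backuppc/pc
for dir in "$SRC"/*/; do
    host=$(basename "$dir")
    # -a preserves permissions and times; -H only preserves hardlinks
    # *within* this subtree - links into the shared pool cannot survive
    # a per-directory transfer and must be rebuilt afterwards.
    rsync -aH "$dir" "$DEST/$host/"
done
```

The trade-off, as discussed below, is that the pool hardlinks are lost and
the copies must be re-linked on the destination.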
Thanks for your help.
On Sat, Mar 29, 2008 at 11:48 PM, dan <[EMAIL PROTECTED]> wrote:
> rsync won't use swap, it will only use RAM. This can be found on Google
> or the mailing lists. The rsync developers say that it would take nothing
> short of a rewrite to get rsync to use an on-disk log rather than an
> in-RAM log of the transfer files, so a qemu VM with a bunch of swap is
> not going to do you any good; you need real RAM. 35M files x ~100 bytes
> per file for rsync is 3.26GB of RAM needed. The initial scan from rsync
> will take ages, and a restart will take ages if the network link hiccups.
> You really need to cut down on the amount being transferred at any one
> time, so you basically need to transfer each of the subdirectories of
> pc/ separately. Also, you cannot transfer the cpool/pool directories
> effectively unless you can rsync the whole $TopDir. The hardlinks won't
> resolve if you do the transfers separately, so you will end up
> transferring the file, not just the hardlink. Also, the files in the pc/
> directory won't be hardlinks to the identical copies in pool/cpool, so
> you will eat up at LEAST twice the disk space.
>
> best solution is to transfer each pc subdirectory separately, then
> re-link all the files, essentially rebuilding the pool/cpool directories.
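> One way to rebuild the links is sketched below. This is a hedged
> illustration, not BackupPC's own tooling (BackupPC uses its own pool
> hashing scheme; this version hashes full file contents with md5sum,
> which is slow and assumes the pc/ copies are byte-identical to their
> pool counterparts). TOPDIR is a placeholder.

```shell
#!/bin/sh
# Re-link identical files by content hash: a file under pc/ whose MD5
# matches a pool file is replaced with a hardlink to the pool copy.
# TOPDIR is a placeholder; BackupPC's real pool hashing differs from this.
TOPDIR=/var/lib/backuppc

# Index the pool by content hash (breaks on paths with spaces - sketch only).
find "$TOPDIR/pool" -type f | while read -r pool_file; do
    echo "$(md5sum < "$pool_file" | cut -d' ' -f1) $pool_file"
done > /tmp/pool.md5

# Replace each matching pc/ file with a hardlink into the pool.
find "$TOPDIR/pc" -type f | while read -r pc_file; do
    sum=$(md5sum < "$pc_file" | cut -d' ' -f1)
    pool_file=$(awk -v s="$sum" '$1 == s {print $2; exit}' /tmp/pool.md5)
    [ -n "$pool_file" ] && ln -f "$pool_file" "$pc_file"
done
```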
>
>
> On Sat, Mar 29, 2008 at 5:32 AM, Tino Schwarze <[EMAIL PROTECTED]>
> wrote:
>
> > Hi James,
> >
> > On Fri, Mar 28, 2008 at 01:51:43PM -0500, James Harr wrote:
> >
> > > I need to move the archive for backuppc from one NAS to another.
> > > We're dealing with around 800GB of data (35 million files) over a
> > > 100 Mbit connection (for the time being, will be 1Gbit in the
> > > future), so the move will take more than one day.
> > >
> > > - dump & restore is out of the question since each NAS is basically
> > >   a sealed box.
> > > - I'd like to avoid just 'cp'ing it across since it will take a long
> > >   time and I'd like to maintain our backup schedule while we
> > >   transfer.
> > > - rsync -aH pukes because it's running out of address space (the 35
> > >   million files are hard to keep track of).
> > > - rsync -a will work, but it doesn't do the hardlinks.
> >
> > I had a similar situation last year (you may want to look up my posts in
> > the archive).
> >
> > IIRC I managed to copy the whole pool (about 500 GB) via the following
> > steps:
> > 1. rsync -a only the pool and cpool
> > 2. run BackupPC_tarPCCopy for each pc, transferring its output to the
> >    destination machine and untarring it there (I did that via netcat).
> >
> > Altogether this took over a week, but I suppose this is a performance
> > issue with the RAID/LVM/FS combination I'm using.
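> > A rough sketch of step 2 follows. The paths, the `destination-nas`
> > host name, and the port are placeholders, and the exact
> > BackupPC_tarPCCopy invocation may differ by version - check the
> > BackupPC documentation before relying on it.

```shell
#!/bin/sh
# Step 2 sketch: stream BackupPC_tarPCCopy output over netcat.
# Paths, the destination host name, and the port are placeholders.

# On the destination NAS, from inside the new pc/ directory, run a
# listener first (tar -P because the archive may carry absolute paths):
#   cd /var/lib/backuppc/pc && nc -l -p 9000 | tar xPf -

# On the source machine, send each pc subdirectory in turn:
for host_dir in /var/lib/backuppc/pc/*/; do
    BackupPC_tarPCCopy "$host_dir" | nc destination-nas 9000
done
```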
> >
> > > The last part got me to thinking -- if I could manually run the
> > > backuppc linker, I could rsync the backups and just have the linker
> > > take care of the rest. Does anyone know how to run the linker like
> > > this? If there isn't a convenient way right now, could I bug one of
> > > the developers to add a --force flag to it so it'd go through and
> > > scan all the files?
> > >
> > > I suppose one other option is to build a 64 bit virtual machine with
> > > qemu, give it a large amount of swap, and then use that to do the
> > > rsync -aH :P
> >
> > We're talking about tens of GB of rsync memory here. So you'd probably
> > wait months for that to complete. :-|
> >
> > HTH,
> >
> > Tino.
> >
> > --
> > www.craniosacralzentrum.de
> > www.spiritualdesign-chemnitz.de
> > www.forteego.de
> >
> > Tino Schwarze * Lortzingstraße 21 * 09119 Chemnitz
> >
> >
> > -------------------------------------------------------------------------
> > Check out the new SourceForge.net Marketplace.
> > It's the best place to buy or sell services for
> > just about anything Open Source.
> >
> > http://ad.doubleclick.net/clk;164216239;13503038;w?http://sf.net/marketplace
> > _______________________________________________
> > BackupPC-users mailing list
> > [email protected]
> > List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
> > Wiki: http://backuppc.wiki.sourceforge.net
> > Project: http://backuppc.sourceforge.net/
> >
>
>