Use cpio rather than rsync to copy the systems across.  It will preserve
hard links (you should be able to use cpio first, then rsync to update it).

i.e.:

cd /vservers
find . -depth -print0 | cpio -0o -H crc | ssh -C host "cd /vservers && cpio -idmuv"

Good luck,
Sam.

On Friday 22 November 2002 19:24, Cathy Sarisky wrote:
> These directions work great, thanks for sharing them!  The ability to move
> a vserver so easily is wonderful.
>
> I have just one question/comment:  Moving a group of vservers with rsync
> doesn't preserve file unification, so rsyncing a handful of vservers takes
> a LONG time and consumes a lot of disk space on the destination server, at
> least until a vunify run.  (I had a 500MB unified vserver that
> required 2.5GB disk space after moving, for example.)
>
> Any thoughts (or scripts to share) anyone about backing up vservers more
> efficiently?
>
> TIA,
> Cathy
>
> ---------- Original Message ----------------------------------
> From: "Geoffrey D. Bennett" <[EMAIL PROTECTED]>
> Date: Tue, 5 Nov 2002 11:23:17 +1030
>
> >You'll also want --devices, --group, and --owner, but --archive (or
> >-a) is far less typing than "--recursive --times --perms --links
> >--devices --group --owner".  You might also want --hard-links.
> >
> >I have 'export RSYNC_RSH=ssh' in my profile, and use this form all the
> >time:
> >
> >rsync -vazP /vservers/0001/ machine-b:/vservers/0001
> >
> >BTW (for anyone who's interested), I did my first vserver move from
> >one machine to another the other week, and it went very nicely.  To
> >minimise downtime, I did things like this:
> >
> >- an rsync before stopping any services to copy the bulk of the data
> >  (this took a while)
> >
> >- an rsync after stopping httpd, postgresql, cron, etc. and just
> >  leaving the most important authentication/accounting service running
> >  (this took about 10 minutes, mostly due to the postgresql data files
> >  which had changed)
> >
> >- an rsync after stopping the vserver (this didn't take long at all
> >  since there were only a few logs changed)
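The three-pass approach above can be sketched roughly as follows. The vserver name 0001, the target machine-b, and the exact service names are all illustrative, and the `vserver ... exec`/`vserver ... stop` invocations assume the util-vserver tools:

```shell
# Rough sketch of a low-downtime vserver move; names are illustrative.
# Pass 1: bulk copy while the vserver is still fully running (slowest pass).
rsync -vazP --hard-links /vservers/0001/ machine-b:/vservers/0001

# Pass 2: stop the heavy writers, leave only the critical service up,
# then resync the (now much smaller) delta.
vserver 0001 exec /etc/init.d/httpd stop
vserver 0001 exec /etc/init.d/postgresql stop
vserver 0001 exec /etc/init.d/crond stop
rsync -vazP --hard-links /vservers/0001/ machine-b:/vservers/0001

# Pass 3: stop the vserver entirely and do the final, near-instant resync
# (only a few logs should have changed).
vserver 0001 stop
rsync -vazP --hard-links /vservers/0001/ machine-b:/vservers/0001
```

Each pass only transfers what changed since the previous one, so the window where services are actually down shrinks to the last, fast pass.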
