On Sun, 31 Jul 2005, Archaic wrote:

> On Sun, Jul 31, 2005 at 01:07:21PM +0100, Ken Moffat wrote:
> >
> >  Thanks for that interesting comment.  I can see it could happen to all
> > of the backups (first I rsync to the other machine, then I roll down a
> > series of backups and rsync the "staging" copy to the newest of the
> > series).  But surely rsync wouldn't alter the original tarball?
>
> It depends. Does it touch the original tarball? That is, is it
> responsible for moving the original to its location?

 Yes, rsync on the client (the old server) copies it to the nfs mount
which is the staging area on the new server.  A separate rsync on the
new server then processes that to the backup area, which also gets
exported read-only.
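
 Roughly like this - the paths are invented, but the shape is right:

  # stage 1, on the client (old server): push into the NFS-mounted
  # staging area
  rsync -a --delete /srv/data/ /mnt/staging/oldserver/

  # stage 2, on the new server: roll the staging copy into the backup
  # area, which is what gets exported read-only
  rsync -a --delete /export/staging/oldserver/ \
        /export/backup/oldserver/current/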

 Funky, not secure in a hostile environment (hosts are trusted to write
to the correct staging directory, i.e. the one which matches who they
really are), and uses about 2.2 to 2.5 times the space of the data,
depending on how much is volatile.  In my defence, the only physical user
is me, and I have a number of different boxes, some with more than one
system (current, experimental, and old systems that I might use
occasionally).  The design assumes any particular client "system" (the
combination of the box and the partition) may be powered off for
several weeks, and will initiate the backups when it chooses.  It may
even be powered down while it is trying to back up - the server only
processes data from clients after they mark it as ready.  The
share-to-all part is because the clients are moving to dhcp and at the
moment they aren't routable.
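
 The "ready" handshake is nothing clever - roughly this, again with
made-up paths:

  # on the client, after its rsync finishes cleanly:
  touch /mnt/staging/oldserver/READY

  # on the server, from cron - only process staging trees that are
  # flagged, then clear the flag:
  for d in /export/staging/*; do
      [ -f "$d/READY" ] || continue
      rsync -a --delete "$d/" "/export/backup/$(basename "$d")/current/" \
          && rm "$d/READY"
  done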

>
> >  How long ago did you give up on rsync ?
>
> The first time was about a year ago. Then I tried again 6 months ago.
> After a few days of running tests on binaries (tarballs, etc.), the
> md5 sums had changed again. Every single binary that was updated on the
> server had a non-matching md5 on the client. I didn't note how many of
> the non-updated binaries failed (that is, the ones that were initially
> rsynced and then never changed), but at least some of them did. I gave
> up for the final time and decided rsync's delta-transfer algorithm just
> doesn't produce identical binary copies, even if the altered file is in
> all practical respects the same and valid.
>

 I guess I'll have to try md5s on the clients and the backups (obviously
not while backups are being processed) to see how they look.  So far
I've only noticed the problems from the old server, but that is where
all significant downloads get stored.
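
 Probably something like this on each client, with the list then run
against the matching backup tree on the server (paths invented again):

  # on the client: checksum everything, relative paths so the two
  # trees compare cleanly
  cd /srv/data && find . -type f -print0 | xargs -0 md5sum > client.md5

  # on the server, after copying client.md5 over: show only failures
  cd /export/backup/oldserver/current && \
      md5sum -c client.md5 | grep -v ': OK$'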

> >  I just write a series of uncompressed tarballs, created from tarring up
> > filesystems.  Now that I've got copies of everything on the second
> > server, it was easiest to create the tarballs there. Then I mounted them
> > over nfs and copied them to the old server ready to write to tape from
> > the staging directory.  After the problems, I decided it was best to
> > take md5sums after copying: two differed (out of about 10 copied).
> > Repeated, this time copying with scp: one of the two was OK, but the
> > other had to be copied twice before I got the correct md5.
>
> That is bad. That sounds like a hardware problem. I would generally
> point to UDP protocol limitations with failures in an NFS transfer, but
> with scp it is hard to blame the protocol. Unless, of course, you have a
> flaky TCP/IP stack on one of the boxes.
>

 NFS over TCP here, my friend.  The old server is running LFS-4.1-ish,
with a somewhat newer 2.4 kernel, nfs-utils from a year or two ago, and
an up-to-date rsync.  Everything else in current use is on recent 2.6
kernels; the new server is 2.6.11.something.  But I may be installing
5.{0,1} desktops to test package updates, if I can motivate myself.
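
 For what it's worth, forcing tcp is just a mount option; the fstab
line looks something like this (host and paths invented):

  newserver:/export/staging  /mnt/staging  nfs  tcp,hard,intr  0  0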

 I guess my priorities have to be sorting out my pure64 builds and
getting another LVD cable so that I can migrate to the new server sooner
rather than later.

 Thanks for the response.

Ken
-- 
 the first time as tragedy, the second time as farce
