Hi Michael,

I find your report useful and interesting. However, my experience is a bit 
different from yours.

On Sat, 9 Jan 2016, Michael wrote:

> Hello,
> 
> I've been testing BackupPC 4.0.0alpha3 for 1 year now, for backing up 12
> home machines, and to be honest, I'm quite unhappy with it.
> To my opinion, it is completely unreliable, you have to regularly check
> whether backups are done correctly, and most of the time you can't do a
> backup without at least an error. And it's awfully slow. The big
> advantage of BPC (besides being free and open-source of course) is to
> manage backup of multiple machines in a single pool, hence saving space.

I've been using v4 in production for almost 10 months, and I disagree. I have 
found v4 to be very stable and useful. I have two v4 servers (along with at 
least 3 older v3 servers), and the largest v4 install backs up 96 hosts:

  - 534 full backups of total size 3495.74GiB (prior to pooling and
    compression),
  - 2563 incr backups of total size 18496.62GiB (prior to pooling and
    compression).

I find v4's speed to be better than v3's, and I don't see any more errors than 
I did with v3 regarding xfers, bad files, etc. In fact, I can find only one 
such instance in my logs, and that's from backing up an open file. With either 
v3 or v4, if you try to back up the wrong files you'll encounter lots of pain 
(and errors). Are you excluding special files? The exact list will vary 
somewhat with your clients' distro and your site policy.

For Ubuntu at my site, I currently exclude /proc, /sys, /tmp, /var/tmp, 
/var/cache/openafs, /var/cache/apt/archives, /var/log/lastlog, 
/var/lib/mlocate, /var/spool/torque/spool, /home, /afs, /scratch*, 
/not_backed_up*, /vicep*, /srv*, /spare*, /media, /mnt*

For select hosts, /home is backed up separately, using ClientNameAlias.
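
In config.pl terms, that looks roughly like the sketch below. This is not my 
exact config; the '*' key and the per-host layout assume an rsync/rsyncd xfer, 
and the hostname is made up:

  # Site-wide excludes; the '*' key applies the list to every share.
  $Conf{BackupFilesExclude} = {
      '*' => [
          '/proc', '/sys', '/tmp', '/var/tmp',
          '/var/cache/openafs', '/var/cache/apt/archives',
          '/var/log/lastlog', '/var/lib/mlocate',
          '/var/spool/torque/spool',
          '/home', '/afs', '/scratch*', '/not_backed_up*',
          '/vicep*', '/srv*', '/spare*', '/media', '/mnt*',
      ],
  };

  # For a host whose /home is backed up as its own BackupPC "host",
  # something like this goes in that host's per-host config file
  # (the hostname is hypothetical):
  $Conf{ClientNameAlias} = 'realhost.example.com';
  $Conf{RsyncShareName}  = ['/home'];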

My speeds vary from over 100MiB/s (when backing up a few new sparse files) to 
0.51MiB/s (a tiny incremental where it took a while to determine there was 
nothing to do). Average for a 3GB full seems to be about 9MiB/s.

Note that I'm not saying BPC v4 doesn't have bugs. I've found a couple of them 
and reported one - with a possible solution - to the -devel list. But any new 
software is likely to have bugs, and that is reflected in the fact that v4 is 
still alpha. You're supposed to be *testing* v4 at this point. I'm using it in 
production because I think its pros outweigh its cons and I'm willing to hack 
the code (or otherwise suffer the consequences) when I encounter bugs.

> My current backup pool is ~ 12 machine. 11 on Linux and 1 windows
> machine. My backup machine is a 3TB Lacie-Cloudbox, with 256 MB memory.
> Some of you might say that 256 MB is not enough. Actually I've even seen
> posts on the net saying that you would need a server with several GB
> RAM. This is just insane. A typical PC in my pool has ~600k files.
> Representing each of them with a 256-bit hash, that's basically 20MB of
> data to manage for each backup. Of course you need some metadata, etc,
> but I see no reason why you need GB of memory to manage that.

You probably don't want to hear it, but the Cloudbox is probably your 
bottleneck. A colleague once experimented with BackupPC on a Synology Disk 
Station. It worked, but it was quite slow. We eventually turned the Synology 
into an iSCSI target hanging off a commodity PC with 2GB of RAM and the speed 
increased a lot (unfortunately I didn't benchmark it, as I knew separating the 
CPU from the storage was a clear win and didn't hesitate to do so).

My current server for the 96 hosts is a recycled Dell PE 1950 (8 cores, 16 GB 
RAM) backing up to a Dell PE 2950 (4 cores, 8GB RAM) via 4Gbit FC. BPC runs on 
the 1950; the 2950 is just a storage server. Clients are mostly on 1Gbit 
ethernet. Server and clients are Ubuntu.

> If I would participate to the development of BPC, I would make more
> changes to the architecture. I think that the changes from 3.0 to 4.0
> are very promising, but not enough. The first thing to do is to trash
> rsync/rsyncd and use a client-side sync mechanism (like unison).

I think an *optional* client-side sync mechanism (like unison), implemented as 
an additional xfer option, is interesting, especially if an end-user can 
manually initiate a backup or restore via a client interface (a la CrashPlan, 
but hopefully without the Java dependency). However, I'm bothered by the 
recommendation to "trash rsync/rsyncd". There's *zero* reason to eliminate 
those xfer methods, and I think doing so would immediately make any fork much 
less likely to succeed.
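
For what it's worth, the xfer method is already a per-host setting, so a new 
client-side method could sit alongside the existing ones rather than replace 
them. A rough sketch - 'unison' is purely hypothetical and not a real 
XferMethod today:

  # Global default in config.pl ('rsync' is one of the existing methods):
  $Conf{XferMethod} = 'rsync';

  # Per-host override in that host's config file, if such a method were
  # ever added (hypothetical value):
  $Conf{XferMethod} = 'unison';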

> Then
> throw away all Perl code and rewrite in C.

This is the direction v4 is headed, but I think the use of C should be 
judicious. There's little reason for some parts of the code (the CGI 
interface, etc.) to be in C.

Just my two cents.

Cheers,
Stephen
