Infra is not really a concern for me; it's more a question of whether the software will handle it.

Network connectivity and IOPS aren't a huge concern (the machines will be
specced to be "beasts", as will the network).

De-dupe is not a massive want for me. 99.9% of the files we will be backing
up are 100% unique. I'd rather spend the time shuffling data than de-duping
it.

For me it's more a question of whether BPC is even a remotely viable
solution. The other studio uses NetBackup, and to be honest I'd rather sit
on a razor than use it, so I'm out fact finding.

I've used BPC v3, but only to back up /etc off servers, never to handle PBs
of data.

On 8 February 2017 at 19:23, Adam Goryachev <
mailingli...@websitemanagers.com.au> wrote:

> It would be interesting to hear if anyone is using it at that scale...
>
> I expect that if you were going to try that, then you should use BPC v4
> (because it has the ability to avoid transferring a file if another
> client has already backed it up) and assuming you are going to use rsync
> everywhere.
>
> I would also think that you will need to use multiple servers, otherwise
> you will spend a huge amount on getting enough CPU performance in the
> machine (all that encryption, compression, etc takes CPU), and RAM
> (storing lots of data and cache in RAM) and also the massive IOPS
> required. Consider the number of clients: you say 100s, so assume 300.
> You need a backup every day, and assume each backup takes 45 minutes.
> You need a concurrency of at least 30 to complete all the backups in
> under 8 hours (assuming an 8 hour backup window). OTOH, you could have 6
> servers each handling 5 concurrent backups, and hence have 50 clients on
> each server. This loses some of the de-dupe, but it makes everything a
> lot easier.
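The concurrency figure in that paragraph can be sanity-checked with a quick back-of-the-envelope script (all the numbers here are the assumed ones from the quote: 300 clients, 45-minute backups, an 8-hour window):

```python
import math

# Assumed figures from the quoted message.
clients = 300
backup_minutes = 45
window_minutes = 8 * 60

# One concurrent "slot" can run this many sequential backups per window.
backups_per_slot = window_minutes // backup_minutes  # 480 // 45 = 10

# Slots (simultaneous backups) needed to fit every client into the window.
concurrency = math.ceil(clients / backups_per_slot)

print(concurrency)  # 30
```

So "a concurrency of at least 30" checks out, and 6 servers at 5 concurrent backups each gives exactly that.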
>
> Personally, you could extend this further (depending on budget, risk,
> etc) and have 12 servers, such that each client is backed up by two
> different servers. A failure of one server or its underlying storage/etc
> will mean you do not lose all the backups.
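For what it's worth, a naive way to sketch that two-server redundancy (purely hypothetical assignment scheme, not anything BPC does for you) is to give each client a primary and an offset secondary server:

```python
# Hypothetical sketch: spread 300 clients across 12 servers so that each
# client is backed up by two *different* servers. Names are made up.
clients = [f"client{i:03d}" for i in range(300)]
num_servers = 12

assignments = {s: [] for s in range(num_servers)}
for i, client in enumerate(clients):
    primary = i % num_servers
    # Offset by half the ring so the secondary is always a different server.
    secondary = (i + num_servers // 2) % num_servers
    assignments[primary].append(client)
    assignments[secondary].append(client)

# Each server ends up with 2 * 300 / 12 = 50 clients, same load as the
# 6-server layout, but losing one server loses no client's only copy.
print({s: len(v) for s, v in assignments.items()})
```

That keeps the per-server load at 50 clients, the same as the 6-server layout, while tolerating any single server loss.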
>
> Someone (on the list earlier) suggested that it would be possible to
> have BPC v4 use an object storage backend, so as long as that backend
> could perform fast enough, then it may be one way to get a larger "farm"
> of BPC servers but still keep de-dupe across all of them.
>
_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/