Sadly this one is a bit of a non-starter.

The client really only wants to use CentOS/RHEL, and ZFS is not part of that mix at the moment.

The data is actually sitting on a replicated Gluster cluster, so replacing that with an HA NAS would start to get expensive if it were a commercial product.

On 5/4/20 11:27 AM, John Sellens wrote:
I bet no one will want this advice, but it seems to me that the
implementation needs to change, i.e. that one big (possibly shallow)
filesystem on XFS is unworkable.

The best answer of course depends on the value of the data.

One obvious approach is to use a filesystem/NAS with off-site replication,
typically a commercial product.

At relatively modest cost, I like the TrueNAS systems from iXsystems (ixsystems.com).
They are ZFS-based, HA versions are available, and replication can be done.
The HA versions are two servers in one chassis, with dual-ported SAS disks.

For do-it-yourselfers, I like using ZFS and ZFS replication of snapshots.
For example, I do much (much) smaller off-site replication from my home to
work using ZFS and zfs-replicate.
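Tools like zfs-replicate wrap the standard snapshot-and-send pattern; a minimal hand-rolled sketch of that pattern follows. The dataset names (tank/data, backup/data), the remote host (offsite), and the previous snapshot name (@prev) are all assumptions to be adjusted for a real setup:

```shell
# Assumed names: local dataset tank/data, remote host "offsite",
# remote dataset backup/data, earlier common snapshot @prev.
SRC=tank/data
SNAP="$SRC@$(date +%Y%m%d)"

# Take a read-only, point-in-time snapshot of the source dataset.
zfs snapshot "$SNAP"

# Stream only the delta since the previous snapshot over ssh.
# -i sends an incremental stream; -F on the receiving side rolls the
# target back to the most recent common snapshot before applying it.
zfs send -i "$SRC@prev" "$SNAP" | ssh offsite zfs receive -F backup/data
```

A replication tool adds the bookkeeping this sketch omits: rotating snapshots, pruning old ones, and tracking which snapshot the two sides last had in common.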

You can also run FreeNAS (the non-commercial TrueNAS), but without the HA
hardware and code.

Hope that helps - cheers

John


On Mon, 2020/05/04 09:55:51AM -0400, Alvin Starr via talk <talk@gtalug.org> 
wrote:
| The actual data-size for 100M files is on the order of 15TB so there is a
| lot of data to backup but the data only increases on the order of tens to
| hundreds of MB a day.

--
Alvin Starr                   ||   land:  (647)478-6285
Netvel Inc.                   ||   Cell:  (416)806-0133
al...@netvel.net              ||
