> From: [email protected] [mailto:discuss-
> [email protected]] On Behalf Of Ski Kacoroski
> 
> Thanks to everyone who commented on this.  My choices seem to be
> narrowing down to 2 categories:
> 
> #1. NDMP to either a dedup device (Data Domain, etc.) or to ZFS with lots
> of disk (for data integrity).  This works nicely in that it backs up and
> restores the CIFS and NFS ACLs on my multi-protocol file systems.  My
> question is: will it scale if I end up with 100 or 200 TB on the VNX?  I
> am assuming 10 Gb connections from the VNX to the backup server to the
> disk target.  Is anyone using NDMP to back up this amount of data?
> 
> #2. Use a continuous incremental approach (Commvault, TiBS, etc.) where I
> only back up the changes each day.  This solves the possible scaling
> problem, but this approach backs up via an NFS or CIFS share, which means
> it only sees the ACLs of the protocol used to access the share.  Does
> anyone use an approach like this, and if so, what do you do about
> multi-protocol file systems?
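On the scaling question in #1, a quick back-of-envelope on the 10 Gb link is worth doing before anything else.  The 200 TB figure comes from the question above; the 70% effective-utilization number is purely an assumption for illustration:

```python
# Back-of-envelope: how long does a full backup of the VNX take over a
# 10 Gb/s link?  The 70% efficiency figure is an assumption, not a
# measurement -- real NDMP throughput depends on the filer and target.

data_tb = 200                      # total data on the VNX, in terabytes
link_gbps = 10                     # link speed, gigabits per second
efficiency = 0.7                   # assumed fraction of line rate achieved

data_bits = data_tb * 1e12 * 8     # terabytes -> bits
effective_bps = link_gbps * 1e9 * efficiency
hours = data_bits / effective_bps / 3600

print(f"Full backup of {data_tb} TB at {efficiency:.0%} of "
      f"{link_gbps} Gb/s: about {hours:.0f} hours")   # roughly 63 hours
```

So even at a healthy fraction of line rate, a full pass over 200 TB is a multi-day job; the link only matters for fulls, and incrementals are bounded by the filesystem walk instead (see below in the thread).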
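On the ACL problem in #2, one workaround (a sketch only -- the mount point and share name are made up for illustration) is to run the share-based backup as planned, but separately dump each protocol's view of the ACLs into restorable text files:

```shell
#!/bin/sh
# Sketch: capture both ACL views alongside a share-based backup, since
# each protocol only exposes its own ACLs.  /mnt/nfs/projects and
# \\vnx\projects are assumed names, not real paths.
DRY_RUN=1    # set to 0 to run the commands for real
run() {
  if [ "$DRY_RUN" -eq 1 ]; then echo "WOULD RUN: $*"; else "$@"; fi
}

# NFS side: recursively dump POSIX ACLs (with -p to keep absolute paths)
# into a text file that `setfacl --restore` can replay after a restore.
run getfacl -R -p /mnt/nfs/projects

# CIFS side: NTFS ACLs are only visible over SMB, so save them from a
# Windows client with icacls (restorable later with `icacls /restore`).
echo 'Windows client: icacls \\vnx\projects /save vnx-acls.txt /t'
```

This doesn't make the share-based backup multi-protocol-aware; it just keeps enough ACL state on the side that a restore can be patched up afterwards.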

My experience with scalability and backups suggests that the time to back up is 
the problem to focus on, rather than how you'll get enough storage.  Even with 
a NetApp backing up via NDMP, the filer has to walk the entire filesystem 
searching for files that have changed since the last backup, and if you have a 
lot of files, that takes a long time.  On one such system, a modest 4 TB 
filer, the nightly incrementals had grown to 10-12 hours per night by the 
time we were able to phase it out in favor of ZFS.
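The walk cost described above can be made concrete with a toy sketch (illustrative only, not any backup product's actual code):

```python
# Illustration of why file-based incrementals scale with file count:
# even when almost nothing has changed, the backup must stat() every
# file to discover that.
import os
import time

def changed_since(root, since_epoch):
    """Walk the whole tree and yield files modified after `since_epoch`.

    The walk visits every directory and stats every file, so its cost
    grows with total file count, not with the amount of changed data --
    which is why nightly incrementals on a file-heavy filer take hours.
    """
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_mtime > since_epoch:
                    yield path
            except OSError:
                continue  # file vanished or became unreadable mid-walk

# Example: count files changed in the last 24 hours under the current dir.
yesterday = time.time() - 86400
changed = sum(1 for _ in changed_since(".", yesterday))
```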

IMHO, you need the ability to take instant block-level incremental 
snapshots.  ZFS does this.  NetApp does too, if you use SnapMirror (extra 
licensing).  And various other vendors have larger, more expensive enterprise 
solutions as well.
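As a concrete sketch of what block-level incrementals buy you, here is roughly what the ZFS version looks like; the dataset name (tank/home), snapshot names, and backup host are assumptions for illustration:

```shell
#!/bin/sh
# Sketch of a block-level incremental backup with ZFS snapshots.
# tank/home, the snapshot names, and backuphost are made-up examples.
DRY_RUN=1    # set to 0 to run the commands for real
run() {
  if [ "$DRY_RUN" -eq 1 ]; then echo "WOULD RUN: $*"; else "$@"; fi
}

TODAY=$(date +%Y%m%d)

# 1. A snapshot is copy-on-write metadata, so it completes near-instantly
#    no matter how many files the dataset holds.
run zfs snapshot tank/home@"$TODAY"

# 2. `zfs send -i` streams only the blocks that changed between the two
#    snapshots -- no filesystem walk at all.  In practice you'd pipe it
#    to `zfs receive` on the backup host, e.g.:
#      zfs send -i tank/home@yesterday tank/home@$TODAY | \
#        ssh backuphost zfs receive backup/home
run zfs send -i tank/home@yesterday tank/home@"$TODAY"
```

The key property is that both steps cost time proportional to changed blocks, not total file count, which is exactly the scaling problem the walk-based incrementals hit.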

_______________________________________________
Discuss mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/discuss
This list provided by the League of Professional System Administrators
 http://lopsa.org/
