----- Original Message -----
> From: "Ski Kacoroski" <[email protected]>
> To: [email protected]
> Sent: Thursday, October 18, 2012 9:15:48 PM
> Subject: Backup Options
> 
> Hi,
> 
> I could use some advice on backup options.  I have a 4yr old Data
> Domain that has worked perfectly, but it is totally filled (actually
> overfilled) and pricey to maintain.  It is located at the remote
> site connected to my primary site by fiber and I just NFS mount it
> to my backup server.  A full backup is around 23TB and my backup set
> of fulls and incrementals around 90TB.  My data growth has been
> around 20% a year, but if the school district decides to move to
> student portfolios, it will easily double and maybe triple in a few
> years.  I am not a 7x24 shop so for all my applications and
> databases, I just dump the files at night and back them up.  I
> generate about 600GB of long term archive data a year that goes to
> LTO3 tape.  The primary purpose is for disaster recovery although we
> do about 1-2 file restores a month.  90% of the data is on an EMC
> VNX that I back up via NDMP.  So far I am safely within my backup
> window, but that may change if I double or triple the data.  Options
> I am looking at are:
> 
> 1. Plain disk with an nfs server on it, no dedup.  This is definitely
> the least expensive option and can grow cheaply to handle my worst
> case data growth.
> 
> 2. Data Domain - very pricey as it is about 5x cost of option #1 for
> about the same logical capacity. At worst case data growth I will
> need another one or another forklift upgrade.
> 
> 3. Data Domain used -  does not come with software support, and about
> 1.5x cost of #1.  At worst case data growth I will need another one
> or another forklift upgrade.  I am concerned about lack of software
> support.
> 
> 4. A ZFS system with dedup.  About 2x the cost of #1, and from what I
> hear the dedup is not good for this application (backup streams are
> rarely block-aligned, so ZFS's fixed-block dedup finds few duplicate
> blocks), so I am assuming minimal dedup savings.  This can grow to
> handle worst case data growth.
> 
> 5. A 4-drive, 48-slot LTO5 library.  Same cost as #1, and by swapping
> tapes once a week or every other week I can handle worst case data
> growth.
> 
> 6. Exagrid - I suspect this will be the same cost as the Data
> Domain.
> Any other options I should be looking at?  What would you do in my
> case?
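
To put rough numbers on the options above, here is a quick sizing sketch.  The 
23TB full and the 20%/year growth rate come from the message; the 1.5TB native 
capacity per LTO5 tape is the published spec; the 3-year horizon and the 
"triple" worst case are just my guesses:

```python
# Back-of-the-envelope sizing for the backup-target options.
# Inputs from the message: 23 TB full backup, ~20%/year growth.
# Assumptions: 3-year horizon; worst case = triple; LTO5 = 1.5 TB native.

full_tb = 23            # current full backup, TB
growth_rate = 0.20      # observed annual growth
years = 3

# Steady growth vs. the student-portfolios worst case
steady = full_tb * (1 + growth_rate) ** years
worst = full_tb * 3
print(f"full in {years}y at 20%/yr: {steady:.0f} TB")
print(f"full in {years}y worst case: {worst:.0f} TB")

# Option 5: 48-slot LTO5 library, 1.5 TB native per tape
slots, tb_per_tape = 48, 1.5
print(f"one library load holds ~{slots * tb_per_tape:.0f} TB native")
```

So even the worst-case full fits in a single 48-slot load at native capacity, 
which is why swapping tapes weekly looks workable.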

Thanks to everyone that commented on this.  My choices seem to be narrowing 
down to 2 categories:

#1. NDMP to either a dedup device (Data Domain, etc.) or to ZFS with lots of 
disk (for data integrity).  This works nicely in that it backs up and restores 
the CIFS and NFS ACLs on my multi-protocol file systems.  My question is: will 
it scale if I end up with 100 or 200TB on the VNX?  I am assuming 10Gb 
connections from the VNX to the backup server to the disk target.  Is anyone 
using NDMP to back up this amount of data? 
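
A quick check of what that window looks like.  The 10Gb links are from my 
assumption above; the ~50% effective link utilization is purely a guess, since 
real NDMP throughput depends on the filer, the dump format, and file sizes:

```python
# Rough backup-window estimate for NDMP fulls over a 10 GbE path.
# Assumption: ~50% effective utilization of the link (a guess).

link_gbps = 10
efficiency = 0.5
gb_per_hour = link_gbps / 8 * efficiency * 3600   # GB/s -> GB/hour

for tb in (23, 100, 200):
    hours = tb * 1000 / gb_per_hour
    print(f"{tb:>3} TB full at ~{gb_per_hour / 1000:.2f} TB/h: {hours:.0f} h")
```

At those rates a 100TB full runs close to two days, which is why I am worried 
about the window.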

#2. Use a continuous incremental approach (Commvault, TiBS, etc.) where I only 
back up the changes each day.  This solves the possible scaling problem, but 
this approach backs up via an NFS or CIFS share, which means it only sees the 
ACLs of the protocol used to access the share.  Does anyone use an approach 
like this, and if so, what do you do about multi-protocol file systems?
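
For comparison, here is why incremental-forever changes the scaling picture.  
The 2% daily change rate is purely an assumption (I have not measured mine); 
the 100TB figure is the growth scenario from #1:

```python
# Weekly data moved: one periodic full vs. seven daily incrementals.
# Assumption: ~2% of the data changes per day (not measured).

total_tb = 100          # projected VNX size, TB
daily_change = 0.02     # assumed daily change rate

weekly_full = total_tb
weekly_incr = total_tb * daily_change * 7
print(f"weekly full:        {weekly_full:.0f} TB")
print(f"7 daily increments: {weekly_incr:.0f} TB")
```

Even at a generous change rate, a week of incrementals moves a small fraction 
of what a full does, so the backup window stops being the constraint.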

cheers,

ski
_______________________________________________
Discuss mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/discuss
This list provided by the League of Professional System Administrators
 http://lopsa.org/
