On Wed, Sep 17, 2003 at 10:12:14AM -0600, Rodolfo J. Paiz wrote:
> At 10:56 9/17/2003 -0500, you wrote:
> >This is true, but there is one problem in the disaster recovery universe
> >that it does not resolve: offsite storage. If your backups are routinely
> >kept in the same place as your servers, and you have a catastrophic event,
> >you will have lost your data forever.
> >
> >Now, if your NAS is on the other end of a fast fiber connection a couple of
> >miles down the road.... ;)
>
> As mentioned elsewhere, any media you use (and this argument is
> specifically about media) does not in itself resolve the off-site backup
> need. But you _can_ carry both tapes and hard drives in hotswap cages
> off-site, or you _can_ do the fast fiber thing, or whatever. This is part
> of the backup _strategy_ that needs solving but is, mostly, independent of
> media issues.
I'll step back in here since people seem to want to slam tape. I happen to
manage systems in a medium-size enterprise. One server alone has 3.5TB of
storage. For that server, we take weekly full backups and plan to keep (most
systems are already there, but this one isn't yet) weekly fulls with 4
generations, monthly fulls with probably 12 generations, and several years of
annuals. That's at least 15 copies of 3.5TB of data - roughly 52TB - and it's
being backed up over the network to a tape library in another building. If
you can show me how to keep 52TB of data offsite and quickly swap disk drives
every week without any application downtime, and do it all for less than the
cost of tape, I'll buy you a beer the next time you're in my neck of the
woods. Using Rodolfo's estimate of 1.3TB costing $6K, backing up just this
server to disk would cost about $240K. At 200GB per tape, it takes about 265
tapes; at $65 per tape, that's roughly $17K in tape costs. (A quick sketch of
that arithmetic is at the bottom of this message.)

We are not unusual in having multiple terabytes of storage in our data
center, nor even on a single system. We've got at least a half-dozen systems
with multiple terabytes of usable space. We also have a lot of smaller
systems that all share the same tape drives - in fact, the data is
multiplexed on tape to get better throughput.

Remember that I said tape is probably not required at home. There are other
alternatives for most people. Personally, I gave up on tape about 10 years
ago as my storage demands increased and I couldn't justify spending money on
larger drives to run unattended. Depending on your backup requirements, tape
may be justified. I don't take my personal backups offsite, and I don't keep
them in a fireproof box anymore (I used to with tape). I adjusted my
"requirements" to meet my financial abilities. If multiple generations of
offsite data are required, tape is typically a good choice.

I architect and implement highly available systems for a living. The problems
are quite a bit different from those of home or small-business systems. When
redundant controllers and RAID-5 aren't good enough, come see me :-). I have
learned from the list, however, how to better manage backups on the smaller
systems, since that's where most of you guys have the expertise.

--
Ed Wilts, Mounds View, MN, USA
mailto:[EMAIL PROTECTED]
Member #1, Red Hat Community Ambassador Program
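
For anyone who wants to check the numbers above, here's a rough
back-of-the-envelope sketch in Python. It's purely illustrative: the disk
figure just scales Rodolfo's $6K-per-1.3TB estimate linearly, and the tape
side assumes the $65 / 200GB-native cartridge pricing mentioned earlier.

    import math

    TB = 1000  # work in GB; 1TB treated as 1000GB for rough math

    # Figures from the message above; scaling the $6K-per-1.3TB disk
    # estimate linearly is an assumption, not a vendor quote.
    full_backup_gb = 3.5 * TB    # one full backup of the 3.5TB server
    generations = 15             # ~4 weekly + ~12 monthly + annuals
    retained_gb = full_backup_gb * generations   # ~52,500 GB, i.e. ~52.5TB

    # Disk option: scale $6K per 1.3TB -> roughly $240K
    disk_cost = retained_gb / (1.3 * TB) * 6000

    # Tape option: 200GB native per cartridge at $65 each -> roughly $17K
    tapes = math.ceil(retained_gb / 200)
    tape_cost = tapes * 65

    print(f"Retained data : {retained_gb / TB:.1f} TB")
    print(f"Disk media    : ${disk_cost:,.0f}")
    print(f"Tape media    : ${tape_cost:,.0f} ({tapes} cartridges)")

The exact cartridge count lands in the low 260s before compression and
multiplexing, which is why I quoted roughly 265 tapes above.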