With the new HP Ultrium tape drives, you can get a 200 GB/hr transfer rate. I kind of hate tapes (just like everybody else), but they have really improved in the past few years. The drives are under $6k and can back up 1-2 TB overnight without much trouble. With a library (MSL6060), you can have 4 drives and 60 tapes for about 12 TB of backup capacity.
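For anyone who wants to sanity-check the window, here is the back-of-the-envelope arithmetic in Python. The 200 GB/hr per-drive rate and linear scaling across the library's drives are assumptions for illustration, not measured numbers:

# Rough tape backup window estimate.
# Assumed: 200 GB/hr sustained per drive, perfect scaling across drives.

def backup_hours(data_gb, drives=1, gb_per_hour_per_drive=200):
    """Hours needed to stream data_gb to tape with the given drive count."""
    return data_gb / (drives * gb_per_hour_per_drive)

for size_gb in (500, 1000, 2000):
    print(f"{size_gb:>5} GB, 1 drive : {backup_hours(size_gb):.1f} h")

# A full 12 TB pass through a 4-drive MSL6060-class library:
print(f"12000 GB, 4 drives: {backup_hours(12000, drives=4):.1f} h")

If the per-drive rate holds end to end, a single drive covers 1-2 TB in 5-10 hours, i.e. roughly overnight, while a full 12 TB pass is more like 15 hours even with all four drives running flat out - so treat the 12 TB figure as capacity rather than an overnight window.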
I agree with the idea of skipping tape backup altogether, but only if the data can be reconstructed in a worst-case scenario, or if the expected loss (the value of the lost data times the chance of losing it) is smaller than what the catastrophic-failure backup setup would cost (a rough version of that comparison is sketched below, after the quoted thread).

> -----Original Message-----
> From: Andrew Braithwaite [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, July 23, 2003 6:28 PM
> To: 'Joe Shear'
> Cc: [EMAIL PROTECTED]
> Subject: RE: large mysql/innodb databases
>
> >> Power problems are handled by our colo facility; we want to restore quickly from most hardware problems (disk/machine failures).
>
> Just have multiple inexpensive, fully replicated servers with failover built into the application layer (that's what we do) - individual machines can go down and the service still stays up. When those dead boxes recover, they can catch up from the replication logs and go back into service.
>
> >> On a periodic basis, we will take a snapshot using innodb hotbackup of the master machine that will go to a third box with a bunch of big raid-5 ide drives. We were planning on starting with NFS for the short term, since innodb hot backup doesn't go over the network, and figure something else out later.
>
> That's a good idea - my finding was that NFS was really slow, and the best solution was to back up from a fully replicated slave (after it had temporarily stopped replicating) by piping the raw data files through tar and gzip (appropriate for you as you're not concerned about cpu) to a big raid-5 ide backup server.
>
> >> One issue we have is that we are trying to plan out our setup for storing a total of about 25TB of data, and we are trying to find the lowest cost solution with decent reliability.
>
> And I'm trying to find the secret of eternal youth :)
>
> Cheers,
>
> Andrew
>
>
> -----Original Message-----
> From: Joe Shear [mailto:[EMAIL PROTECTED]
> Sent: Wednesday 23 July 2003 22:51
> To: Andrew Braithwaite
> Cc: [EMAIL PROTECTED]
> Subject: RE: large mysql/innodb databases
>
> We don't expect recovery to be shorter than the time it takes for the hardware to copy the data over. Restoring from tape should be a solution that is only needed in the case of a severe problem. Power problems are handled by our colo facility; we want to restore quickly from most hardware problems (disk/machine failures).
>
> We don't actually store any archive/aggregate information. Everything we store on the main databases is used on a relatively constant basis.
>
> What we are currently thinking about doing is having an identical master and slave, each with about 500 gigs (later these will be at about 1TB each). On a periodic basis, we will take a snapshot using innodb hotbackup of the master machine that will go to a third box with a bunch of big raid-5 ide drives. We were planning on starting with NFS for the short term, since innodb hot backup doesn't go over the network, and figure something else out later. This machine would then shut down the slave, copy over the new snapshot, and restart replication from the point at which innodb hotbackup started running. We would also take the snapshot from the IDE box and write it to tape at this point. Any thoughts on this? What are you doing?
>
> One issue we have is that we are trying to plan out our setup for storing a total of about 25TB of data, and we are trying to find the lowest cost solution with decent reliability.
>
> On Wed, 2003-07-23 at 14:33, Andrew Braithwaite wrote:
> > Hi,
> >
> > I'm afraid that that amount of data, with a few huge constantly updated tables, will result in huge restore times for disaster recovery (just untarring/copying backups on the order of terabytes back to the live environment will take hours and hours).
> >
> > You're talking "massive enterprise sized solutions" and "we're on a budget" in the same sentence (which are not compatible with each other) - I know, because we are the same here!
> >
> > A couple of things I can suggest:
> >
> > 1. Redesign your applications so that you archive/aggregate information that will never be used again.
> >
> > 2. Write a function that backs up the "often changed" stuff on a daily basis and the seldom-changed stuff on a weekly basis. (As you're on a budget, use a few inexpensive IDE RAID-5 Linux boxes - 6 x 250GB gives about 1.25 TB usable for backup.)
> >
> > 3. Put in place a replication system that is so resilient that however many machines go down, there will still be plenty of fully replicated servers to satisfy the demand. Make sure that you have UPS so that if the power fails you can get a clean shutdown. And ignore backups completely.
> >
> > Hope this helps,
> >
> > Andrew
> >
> >
> > -----Original Message-----
> > From: Joe Shear [mailto:[EMAIL PROTECTED]
> > Sent: Wednesday 23 July 2003 21:50
> > To: Andrew Braithwaite
> > Cc: [EMAIL PROTECTED]
> > Subject: RE: large mysql/innodb databases
> >
> > The data is constantly updated. There are 3 or 4 huge tables, and several smaller tables. We would love to have an incremental solution that is *guaranteed* to be correct, but we haven't found a way to do that, so what we've been thinking is that we'd do a complete snapshot once a week and incremental backups of one form or another every day. The replicated slave is allowed to stop replicating during backup. There is no absolute requirement on the time needed to restore. We'd like most disaster recovery to go fairly quickly, but we realize that on our budget a major disaster could cause us fairly significant downtime.
> >
> > On Wed, 2003-07-23 at 13:43, Andrew Braithwaite wrote:
> > > Hi,
> > >
> > > We have similar numbers here.
> > >
> > > A couple of questions:
> > >
> > > - Are they logfiles that could be rolled over on a daily basis, or are they constantly updated huge tables?
> > >
> > > - Is the type of backup you want incremental, or a daily/weekly snapshot?
> > >
> > > - Do you have a requirement for the speed of restore needed in the case of disaster recovery?
> > >
> > > - Is the replicated slave allowed to stop replicating whilst the backup is being performed?
> > >
> > > Let me know and I think I'll be able to help :)
> > >
> > > Cheers,
> > >
> > > Andrew
> > >
> > >
> > > -----Original Message-----
> > > From: Joe Shear [mailto:[EMAIL PROTECTED]
> > > Sent: Wednesday 23 July 2003 21:08
> > > To: [EMAIL PROTECTED]
> > > Subject: large mysql/innodb databases
> > >
> > > I was wondering if anyone had any experience with setting up large and fairly high-performance databases. We are looking at setting up databases with each machine having somewhere between 500 gigs and 2 terabytes, along with a slave box, and we'd like to back everything up to tape at a minimum of once a week, but if possible, daily.
> > > We're also looking at central storage solutions. However, we're hesitant because that would be a (very expensive) single point of failure. Of course, we could buy two, but they are fairly expensive. Has anyone had any experience with setups like this? What kind of backup solutions did you use? We aren't too concerned about CPU usage, as our databases tend to be I/O bound.
> > >
> > > --
> > > Joe Shear <[EMAIL PROTECTED]>
>
> --
> Joe Shear <[EMAIL PROTECTED]>
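On Andrew's point above about backing up from a temporarily stopped slave by piping the raw data files through tar and gzip: here is a minimal sketch of that kind of pipeline, written in Python around the standard shell tools. Everything concrete in it - the datadir path, the backup host, the ssh transport, the init script used to restart mysqld - is a placeholder made up for illustration, not a description of anyone's actual setup.

# Sketch only, not a drop-in script: copy a stopped slave's raw MySQL data
# files to a backup box by piping them through tar and gzip.
import subprocess

DATADIR = "/var/lib/mysql"               # assumed slave datadir
BACKUP_HOST = "backupbox"                # assumed RAID-5 IDE backup server
DEST = "/backups/slave-snapshot.tar.gz"  # assumed destination path

def sh(cmd):
    """Run a shell command and raise if it fails."""
    subprocess.run(cmd, shell=True, check=True)

# Raw InnoDB files are only consistent if mysqld is not running, so shut the
# slave down cleanly first; it will catch up from replication when restarted.
# (mysqladmin is assumed to find its credentials in ~/.my.cnf.)
sh("mysqladmin shutdown")
try:
    # tar the datadir, compress it, and stream it straight to the backup box.
    sh(f"tar cf - {DATADIR} | gzip -1 | ssh {BACKUP_HOST} 'cat > {DEST}'")
finally:
    # Placeholder restart; use whatever init script or supervisor runs mysqld.
    sh("/etc/init.d/mysql start")

Shutting mysqld down, rather than just stopping the SQL/IO threads, is the conservative choice here: raw InnoDB files copied from a running server aren't guaranteed to be consistent.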
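And to put a number on my point at the top about when tape is worth having at all: compare the expected yearly loss (the value of the data times the yearly chance of a loss that replication can't cover) against what the tape setup costs per year. The figures below are invented purely for illustration:

# Made-up numbers, purely to illustrate the trade-off; plug in your own.
data_value          = 500_000   # cost to the business if the data is gone ($)
p_unrecoverable     = 0.02      # yearly chance of a loss replication can't cover
tape_setup_per_year = 15_000    # drives + library + media + admin time ($/yr)

expected_loss = data_value * p_unrecoverable    # $10,000/yr with these numbers

if expected_loss < tape_setup_per_year:
    print(f"expected loss ${expected_loss:,.0f}/yr < tape cost "
          f"${tape_setup_per_year:,.0f}/yr -> skipping tape is defensible")
else:
    print(f"expected loss ${expected_loss:,.0f}/yr >= tape cost "
          f"${tape_setup_per_year:,.0f}/yr -> buy the tape setup")

If the expected loss comes out well below the yearly cost of drives, library, media and the admin time to feed them, skipping tape and leaning on replication is a rational call; if it comes out above, buy the tapes.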