Re: [CentOS] best ways to do mysql backup
Agnello George wrote:
> we have multiple servers (approx. 10) and each has about 100 GB of data in the /var/lib/mysql dir. Excluding tar, mysqldump and replication, how do we take backups of these databases to a remote machine and store them datewise? (The remote machine has a 2TB HDD.) Currently tar is not feasible as the data is too huge, and the same goes for mysqldump. Suggestions will be of great help.

Why not mysqldump? I suggest mysqldump to a local disk, then backing that up to the remote machine. I use it with Bacula.

-- Tuptus

___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
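A minimal sketch of the "dump locally, then back the dumps up" approach. The backup host name, user, and paths here are examples, not anything from the thread:

```shell
#!/bin/sh
# Dump each database into a datewise local directory, then ship the
# directory to the backup host. "backuphost" and the paths are examples.
DATE=$(date +%Y-%m-%d)
DEST=/var/backups/mysql/$DATE
mkdir -p "$DEST"

# Skip the schema-only system databases; dump the rest one file each.
for db in $(mysql -N -e 'SHOW DATABASES' \
            | grep -v -E '^(information_schema|performance_schema)$'); do
    mysqldump --single-transaction "$db" | gzip > "$DEST/$db.sql.gz"
done

# Copy the whole datewise tree to the remote machine.
rsync -a /var/backups/mysql/ backuphost:/backups/$(hostname)/
```

Per-database gzipped dumps also make it possible to restore a single database without touching the others.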
Re: [CentOS] best ways to do mysql backup
Why not mysqldump + binlog + rsync?

Tang Jianwei

On 08/15/2010 03:51 AM, Agnello George wrote:
> [original question snipped]
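A sketch of what "mysqldump + binlog + rsync" could look like: one full dump recording the binlog position, then shipping the (much smaller) binary logs incrementally. Host name and paths are assumptions for illustration:

```shell
#!/bin/sh
# Occasional full dump with the binlog position recorded (as a comment,
# via --master-data=2) so binlogs can be replayed from that point.
mysqldump --single-transaction --flush-logs --master-data=2 --all-databases \
    | gzip > /var/backups/full-$(date +%F).sql.gz

# Between full dumps: rotate the binary logs and ship the closed ones.
mysqladmin flush-logs
rsync -a /var/lib/mysql/mysql-bin.* backuphost:/backups/$(hostname)/binlogs/
```

Restoring is then the last full dump plus `mysqlbinlog` replay of the logs written after it.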
[CentOS] best ways to do mysql backup
We have multiple servers (approx. 10) and each has about 100 GB of data in the /var/lib/mysql dir. Excluding tar, mysqldump and replication, how do we take backups of these databases to a remote machine and store them datewise? (The remote machine has a 2TB HDD.) Currently tar is not feasible as the data is too huge, and the same goes for mysqldump. Suggestions will be of great help.

-- Regards, Agnello D'souza
Re: [CentOS] best ways to do mysql backup
On 08/14/2010 12:51 PM, Agnello George wrote:
> [original question snipped]

Assuming you installed using LVM partitions (and that you left space for snapshots ;) ): stop the database, take an LVM snapshot, restart the database, rsync the MySQL data directory to the other machine, then release the snapshot.

-- Benjamin Franz
Re: [CentOS] best ways to do mysql backup
On 08/14/2010 12:59 PM, Benjamin Franz wrote:
> Assuming you installed using LVM partitions (and that you left space for snapshots ;) ): stop the database, take an LVM snapshot, restart the database, rsync the MySQL data directory to the other machine, then release the snapshot.

Correction: rsync the *snapshot* of the MySQL data directory to the other machine.

-- Benjamin Franz
Re: [CentOS] best ways to do mysql backup
On 14/08/2010 21:51, Agnello George wrote:
> [original question snipped]

The problem I encountered with this kind of setup is that huge backups take a lot of time, which can mean (depending on the backend used) long downtime for the production instance. I generally don't find it feasible to do backups on production instances. Instead I suggest setting up a slave instance on another machine. The backup procedure would roughly be:

* stop the slave instance
* make an LVM snapshot
* start the slave instance again
* do whatever you want with the snapshot data

The advantages as I see them: backup peak I/O is moved away from the production instance (of course you still have the constant replication overhead; as a variation you could start the replication thread only periodically). The production instance does not have to be shut down, i.e. it is not directly affected by backups. And the slave instance could act as a failover.

Regards, Markus

___ CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Re: [CentOS] best ways to do mysql backup
On Sun, 15 Aug 2010, Agnello George wrote:
> [original question snipped]

Would there be some way of tee-ing off the SQL statements to a remote file in real time? In effect you would be creating a text-file dump of the databases in real time.

Kind Regards, Keith Roberts
Websites: http://www.php-debuggers.net http://www.karsites.net http://www.raised-from-the-dead.org.uk
All email addresses are challenge-response protected with TMDA [http://tmda.net]
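One observation (mine, not from the thread): MySQL's binary log is essentially that tee already, since every data-changing statement lands in it, and `mysqlbinlog` turns it back into replayable SQL text. The binlog file name below is an example:

```shell
# Decode a closed binary log back into plain SQL statements.
# mysql-bin.000042 is an example name; list real ones with
# `mysql -e 'SHOW BINARY LOGS'`.
mysqlbinlog /var/lib/mysql/mysql-bin.000042 > statements.sql
```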
Re: [CentOS] best ways to do mysql backup
Keith Roberts wrote:
> Would there be some way of tee-ing off the SQL statements to a remote file in real time? In effect you would be creating a text-file dump of the databases in real time.

For uninterrupted delivery of dynamic content from the database, i.e. no downtime, replication to a slave is the way to go. This is 'sort of' a tee-ing effect, except it is to another database. That slave database, however, can be stopped, a mysqldump done to a backup, and then restarted, at which point replication resumes and the slave database is updated to match the master database. It works really well without huge overhead increases. Google "MySQL replication" for lots of info about setting it up.

John Hinton
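A sketch of the slave-side dump, run on the slave. Pausing only the SQL thread (rather than shutting the slave down) is one way to get a consistent dump while the I/O thread keeps fetching events; paths are examples:

```shell
#!/bin/sh
# Pause replication apply on the slave, take a consistent dump, resume.
mysql -e 'STOP SLAVE SQL_THREAD;'
mysqldump --all-databases | gzip > /var/backups/slave-$(date +%F).sql.gz
mysql -e 'START SLAVE SQL_THREAD;'   # slave catches up with the master
```

The master never notices; only the slave's replication lag grows for the duration of the dump.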
Re: [CentOS] best ways to do mysql backup
On 08/14/2010 03:58 PM, John Hinton wrote:
> [slave/replication suggestion snipped]

I didn't include this since the OP specified _other_ than tar, replication and mysqldump. But the most efficient, lowest-downtime approach is to combine replication with the LVM snapshot and rsync. Since the OP said he has around 100 GB of data, rsyncing the snapshotted data directory will be substantially more efficient than using mysqldump for transfer to the remote system (assuming he doesn't churn most of the contents of the database between syncs).

-- Benjamin Franz
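For the "store them datewise on a 2 TB disk" part of the question, one sketch is rsync with --link-dest: unchanged files are hard-linked to the previous day's copy, so each dated directory looks like a full backup but only changed files consume space. Host and paths are examples:

```shell
#!/bin/sh
# Datewise backups on the backup host: hard-link against yesterday's
# tree so only changed files take new space. Paths are examples.
TODAY=$(date +%F)
YESTERDAY=$(date -d yesterday +%F)   # GNU date, as on CentOS
rsync -a --link-dest="/backups/db1/$YESTERDAY" \
    /mnt/mysql-snap/ "/backups/db1/$TODAY/"
```

Deleting an old dated directory is safe at any time; the hard links keep files alive as long as any day still references them.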