Johan De Meersman wrote:
----- Original Message -----
From: "Claudio Nanni" <claudio.na...@gmail.com>

I think this is the best option for you:
http://www.percona.com/docs/wiki/percona-xtrabackup:start

I must say, I still haven't looked very well at xtrabackup. How does it take 
consistent backups of MyISAM tables? I didn't think that was possible without 
shutting down the applications writing to them.
I am working with both MyISAM & Innodb tables.


Adarsh, a vital piece of information is the storage engine you're using. Are your tables InnoDB or MyISAM? Afaik _*(see my question above :-p )*_
I'm not getting your point (the part marked in bold & underline).

your approach is the only one that will allow you to take a consistent backup 
of MyISAM tables; for InnoDB tables xtrabackup should work fine.
I am not using xtrabackup, but I think the --single-transaction & -q options may solve this issue. I know this is valid only for InnoDB tables, but anyway I have both MyISAM & InnoDB tables, and only the InnoDB tables grow by the second; the MyISAM tables grow only over hours.

Can you please explain what happens when I issue the mysqldump command with the --single-transaction & -q options on InnoDB tables larger than 100 GB, while my application continuously inserts data into those tables?

The compressed backup takes 2 hours or more.
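For what it's worth, roughly what those two options do: with --single-transaction, mysqldump opens one REPEATABLE READ transaction with a consistent snapshot before dumping, so all InnoDB tables are dumped as of the moment the dump started; concurrent inserts are neither blocked nor included. -q (--quick) streams rows to the output one at a time instead of buffering each table in memory, which matters at 100 GB. Note this gives no consistency guarantee for the MyISAM tables in the same dump. A sketch of the command (database name, user and paths are placeholders, adjust for your setup):

```sh
# --single-transaction: InnoDB tables dumped from one consistent
#   snapshot taken at dump start; concurrent inserts continue but
#   are not included in the dump.
# --quick: stream rows instead of buffering whole tables in memory.
mysqldump --single-transaction --quick -u backup -p mydb \
  | gzip > /backups/mydb-$(date +%F).sql.gz
```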
Another option that might be of interest would be taking only one full backup 
per week or month using your current procedure, and taking daily backups of the 
binary logs between those. Still no 100% guarantee of consistency, but 
everything is in there without load on your database - except for the log 
writing overhead of course - and you can do point-in-time restores up to the 
individual statement if you feel like it. Zmanda ZRM Server is one solution 
that provides that level of backup.

Please note that I don't have my bin-log enabled.

I can enable it if required.
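If you do enable it, it's a couple of lines in my.cnf plus a server restart (the path is a placeholder; expire_logs_days is optional):

```ini
[mysqld]
log-bin          = /var/lib/mysql/mysql-bin
expire_logs_days = 14   # optional: auto-purge binlogs older than 14 days
```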

Thanks
Come to think of it, you could use your current procedure for backing up the 
binlogs consistently, too:
 1. shut application
 2. issue "flush logs" to switch to a new binlog
 3. restart application
 4. backup all but the active binlog at your leisure for a consistent backup at 
that point in time
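Step 4 ("all but the active binlog") is the only fiddly part of the list above; the selection can be demonstrated with dummy files, no server needed (directory names are made up for the demo):

```shell
# Demo of step 4 with dummy files standing in for real binlogs.
workdir=/tmp/binlog-demo
rm -rf "$workdir"
mkdir -p "$workdir/data" "$workdir/backup"
touch "$workdir/data/mysql-bin.000001" \
      "$workdir/data/mysql-bin.000002" \
      "$workdir/data/mysql-bin.000003"   # .000003 = the active binlog

# sort the binlogs, drop the last (active) one, copy the rest
ls "$workdir/data"/mysql-bin.* | sort | sed '$d' | \
  while read -r f; do cp "$f" "$workdir/backup/"; done

ls "$workdir/backup"   # mysql-bin.000001 and .000002 only
```

On a real server you would point the first `ls` at your datadir after the FLUSH LOGS of step 2.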

That would enable you to do a quick daily backup with minimal application 
downtime, and the added benefit of point-in-time restores. The downside of that 
approach is increased restore time: you need to first restore the latest full 
backup, and then incrementally apply each of the binlog backups to the point 
you need to restore to.
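The restore side sketched as commands (file names and the timestamp are placeholders; --stop-datetime is what gives you the per-statement point-in-time cutoff):

```sh
# 1. restore the latest full dump
gunzip < /backups/mydb-2011-01-01.sql.gz | mysql mydb

# 2. replay the binlog backups taken since, stopping at the
#    desired point in time
mysqlbinlog --stop-datetime="2011-01-05 09:59:59" \
  /backups/binlogs/mysql-bin.000001 \
  /backups/binlogs/mysql-bin.000002 | mysql
```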


Still, I am not able to settle on a final answer to the original question.

Thanks
