Based on my research, I found that backing up MySQL is quite inefficient.

On a small database, you can just dump the contents of MySQL into a file
and then back up that file.
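
For what it's worth, this is the sort of thing I mean (the database
name, user, and paths below are placeholders for your own setup):

    # Dump a small database to a file, then let the backup software
    # (Bacula, in my case) pick up the file.
    mysqldump -u backupuser -p mydb > /var/backups/mydb.sql

    # Optionally compress it first:
    gzip -f /var/backups/mydb.sql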

On a large database (several gigabytes), dumping to a file should be
avoided. For example, on a hosting service the dump could fill your
allotted space. It is also time-consuming, and it allows for database
inconsistency, where a table that has already been dumped changes while
another table is still locked and being dumped. This can be avoided by
dumping the database under a server-wide lock, but that means your web
sites or applications will be offline for the duration of the backup
(and since we are dumping gigabytes of data, that can take a very long
time).
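
For reference, the server-wide lock I am describing looks roughly like
this; --lock-all-tables holds a global read lock for the whole dump,
which is exactly what takes the sites offline. (If all your tables are
InnoDB, my understanding is that --single-transaction gives a
consistent dump without the global lock, but I haven't verified that
myself.)

    # Consistent full dump under a global read lock -- writes block
    # until the dump finishes:
    mysqldump -u backupuser -p --lock-all-tables --all-databases \
        > /var/backups/full.sql

    # InnoDB-only alternative (no global lock, as I understand it):
    mysqldump -u backupuser -p --single-transaction --all-databases \
        > /var/backups/full.sql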

Running a replicated database is not ideal on hosted servers either: it
roughly doubles your hosting space, and may require a second hosting
service altogether (colocation, etc.).

Based on my research, it seems the 'best' solution for really big
databases is to use the binary log (MySQL 4.1.3 or newer) and do
incremental backups: you take a full backup once, and from then on you
only back up the binary log, based on date ranges or snapshot "points".
Of course, this process is badly documented, and I couldn't find any
scripts that can help me do it.
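
In case it helps the discussion, here is a sketch of what I imagine
such a script would look like. All paths, credentials, and the binary
log file naming are assumptions about the setup; it also assumes
log-bin is enabled in my.cnf:

    #!/bin/sh
    # Sketch: incremental MySQL backup via the binary log.
    BINLOG_DIR=/var/lib/mysql        # where mysqld writes mysql-bin.NNNNNN
    BACKUP_DIR=/var/backups/mysql-binlogs

    # One-time full backup: --flush-logs rotates the binary log so the
    # dump marks a clean starting point; --master-data=2 records the
    # log name/position in the dump as a comment.
    mysqldump -u backupuser -p --single-transaction --flush-logs \
        --master-data=2 --all-databases > "$BACKUP_DIR/full.sql"

    # Each incremental run: rotate the log, then copy every closed log
    # the backup area does not have yet (the newest file is the log
    # currently being written, so skip it).
    mysqladmin -u backupuser -p flush-logs
    latest=`ls "$BINLOG_DIR"/mysql-bin.[0-9]* | tail -1`
    for log in "$BINLOG_DIR"/mysql-bin.[0-9]*; do
        [ "$log" = "$latest" ] && continue
        [ -f "$BACKUP_DIR/`basename $log`" ] || cp "$log" "$BACKUP_DIR/"
    done

Restoring would then be the full dump plus a replay of the saved logs,
and mysqlbinlog's --start-datetime/--stop-datetime options should give
the date-range behaviour I mentioned, e.g.:

    mysqlbinlog --stop-datetime="2007-09-01 00:00:00" \
        "$BACKUP_DIR"/mysql-bin.[0-9]* | mysql -u root -p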

I'd appreciate your thoughts on this.

Thank you.
