On Fri, Sep 19, 2003 at 01:42:10PM +0800, Sacha Chua wrote:
> Michael Chaney <[EMAIL PROTECTED]> writes:
>
> >> yeah ive tried doing scheduled mysqldump and scp but
> >> dumped files are very big to transfer, so i think that
> >> doing a replication will only copy or update latest query
>
> > Here's how I do replication. At a set time interval, the db table is
> > dumped, sorted by all keys. Then I simply run a diff, and if it's the
> > ...
>
> I had a sneaky feeling that I've run across docs for this before,
> and Googling for mysql replication turns up
> http://www.mysql.com/doc/en/Replication.html .
>
> This basically allows you to keep slaves sync'd with a master by
> keeping a binary transaction log. Slaves can be distributed over the
> Internet. Might be fun.
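For anyone curious, the setup that manual page describes boils down to enabling the binary log on the master and pointing each slave at it. A rough sketch in MySQL 4.0-era syntax (host, user, and password below are placeholders, not anything from this thread):

```
# master's my.cnf
[mysqld]
log-bin        # keep the binary transaction log the docs mention
server-id=1

# slave's my.cnf
[mysqld]
server-id=2    # every server in the topology needs a unique id
```

```sql
-- run once on the slave to point it at the master, then start replicating
CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret';
START SLAVE;
```

The slave then pulls and replays the master's binary log over the network, which is why an outage mid-stream is the thing to worry about.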
I have trouble trusting the MySQL developers to pull that one off
without screwing it up. Added to that is the fact that they don't do
real logging, so it seems like network outages could cause problems.

I create a minimal set of diffs and make sure the diff generation is
completely separate from the network transport. All in all, it should
be a more stable system.

Michael

--
Michael Darrin Chaney
[EMAIL PROTECTED]
http://www.michaelchaney.com/

--
Philippine Linux Users' Group (PLUG) Mailing List
[EMAIL PROTECTED] (#PLUG @ irc.free.net.ph)
Official Website: http://plug.linux.org.ph
Searchable Archives: http://marc.free.net.ph
To leave, go to http://lists.q-linux.com/mailman/listinfo/plug
Are you a Linux newbie? To join the newbie list, go to
http://lists.q-linux.com/mailman/listinfo/ph-linux-newbie
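P.S. The dump/sort/diff idea is easy to try with plain files standing in
for mysqldump output (the table name and rows below are made up for
illustration; this is a sketch, not my actual script):

```shell
#!/bin/sh
# Simulate the dump-sort-diff replication scheme with two fake "dumps".
set -e
work=$(mktemp -d)

# Yesterday's dump, already sorted by key.
printf '1\talice\n2\tbob\n' > "$work/users.old"

# Today's dump: one new row. Sorting keeps the diff minimal and stable.
printf '3\tcarol\n1\talice\n2\tbob\n' | sort > "$work/users.new"

# The delta is all you ship over the network (e.g. with scp),
# instead of the whole dump.  diff exits 1 when files differ.
diff -u "$work/users.old" "$work/users.new" > "$work/users.delta" || true

# On the far side, patch rebuilds the new dump from the old one.
patch -s "$work/users.old" "$work/users.delta"

if cmp -s "$work/users.old" "$work/users.new"; then
    sync_result="in sync"
else
    sync_result="stale"
fi
echo "$sync_result"

rm -rf "$work"
```

Since the delta is generated locally and shipped separately, a dropped
connection just means you resend a small file, rather than replaying a
half-transferred transaction log.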
