Andrew Witt wrote:
>
> If you can withstand the downtime, you should also be able to shut
> down the database on both ends, directly FTP the database files
> themselves, and start both databases back up.  Again, you could use
> either strict timing or the transfer of sentinel files to control
> the shutdown, copy, and startup.
>
> Using mysqldump is probably better, though, since copying over a
> backup file means that the databases don't have to be down during
> the time that the FTP takes place.

On second thought, you could also:

  shutdown the local database
  copy the database files to a temporary location on the local host
  startup the local database
  FTP those temporary files to a temporary location on the remote host
  shutdown the remote database
  copy the remote temporary files overtop of the remote database files
  startup the remote database

That minimizes the downtime by FTP'ing copies of the database files
rather than the live files, but you still need at least momentary
downtime on each host.
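The steps above could be sketched roughly like this.  It is shown in
dry-run mode (each step is printed, not executed), and the data
directory, staging path, and remote hostname are placeholders you
would substitute for your own; scp/ssh stand in for plain FTP:

```shell
#!/bin/sh
# Dry-run sketch of the copy-then-swap procedure.  Paths, the remote
# hostname, and the use of scp/ssh instead of raw FTP are assumptions.
set -e

DATADIR=/var/lib/mysql            # assumed local MySQL data directory
STAGE=/var/tmp/mysql-snapshot     # local staging area for the copy
REMOTE=db2.example.com            # hypothetical remote host

# Print each step instead of executing it; remove the echo to run for real.
run() { echo "would run: $*"; }

run mysqladmin shutdown                       # 1. stop the local database
run cp -Rp "$DATADIR" "$STAGE"                # 2. copy files aside locally
run mysqld_safe                               # 3. start local database again
run scp -r "$STAGE" "$REMOTE:/var/tmp/"       # 4. transfer the snapshot
run ssh "$REMOTE" "mysqladmin shutdown"       # 5. stop the remote database
run ssh "$REMOTE" "cp -Rp /var/tmp/mysql-snapshot/* /var/lib/mysql/"  # 6. swap in
run ssh "$REMOTE" "mysqld_safe"               # 7. start remote database
```

Note that the local database is only down for the local copy (step 2),
and the remote one only for the file swap (step 6); the slow FTP
transfer happens while both are up.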

Whether this approach or the mysqldump approach is better for your
situation depends on other factors specific to your data and systems.
For example, the mysqldump approach I outlined leaves the remote
database only partially up-to-date while mysqldump's output is being
re-loaded.  If it is critical to your website that the tables stay
consistent with one another (e.g., that references from one table
into another always resolve), then the momentary downtime may be
preferable to the non-atomic update the mysqldump reload performs.
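For comparison, the mysqldump path looks something like the following,
again in dry-run form with a placeholder database name ("mydb") and
remote host; the reload at the end is the non-atomic step:

```shell
#!/bin/sh
# Dry-run sketch of the mysqldump alternative.  "mydb" and
# "db2.example.com" are placeholders, not real names.
set -e

run() { echo "would run: $*"; }   # print steps instead of executing them

# Dump while the local database stays up (--opt locks tables per-dump)
run sh -c 'mysqldump --opt mydb > /tmp/mydb.sql'
run scp /tmp/mydb.sql db2.example.com:/tmp/
# Reload on the remote side: tables are rebuilt one at a time, so the
# remote database is only partially up-to-date until this finishes.
run ssh db2.example.com 'mysql mydb < /tmp/mydb.sql'
```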

HTH.

--
Andrew Witt
Sr. Software Systems Engineer
Revol  -  www.revol.us  -  Freedom is calling
