There are 3 things that are exclusively MyISAM:
- Full-text indexes
- An AUTO_INCREMENT column as the last column of a multi-column primary key
- MERGE tables
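For reference, here is a minimal DDL sketch of each of the three features (table and column names are made up for the example; at the time of this thread none of these work with ENGINE=InnoDB):

```sql
-- 1. Full-text index
CREATE TABLE articles (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    body TEXT,
    FULLTEXT (body)
) ENGINE = MyISAM;

-- 2. AUTO_INCREMENT as the last column of a multi-column primary key:
--    the counter restarts for each distinct value of the first column.
CREATE TABLE line_items (
    order_id INT NOT NULL,
    seq INT NOT NULL AUTO_INCREMENT,
    PRIMARY KEY (order_id, seq)
) ENGINE = MyISAM;

-- 3. A MERGE table presenting several identical MyISAM tables as one
CREATE TABLE log_2005_q1 (msg VARCHAR(255)) ENGINE = MyISAM;
CREATE TABLE log_2005_q2 (msg VARCHAR(255)) ENGINE = MyISAM;
CREATE TABLE log_all (msg VARCHAR(255))
    ENGINE = MERGE UNION = (log_2005_q1, log_2005_q2);
```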
Tables which don't require these features can be InnoDB tables. We have a few tables that use these; otherwise we are exclusively InnoDB.

There is another way to do backups. It's what we use:
- Capture the file names in your database: extract the .frm file names and build "select into outfile" and corresponding "load data infile" statements for each file name (i.e. table). Sample:
      select * into outfile '/usr/data/mailprint/day/user.txt' from mysql.user;
      load data infile 'user.txt' ignore into table user;
- Put the select statements in a file, with FLUSH LOGS before and after the set of select statements.
- Run the file through cron as: mysql -h ....... < filename

Now we have a text file for each table in a directory, which we can zip/tar, move to a different machine, and selectively restore the tables via the load data commands; restore a single table in a "test" database; recover/rebuild a specific table in the live database; etc.

Our 4GB database takes < 5 minutes to save every night. A full restore takes < 30 minutes, including moving the data files to the right place. We keep 1 month of the nightly copies, plus the day-1 copy of each month for a year.

We have development, stage and live servers, and started doing this to give us better granularity for selectively synching tables or parts of tables ("I need these 500 rows from this table to move the dev project to staging"). It has also been invaluable, such as the time a developer was on the "wrong" server and inadvertently corrupted an entire column of the user table. We did not want to take down the site and do a restore/roll-forward to right before the "stupid" command was executed; we just needed to fix the data in this one column of one table. Sorry, I started to ramble.

Oh, I almost forgot: we also periodically dump the structure with mysqldump --no-data. Our table definitions are relatively stable, so we don't do it every night. You could put it in the cron job to do it with the backup.
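The statement-generation step above can be sketched as a small shell script: every table has a .frm file, so listing *.frm in the data directory yields one "SELECT ... INTO OUTFILE" (backup) and one "LOAD DATA INFILE" (restore) statement per table. All paths and the database name ("mailprint") here are illustrative assumptions, not the poster's actual setup.

```shell
#!/bin/sh
# Sketch: build backup.sql (wrapped in FLUSH LOGS) and restore.sql from
# the .frm files in a data directory. Paths and db name are assumptions.

gen_statements() {
    datadir=$1    # directory holding the .frm files
    outdir=$2     # where the nightly .txt dumps land
    db=$3         # database name used to qualify each table

    echo "FLUSH LOGS;" > backup.sql
    : > restore.sql
    for frm in "$datadir"/*.frm; do
        t=$(basename "$frm" .frm)
        echo "SELECT * INTO OUTFILE '$outdir/$t.txt' FROM $db.$t;" >> backup.sql
        echo "LOAD DATA INFILE '$outdir/$t.txt' IGNORE INTO TABLE $db.$t;" >> restore.sql
    done
    echo "FLUSH LOGS;" >> backup.sql
}

# Demo with throwaway .frm files standing in for a real data directory:
demo=$(mktemp -d)
touch "$demo/user.frm" "$demo/orders.frm"
gen_statements "$demo" /usr/data/mailprint/day mailprint
cat backup.sql
rm -rf "$demo"
```

The generated backup.sql can then be fed to the server nightly from cron, e.g. `mysql -h dbhost < backup.sql` (host name assumed).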
-----Original Message-----
From: Scott Plumlee [mailto:[EMAIL PROTECTED]
Sent: Friday, June 17, 2005 12:36 PM
To: mysql@lists.mysql.com
Subject: Re: Backup database with MyISAM and InnoDB tables together

> -----Original Message-----
> From: Scott Plumlee [mailto:[EMAIL PROTECTED]
> Sent: Friday, June 17, 2005 10:21 AM
> To: mysql@lists.mysql.com
> Subject: Backup database with MyISAM and InnoDB tables together
>
> I'm not clear on the best practice to use on a database containing both
> MyISAM and InnoDB tables. For the MyISAM tables, it seems better to use
> mysqldump --opt, thus getting the --lock-tables option, but for the
> InnoDB tables --single-transaction is preferred. Since they are mutually
> exclusive, is there a best practice to get a consistent state of the
> tables when the database dump is performed?
>
> Would `mysqldump --opt --skip-lock-tables --single-transaction` be best
> for a database that is mostly InnoDB tables, but does have a few MyISAM
> tables?
>
> Would I be better off locking the database from any updates/inserts, and
> specifying particular commands for individual tables?
>
> Any advice appreciated, including RTFMs with links.

> Gordon Bruce wrote:
> If you are running the binary log and do a
>
> FLUSH LOGS
> mysqldump --opt --skip-lock-tables MyISAM table names
> FLUSH LOGS
> mysqldump --opt --single-transaction INNODB table names
>
> You have a recoverable state with the combination of the mysqldump files
> and the binary log file that was started by the 1st FLUSH LOGS command.
> The recovered database will be restored to the date/time that the 2nd
> FLUSH LOGS command was issued instead of the start time of the backup,
> but you won't have to lock all of your tables and it will give you a
> consistent state across a mixed InnoDB/MyISAM environment.
> The downside is
> - you have 3 files to deal with
> - you have to maintain the table names in the mysqldump commands
> - you have a small risk of a change occurring in the MyISAM
>   tables between the time the 2nd FLUSH LOGS is executed and the 2nd
>   mysqldump command is executed

Thanks for the tip. I haven't looked into binary logs too much; I just learned about them the other day when I had to correct my own mistake and restore a table.

Is it best practice to go with tables of all one sort to allow for a consistent state when doing backups like this, or are mixed tables the norm in most databases? I went with InnoDB in order to get row-level locking (rather than locking whole tables) as transactions were performed, and I've been very pleased with the results. I might consider just going with all InnoDB to make it easy, as those are the majority of my tables in this case.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]