At 11:44 AM 7/23/2007, Sid Lane wrote:
all,

I need to migrate ~12GB of data from an Oracle 10 database to a MySQL
5.0 one in as short a window as practically possible (throw tablespace
in r/o, migrate data & repoint web servers - every minute counts).

the two approaches I am considering are:

1.  write a program that outputs the Oracle data to a fifo pipe (mknod) and
run a "load data infile" against it

2.  write a program that dynamically builds extended insert statements up to
the length of max_allowed_packet (similar to mysqldump -e)
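For what it's worth, approach #1 boils down to: the Oracle-reading program writes tab-separated, newline-terminated rows (LOAD DATA's default format) into a named pipe, and the MySQL server reads the pipe as if it were a file. A minimal Python sketch of the mechanics, with a stand-in reader taking the place of "LOAD DATA INFILE" (no server here; path and rows are made up):

```python
import os
import tempfile
import threading

def write_rows_to_fifo(fifo_path, rows):
    """Writer side: stream rows into the fifo in LOAD DATA's default
    format (tab-separated columns, newline-terminated rows). In the
    real migration this is the program pulling rows out of Oracle."""
    with open(fifo_path, "w") as fifo:
        for row in rows:
            fifo.write("\t".join(str(col) for col in row) + "\n")

# Demo: the main thread plays the role the MySQL server would play with
#   LOAD DATA INFILE '/tmp/ora2my.fifo' INTO TABLE t;
# -- the server just sees a "file" it can read to EOF.
fifo_path = os.path.join(tempfile.mkdtemp(), "ora2my.fifo")
os.mkfifo(fifo_path)  # same effect as mknod with type 'p'

rows = [(1, "alpha"), (2, "beta")]
writer = threading.Thread(target=write_rows_to_fifo, args=(fifo_path, rows))
writer.start()
with open(fifo_path) as reader:  # open blocks until the writer connects
    loaded = reader.read()
writer.join()
```

The nice property is that neither side ever materializes the 12GB on disk; the pipe applies back-pressure so the extract and the load overlap.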
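Approach #2 is mostly greedy packing. A sketch of the batching logic, assuming the values are already SQL-escaped strings (the table name, rows, and packet size below are made up; a real version would send each finished statement to the server):

```python
def build_extended_inserts(table, rows, max_allowed_packet=1_000_000):
    """Greedily pack row tuples into multi-row INSERT statements, keeping
    each statement under max_allowed_packet bytes -- the same idea as
    mysqldump's --extended-insert (-e) output.

    rows: iterable of tuples of already-escaped SQL value strings."""
    statements = []
    prefix = "INSERT INTO %s VALUES " % table
    current = None
    for row in rows:
        values = "(" + ",".join(row) + ")"
        if current is None:
            current = prefix + values
        elif len(current) + 1 + len(values) + 1 <= max_allowed_packet:
            # room for ",<values>" plus the trailing ";"
            current += "," + values
        else:
            statements.append(current + ";")
            current = prefix + values
    if current is not None:
        statements.append(current + ";")
    return statements

# Tiny demo with a deliberately small packet limit:
rows = [(str(i), "'x'") for i in range(10)]
stmts = build_extended_inserts("t", rows, max_allowed_packet=60)
```

The win over one-row-at-a-time INSERTs is fewer round trips and less per-statement parsing; the packet check above is what keeps the server from rejecting an over-long statement.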

is either one significantly faster than the other?  I know I could benchmark
it but I was hoping someone could save me from writing #2 to find out if it's
not the way to go...

are there additional (faster) approaches I have not thought of?

FWIW these are 95% innodb (5% myisam are static reference tables & can be
done in advance).

thanks!

LOAD DATA will of course be much faster. However, to get the maximum speed you need to load the data into an empty table: MySQL then loads the rows without updating the indexes for every row added, and instead rebuilds the indexes once after all the data has been loaded, which of course is much faster. The other alternative is to drop all indexes before running LOAD DATA and rebuild them manually once the data is in. (Unique indexes may pose a problem, because the load throws an error when it hits a duplicate key value; but if you already verified uniqueness in Oracle, that won't bite you.)
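The drop-then-rebuild sequence described above can be sketched as an ordered plan of statements. A hedged Python illustration (table, file, and index names are all hypothetical; in a real run each statement would be fed to the mysql client in order):

```python
def bulk_load_plan(table, infile, secondary_indexes):
    """Return the SQL statements, in order, for an index-free bulk load.

    secondary_indexes maps index name -> list of columns (made-up schema);
    the primary key stays in place, only secondary indexes are dropped."""
    plan = []
    # 1. drop secondary indexes so LOAD DATA never maintains them row by row
    for name in secondary_indexes:
        plan.append("ALTER TABLE %s DROP INDEX %s;" % (table, name))
    # 2. stream the data in (infile could be the fifo from approach #1)
    plan.append("LOAD DATA INFILE '%s' INTO TABLE %s;" % (infile, table))
    # 3. rebuild each index once, over the full data set
    for name, cols in secondary_indexes.items():
        plan.append("ALTER TABLE %s ADD INDEX %s (%s);"
                    % (table, name, ",".join(cols)))
    return plan

plan = bulk_load_plan("t", "/tmp/ora2my.fifo", {"idx_name": ["name"]})
```

Rebuilding once at the end does the same total work as a single sorted build instead of millions of incremental B-tree insertions, which is where the speedup comes from.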

The big drawback with LOAD DATA is of course that no exception log is kept. You only know the number of rows added. With INSERT you at least have a chance to log the errors yourself.

Mike
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
