Hello,
We want to be able to insert records into a table containing a billion records in a timely fashion.
The table has one primary key, which I understand is implemented as a B-tree, so each insertion costs O(log N).
The key field is an auto_increment field.
The table is never joined to other tables.
Is there any way we could implement the index ourselves, by modifying the MyISAM table handler perhaps? Or writing our own?
In our setup, record n is always the nth record inserted into the table, so it would be nice to simply seek to offset n * recordsize to reach the record.
Also, could someone shed some light on how B-tree indexes work? Do they behave well when the inserted values are sequential (1, 2, 3, ...) rather than random?
Thanks in advance, -Phil
Phil,
The fastest way to load data into a table is LOAD DATA INFILE. If the table is empty when the command is executed, the index is not updated until after the command completes. Otherwise, if you are loading a lot of data, you may want to drop the index first and rebuild it afterwards. Unfortunately, ALTER TABLE table_name DISABLE KEYS does not affect unique indexes (including the primary key).
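A minimal sketch of that drop-load-rebuild sequence (the table name, column, and file path are made up for illustration; adjust to your schema):

```sql
-- Hypothetical table matching the setup described: one auto_increment primary key.
CREATE TABLE big_table (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    payload CHAR(100) NOT NULL,
    PRIMARY KEY (id)
) ENGINE=MyISAM;

-- For a bulk load into a non-empty table, drop the primary key first,
-- since DISABLE KEYS only turns off non-unique indexes.
ALTER TABLE big_table DROP PRIMARY KEY;

-- Bulk-load the data; with no indexes present, rows are appended directly.
LOAD DATA INFILE '/tmp/data.txt' INTO TABLE big_table;

-- Rebuild the index in one pass at the end.
ALTER TABLE big_table ADD PRIMARY KEY (id);
```

Rebuilding the key once at the end is a single sort over the table rather than a billion individual B-tree insertions, which is where the time savings come from.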
Mike
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]