Thanks everyone for your feedback. I ended up doing a presort on the data and then adding the rows in order. At first I was a little concerned about how I was going to implement an external sort on a data set that huge, but then realized that the unix "sort" command can handle large files, and in fact does so pretty efficiently.
So, I ran:

    sort -u -S 1800M fenout.txt > fenoutsort.txt

The sort took about 45 minutes, which is acceptable for me (it took much longer without the -S option telling it to use more memory), and loading the table was then very efficient: inserting all the rows into my table in sorted order took only 18 minutes. So, all in all, I can now load the table in just about an hour, which is great news for me.

Thanks!

Chris
--
View this message in context: http://www.nabble.com/Index-creation-on-huge-table-will-never-finish.-tf3444218.html#a9618709
Sent from the SQLite mailing list archive at Nabble.com.