An interesting approach would be to use some sort of async I/O facility to implement read-ahead.
Short of that, I have found that in some cases, on some operating systems, implementing explicit read-ahead buffering for fts2 segment merges improves performance when the disk caches are cold. Linux kernel 2.6 seems to get little or no benefit; 2.4 gains noticeably more. This is somewhat of a special case, though: fts2 segment merges read streams from different locations in the file and combine them, much like an external sort.

-scott

On 3/26/07, Joe Wilson <[EMAIL PROTECTED]> wrote:
--- Joe Wilson <[EMAIL PROTECTED]> wrote:
> > improved dramatically. So I attempted the creation of the index off hours on
> > the production system, and after 4 hours no index. I can't detect any
> > activity at all. The journal file and the .db file just sit at the same size
> > for 4 hours. Why is this failing? It seems like it is just sitting there
> > doing nothing. When I created the test index, I noticed the journal file
> > changing and the .db file changing during the 2.5 hours to create. On the
> > production .db file, nothing is happening. I have all associated processes
> > killed that interact with the db file, so I know it is not locked.
>
> I assume that the copied "test" database was indexed immediately after its
> creation. If this was the case then the entire file may have been in the OS
> cache, resulting in very quick indexing. Try running "wc prod.db" or
> "cat prod.db >/dev/null" and then creating the indexes on prod.db to see
> what happens.

The original poster confirmed that cat'ting the file to /dev/null reduced index creation time to 2.5 hours on the original database file.

Could some optional heuristic be incorporated into SQLite's pager to do something similar for such large transactions and/or queries?
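The "cat prod.db >/dev/null" trick amounts to one cheap sequential pass over the file, so the index build's subsequent random reads hit the page cache instead of the disk. The same warm-up could be done from application code before opening the database; a minimal sketch, with the helper name being an illustration rather than any SQLite API:

```c
/* Sketch of the "cat prod.db >/dev/null" cache-warming trick in C:
 * read the whole file sequentially and discard the bytes, so the OS
 * page cache is populated before a large transaction starts. */
#include <assert.h>
#include <stdio.h>

/* Read the entire file at path sequentially.
 * Returns total bytes read, or -1 if the file cannot be opened. */
static long warm_cache(const char *path) {
  FILE *f = fopen(path, "rb");
  if (!f) return -1;
  char buf[1 << 16];   /* 64 KiB chunks keep the pass sequential */
  long total = 0;
  size_t n;
  while ((n = fread(buf, 1, sizeof buf, f)) > 0) total += (long)n;
  fclose(f);
  return total;
}
```

A pager-level heuristic, as suggested above, would presumably trigger something like this only when it predicts a scan large enough to justify the one-time sequential read.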
-----------------------------------------------------------------------------
To unsubscribe, send email to [EMAIL PROTECTED]
-----------------------------------------------------------------------------