I'm inserting a bunch of data loaded off the network into a table. Here at the office, SQLite keeps up pretty well; at home on the cable modem, it's a huge bottleneck. Loading now takes about 10x as long as it did when we were just storing everything in memory.

Yes, I'm wrapping the entire load in a single BEGIN/END transaction. I've removed the indexes and set PRAGMA default_synchronous = OFF on the database, but neither had much effect. I'm also using a pre-compiled query for the insert. I can't use a temporary table, either, because I need to share the data between the network and UI threads.
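
In case it helps, here's roughly what the loading code boils down to. This is a trimmed-down sketch, not the real code: the items(key, value) table, get_next_row(), and bulk_load() are stand-ins, and it assumes a 2.8.x build since it leans on sqlite_bind()/sqlite_reset() from the VM interface (check the calls against your version).

#include <stdio.h>
#include "sqlite.h"

/* Stand-in for the network thread handing us rows; returns 0 when done. */
extern int get_next_row(const char **pzKey, const char **pzValue);

int bulk_load(sqlite *db){
  sqlite_vm *pVm = 0;
  const char *zTail;
  char *zErr = 0;
  const char *zKey, *zVal;
  const char **azVal, **azCol;
  int nCol, rc;

  /* Already in effect on the real database; shown for completeness. */
  sqlite_exec(db, "PRAGMA default_synchronous = OFF;", 0, 0, 0);

  sqlite_exec(db, "BEGIN;", 0, 0, 0);

  /* Compile the INSERT once and reuse it for every row. */
  rc = sqlite_compile(db, "INSERT INTO items(key, value) VALUES(?, ?);",
                      &zTail, &pVm, &zErr);
  if( rc!=SQLITE_OK ){
    fprintf(stderr, "compile failed: %s\n", zErr ? zErr : "unknown");
    if( zErr ) sqlite_freemem(zErr);
    sqlite_exec(db, "ROLLBACK;", 0, 0, 0);
    return rc;
  }

  while( get_next_row(&zKey, &zVal) ){
    sqlite_bind(pVm, 1, zKey, -1, 1);        /* negative len: let SQLite use strlen() */
    sqlite_bind(pVm, 2, zVal, -1, 1);
    sqlite_step(pVm, &nCol, &azVal, &azCol); /* expect SQLITE_DONE for an INSERT */
    sqlite_reset(pVm, 0);                    /* rewind the VM, keep the compiled program */
  }

  sqlite_finalize(pVm, 0);
  sqlite_exec(db, "COMMIT;", 0, 0, 0);
  return SQLITE_OK;
}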

The SQLite optimization FAQ at:

http://web.utk.edu/~jplyon/sqlite/SQLite_optimization_FAQ.html

mentions turning off journaling as a last resort. I tried this by forcing 'omitJournal = 1;' at the beginning of sqliteBtreeFactory(), but it causes problems further down the line (a failed assertion on pPager->journalOpen in sqlitepager_commit()). Is there another way to do this? I'd like to find out whether we can get SQLite's overhead low enough to be usable, or whether I have to start over.
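
For reference, the hack is literally nothing more than this, dropped in at the top of the function body (shown out of context; the argument list and the rest of sqliteBtreeFactory() are untouched):

int sqliteBtreeFactory( /* ...original argument list unchanged... */ ){
  omitJournal = 1;  /* force the rollback journal off for every file the
                    ** factory opens; this is what later trips the
                    ** pPager->journalOpen assertion in sqlitepager_commit() */
  /* ...rest of the original function body unchanged... */
}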

This isn't a high-reliability context: if the database gets corrupted, we can just toss it and reload from the network.

Any thoughts?

-D

