On Tue, 4 Mar 2014 23:13:51 +1100, Nik Sands wrote:
> The conversion process is a little slow for large data sets
Hi Nik,

SQLite is a transactional DBMS, so your approach to transaction handling has a very strong impact on write performance.

Just as a quick and dirty rule of thumb: performing a single COMMIT every 64K inserted rows is surely better than performing 64 distinct COMMITs, one every 1K inserted rows.

An even more aggressive approach: completely disable SQLite's journal when you are planning to import a very large dataset. You simply have to execute the following PRAGMA directive before starting your import process:

PRAGMA journal_mode=OFF;

Be very careful: completely disabling the journal has the very unpleasant consequence that if some SQL error unhappily occurs during the import, your target DB-file will almost surely be irremediably corrupted. But this usually isn't a big issue, as long as you can safely restart the import process from scratch by creating a brand new empty DB-file.

A further possible optimization: SQLite's default cache could be too small to adequately support a huge number of INSERTs while updating the Spatial Index at the same time. In this case, explicitly setting a bigger cache usually has strong performance benefits:

PRAGMA cache_size=-500000;

Please note: SQLite interprets any *negative* size as being expressed in KB, so the above directive will cause SQLite to allocate as much as 500 MB for its cache, which will obviously dramatically reduce the number of physical I/O operations.

bye Sandro
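
Putting the three suggestions above together, a bulk-import script might look roughly like the sketch below. This is only an illustration of the idea, not anything from the thread: the table and column names (roads, id, name) are made-up placeholders, and in a real conversion the INSERT statements would be generated by the conversion tool, many thousands of rows per transaction.

-- run once, on the brand new empty DB-file, before the import starts
PRAGMA journal_mode=OFF;      -- no rollback journal: fast, but no crash safety
PRAGMA cache_size=-500000;    -- negative value = size in KB, i.e. roughly 500 MB of cache

-- first batch: one explicit transaction instead of one implicit COMMIT per INSERT
BEGIN;
INSERT INTO roads (id, name) VALUES (1, 'First Street');    -- placeholder rows
INSERT INTO roads (id, name) VALUES (2, 'Second Street');
-- ... many thousands more INSERTs ...
COMMIT;                       -- e.g. one COMMIT every 64K inserted rows

-- repeat BEGIN / INSERTs / COMMIT for every following batch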
