Hello folks,

At $work we have an application that processes *huge* databases (tens of
millions of rows in some of the larger tables, sometimes over 30 GiB file
size). The application evolves, and when it does, it drops some tables and
recalculates them. What is somewhat surprising is that dropping the tables
itself takes quite a long time (on the order of minutes).

 - What is the reason it might take that long? I didn't expect removing
   the table entry from sqlite_master and adding its pages to the free
   list to take that long.
 - Is there any way to speed it up? The application works in big tasks,
   each of which opens a transaction and creates one or a few tables,
   dropping any old versions of those tables first. Could moving the
   drops out of the transaction help (see the sketch after this list)?
   That would still be correct: once a table is found obsolete, it would
   still be obsolete after a rollback and retry. But it would take quite
   a bit of refactoring, so I'd only do it if it's likely to help
   significantly.
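
To make the question concrete, here is roughly what each task does now
and the variant I'm considering, as a minimal Python/sqlite3 sketch (the
file name, table name, and schema are made up for illustration):

    import sqlite3

    # isolation_level=None puts the connection in autocommit mode,
    # so the explicit BEGIN/COMMIT below control the transactions.
    conn = sqlite3.connect("work.db", isolation_level=None)

    def run_task_current(conn):
        # Current pattern: drop and rebuild inside one big transaction.
        conn.execute("BEGIN")
        conn.execute("DROP TABLE IF EXISTS results")
        conn.execute("CREATE TABLE results (id INTEGER PRIMARY KEY, value REAL)")
        # ... heavy recalculation and inserts here ...
        conn.execute("COMMIT")

    def run_task_separate_drop(conn):
        # Considered variant: drop in its own short transaction first,
        # then rebuild in the big one.
        conn.execute("BEGIN")
        conn.execute("DROP TABLE IF EXISTS results")
        conn.execute("COMMIT")
        conn.execute("BEGIN")
        conn.execute("CREATE TABLE results (id INTEGER PRIMARY KEY, value REAL)")
        # ... heavy recalculation and inserts here ...
        conn.execute("COMMIT")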

Thanks,
Jan

-- 
                                                 Jan 'Bulb' Hudec <b...@ucw.cz>