On 06/01/2011 02:18 AM, Jan Hudec wrote:
> Hello folks,
>
> At $work we have an application that processes *huge* databases (tens of
> millions of rows in some of the larger tables, sometimes over 30GiB file
> size). This application changes, and when it does, it drops some tables and
> recalculates them. What is somewhat surprising is that dropping the tables
> itself takes quite a long time (on the order of minutes).
>
>   - What is the reason it might take that long? I didn't expect removing the
>     table entry in sqlite_master and adding it's pages to the free list to
>     take that long.
>   - Is there any way to speed it up? The application works in big tasks, each
>     of which opens a transaction and creates one or a few tables, dropping any
>     old versions of those tables first. Could moving the drops out of the
>     transaction help? It would still be correct: once a table is found
>     obsolete, it would be found obsolete again after a rollback and retry. But
>     it would take quite a bit of refactoring, so I'd only do it if it's likely
>     to help significantly.

If you have foreign keys enabled (and one or more FKs that involve
the table being dropped), that can slow things down. If this is
the case, try using the pragma to disable FKs before running the
DROP TABLE.
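
Something along these lines (a minimal sketch; the table name is made
up). Note that PRAGMA foreign_keys is a no-op inside a transaction, so
it has to be issued before the transaction is opened:

    -- Disable FK enforcement; must happen outside any transaction.
    PRAGMA foreign_keys = OFF;

    BEGIN;
    DROP TABLE IF EXISTS results_old;   -- hypothetical table name
    -- ... recreate and repopulate the table here ...
    COMMIT;

    -- Re-enable enforcement afterwards.
    PRAGMA foreign_keys = ON;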
