Just a guess: finding all the pages to free requires traversing the
internal nodes of the table's b-tree, which means reading a fair
fraction of the b-tree itself, and that can be a lot of I/O.  At
150MB/s it would take almost two minutes to read 15GB of b-tree pages
from a single disk (15GB / 150MB/s ~= 100 seconds), and that's
assuming the I/Os are sequential (which they almost certainly will not
be).  So you can see why the drops might be slow.

One workaround would be to rename the tables you want to drop and then
drop them later, when you can spare the time.
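
Something like this, say (table names made up, of course):

    ALTER TABLE huge_table RENAME TO huge_table_doomed;
    -- ... later, during a quiet period:
    DROP TABLE huge_table_doomed;

The rename is just a schema change, so it should be fast; the
expensive page-freeing work gets deferred to whenever you run the
DROP.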

Longer term it'd be nice if SQLite3 could free a dropped table's pages
incrementally rather than all at once, assuming my guess above is
correct, anyway.
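
In the meantime you could approximate incremental freeing yourself by
deleting rows in batches before the final DROP, so pages drift onto
the freelist a chunk at a time instead of all at once.  A rough sketch
(batch size and table name are made up; assumes an ordinary rowid
table):

    -- repeat until no rows remain:
    DELETE FROM huge_table
     WHERE rowid IN (SELECT rowid FROM huge_table LIMIT 10000);

    -- then the drop only has a mostly-empty b-tree left to free:
    DROP TABLE huge_table;

Run outside an explicit transaction, each DELETE commits on its own,
so the I/O gets spread out rather than happening in one long burst.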

Nico