On Tue, May 31, 2011 at 16:44:13 -0500, Nico Williams wrote:
> On Tue, May 31, 2011 at 4:22 PM, Simon Slavin <slav...@bigfraud.org> wrote:
> > Split the DROP into two stages:
> >
> > DELETE FROM myTable;
> > DROP TABLE myTable;
> >
> > Which one takes all the time?  If it's the second one, then perhaps just
> > delete all the records.  Filling the table back up again with new rows
> > obviously already takes considerable time, so adding extra time may not be
> > as inconvenient.

The first one alone takes more time than the DROP alone did before (I already
tried that some time ago).

> That's gonna have the same problem: in order to find out what pages to
> add to the free list the engine will have to traverse the interior
> nodes of the table's b-tree.
> 
> You could rename the table and then delete from ... order by rowid asc
> limit 1000, to delete 1000 rows at a time.  Add in incremental
> autovacuum and that might do the trick.
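
If I follow, that would be something like this (just a sketch; note that
DELETE with ORDER BY/LIMIT only works when SQLite is built with
SQLITE_ENABLE_UPDATE_DELETE_LIMIT):

    ALTER TABLE myTable RENAME TO myTable_doomed;
    -- loop from application code until no rows are affected:
    DELETE FROM myTable_doomed ORDER BY rowid ASC LIMIT 1000;
    PRAGMA incremental_vacuum(250);  -- only frees pages if the database
                                     -- was created with auto_vacuum=INCREMENTAL
    -- once empty, the drop itself is cheap:
    DROP TABLE myTable_doomed;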

The autovacuum would just add more work reshuffling the data around. And it's
not as if the pages are going to stay unused for long.

> You could also re-create the DB in a new file then use the filesystem
> to delete the old file -- the filesystem will have to do the same work
> to free filesystem blocks, more or less, but it will do it asynchronously,
> in the background.
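
If I understand correctly, that amounts to something like this (a sketch; the
file and table names are made up, and CREATE TABLE ... AS SELECT does not
copy indexes or constraints):

    ATTACH DATABASE 'new.db' AS newdb;
    -- repeat for every table that should survive:
    CREATE TABLE newdb.keepTable AS SELECT * FROM main.keepTable;
    DETACH DATABASE newdb;
    -- then replace the old file with new.db at the filesystem level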

I am deleting one table, but there are many, many others in the file that
I need. I did think about having each table in its own file and attaching
the ones a query needs (the number of attached databases is limited, but I
don't think a single statement ever uses more than 6 or 7 tables, so it
should be doable). It would be a lot of work to convert, though.
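
For the record, the attach scheme I have in mind would look roughly like this
(sketch; the per-table file names are hypothetical, and the default limit on
attached databases is 10):

    ATTACH DATABASE 'orders.db' AS odb;   -- one file per table
    ATTACH DATABASE 'items.db'  AS idb;
    SELECT o.id, i.name
      FROM odb.orders AS o
      JOIN idb.items  AS i ON i.order_id = o.id;
    -- dropping a table then reduces to detaching and unlinking its file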

> Personally I recommend Roger's VIEW approach to schema changes to
> minimize data re-write overhead.

Well, besides there being many tables that are written from code (the data
still come from the database, but some complex transformations are involved),
the main reason is that when table X changes, I need to rebuild that table
and the tables that depend on it, but not the tables it depends on. If it
depended on views instead, I would have to reread them. And since each join
involved makes the query a bit slower (especially when joining huge tables,
as every query would if I used views), it would quickly become unbearable.
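
To illustrate the difference (hypothetical table names): a materialized table
is rebuilt once when its inputs change, while a view re-pays the join on
every read:

    -- rebuilt only when base_a or base_b change:
    CREATE TABLE derived AS
      SELECT a.id, SUM(b.val) AS total
        FROM base_a AS a JOIN base_b AS b ON b.a_id = a.id
       GROUP BY a.id;
    -- re-runs the join on every query that reads it:
    CREATE VIEW derived_v AS
      SELECT a.id, SUM(b.val) AS total
        FROM base_a AS a JOIN base_b AS b ON b.a_id = a.id
       GROUP BY a.id;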

-- 
                                                 Jan 'Bulb' Hudec <b...@ucw.cz>