For the time being, I have been avoiding VACUUM on very large DBs by
creating a new iteration of the table/DB for each transformation, instead
of using UPDATE/DELETE (given that I only have one table per DB):

(1) create new DB_V2 / Table_V2
(2) attach DB_V1 / Table_V1
(3) insert into Table_V2 select (column list with transformations) from
Table_V1
(4) drop DB_V1
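The four steps above can be sketched with Python's sqlite3 module; the
table name, columns, and the UPPER() transformation are made-up
placeholders for illustration:

```python
# Sketch of the "new iteration per transformation" workflow:
# build DB_V2 from DB_V1 via ATTACH + INSERT...SELECT, then drop DB_V1.
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
v1 = os.path.join(tmp, "db_v1.sqlite")
v2 = os.path.join(tmp, "db_v2.sqlite")

# Set up DB_V1 with some sample rows (stand-in for the real data).
con = sqlite3.connect(v1)
con.execute("CREATE TABLE t (id INTEGER, name TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [(1, "ab"), (2, "cd")])
con.commit()
con.close()

# (1) create DB_V2 / Table_V2
con = sqlite3.connect(v2)
con.execute("CREATE TABLE t (id INTEGER, name TEXT)")
# (2) attach DB_V1
con.execute("ATTACH DATABASE ? AS old", (v1,))
# (3) insert transformed rows from the old table into the new one
con.execute("INSERT INTO t SELECT id, UPPER(name) FROM old.t")
con.commit()
con.execute("DETACH DATABASE old")
con.close()

# (4) drop DB_V1 by simply deleting its file
os.remove(v1)
```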

If there are too many transformations, I just do it in a few iterations.

By using 2 separate disks for the 2 DBs/tables, one disk only reads and
the other only writes, and rows never grow/shrink. Especially if both
disks are SSDs, this works quite fast, with no need to VACUUM the final
version.

What would make it even better would be the possibility of using CREATE
TABLE X NOLOGGING AS SELECT ... FROM Y;

Is there a way to get NOLOGGING behavior in SQLite syntax - meaning that
if something fails in the destination table/DB, you are prepared to just
drop it and start over?
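As an aside: SQLite has no NOLOGGING clause, but the pragmas below (which
do exist in SQLite) disable the rollback journal and fsync on the
destination connection, which seems close in spirit - if the copy fails,
the new file may be corrupt and must simply be deleted. The file and
table names here are illustrative only:

```python
# Approximating NOLOGGING for a bulk load into a brand-new DB:
# journal_mode=OFF drops the rollback journal, synchronous=OFF skips
# fsync. On failure the destination file is untrustworthy - delete it
# and start over, which matches the stated trade-off.
import os
import sqlite3
import tempfile

dest = os.path.join(tempfile.mkdtemp(), "db_v2.sqlite")
con = sqlite3.connect(dest, isolation_level=None)  # autocommit mode
con.execute("PRAGMA journal_mode = OFF")  # no rollback journal
con.execute("PRAGMA synchronous = OFF")   # no fsync; faster bulk load
con.execute("CREATE TABLE t AS SELECT 1 AS id, 'x' AS name")
con.close()
```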

Thanks !

On Thu, Mar 22, 2012 at 3:06 PM, Scott Hess <[email protected]> wrote:

> On Tue, Mar 20, 2012 at 8:25 PM, Jay A. Kreibich <[email protected]> wrote:
> > On Tue, Mar 20, 2012 at 01:59:59PM -0700, Udi Karni scratched on the
> wall:
> >> Is there a way to go directly from "original" to "journal/final" -
> >> skipping the creation of the Temp version?
> >
> >  No, it requires all three copies.
> <...>
> >  Almost exactly two years ago I proposed a "VACUUM TO <filename>"
> >  version of the command that did the first copy and then quit.  Rather
> >  than building an optimized temp copy, VACUUM TO would copy the
> >  current database to an optimized named file (rather than a temp file),
> >  and then skip the copy-back stage.  This would allow a system admin
> >  to shut down all database users, VACUUM the database, swap files,
> >  and finally restart everything.  The process would require more
> >  manual work, but would only require 2x the drive space, rather than
> >  3x.  Nobody spoke up about the idea, however.
>
> I think you could manage 2x-the-drive-space without shutdown by
> writing a read-only VFS which treated the pages in the journal as its
> backing store, faulting misses through to the main file.  Then you
> could VACUUM from the database-in-the-journal to the
> database-in-the-database.  In case of failure, the journal rolls
> things back like you'd expect.
>
> I _think_ this would work.
>
> -scott
>  _______________________________________________
> sqlite-users mailing list
> [email protected]
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
