On Wed, Aug 7, 2013 at 2:45 PM, Clemens Ladisch <[email protected]> wrote:
> Dominique Devienne wrote:
> > We can of course copy the db file somewhere else with r/w access, or
> > copy the DB into an in-memory DB (for each table, create table
> > memdb.foo as select * from dskdb.foo) and upgrade and read that one
> > instead, but I was wondering whether there's another better solution
> > we could use?
>
> You can use the backup API to copy an entire database at once:
> <http://www.sqlite.org/backup.html>

Thanks. That's more efficient and less code for sure, and it's what we're using now. I just thought there might be a different trick to avoid duplicating the whole DB, like forcing the journal to be in-memory, or using WAL instead, or something. If that's the best we can do, then so be it.

The reason I'd rather avoid the full copy is that the upgrade typically updates only a tiny subset of the DB, which I've seen as large as 100 MB. Duplicating a 100 MB disk DB into a 100 MB memory DB, just so I can select from it (at the proper schema version) to populate my C++ data structures (likely smaller than 100 MB, but still in the 20-50 MB range), adds quite a bit of memory consumption, while the journal itself would grow to only a few MBs.

Thanks again for the suggestion though. --DD
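[For readers following the thread: a minimal sketch of the backup-API approach Clemens suggested, using Python's stdlib `sqlite3` module, whose `Connection.backup` wraps SQLite's `sqlite3_backup_init`/`step`/`finish`. The function name `load_into_memory` is made up for illustration; the technique is the one discussed above — copy a read-only disk DB into a writable in-memory DB in one call, then upgrade and query the copy.]

```python
import sqlite3

def load_into_memory(path):
    """Copy the SQLite database at `path` into a new in-memory database.

    The source is opened read-only via a URI, so the original file is
    never touched; all upgrades are applied to the in-memory copy.
    """
    src = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
    dst = sqlite3.connect(":memory:")
    src.backup(dst)  # one-shot copy of the entire database
    src.close()
    return dst
```

A lighter alternative, per the original table-by-table idea, would ATTACH the disk DB and `CREATE TABLE memdb.foo AS SELECT * FROM dskdb.foo` for only the tables the upgrade needs, at the cost of more code.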

