On Wed, Aug 7, 2013 at 12:15 PM, Dominique Devienne <[email protected]> wrote:

> On Wed, Aug 7, 2013 at 2:45 PM, Clemens Ladisch <[email protected]>
> wrote:
>
> > Dominique Devienne wrote:
> > > We can of course copy the db file somewhere else with r/w access, or
> > > copy the DB into an in-memory DB (for each table, create table
> > > memdb.foo as select * from dskdb.foo) and upgrade and read that one
> > > instead, but I was wondering whether there's a better solution we
> > > could use?
> >
> > You can use the backup API to copy an entire database at once:
> > <http://www.sqlite.org/backup.html>
> >
>
> Thanks. That's more efficient and less code for sure, and what we're using
> now.
>
> I just thought there might be a different trick possible to avoid
> duplicating the whole DB, like forcing the journal to be in-memory, or
> using WAL instead, or something.
>
> If that's the best we can do, then so be it.
>
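
For reference, the backup-API copy discussed above is only a few lines of
C.  A minimal sketch, with illustrative names and most error handling
elided:

    #include <sqlite3.h>

    /* Copy the whole read-only disk DB into a fresh in-memory DB that
    ** can then be upgraded and queried freely.  copy_to_memory() is an
    ** illustrative name, not part of any API. */
    static int copy_to_memory(const char *zDiskPath, sqlite3 **ppMem){
      sqlite3 *pDisk = 0, *pMem = 0;
      int rc = sqlite3_open_v2(zDiskPath, &pDisk, SQLITE_OPEN_READONLY, 0);
      if( rc==SQLITE_OK ) rc = sqlite3_open(":memory:", &pMem);
      if( rc==SQLITE_OK ){
        sqlite3_backup *pBk = sqlite3_backup_init(pMem, "main", pDisk, "main");
        if( pBk ){
          sqlite3_backup_step(pBk, -1);     /* -1 means copy all pages */
          rc = sqlite3_backup_finish(pBk);
        }else{
          rc = sqlite3_errcode(pMem);
        }
      }
      sqlite3_close(pDisk);
      *ppMem = pMem;                  /* caller closes, even on error */
      return rc;
    }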

That's probably about the best that is built-in.  However...

You could write a "shim" VFS to fake a filesystem that appears to provide
read/write semantics but which really only reads.  All writes would be
stored in memory and would be forgotten the moment you close the database
connection.

The "VFS" is an object that sets in between the SQLite core and the
operating system and provides a uniform interface to operating-system
services.  SQLite comes with built-in VFSes for unix and windows.  But you
can add additional VFSes at run-time.  A "shim" VFS is one that passes most
of the work through to one of the original VFSes but modifies some of the
requests.
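
Registering such a shim at run-time amounts to copying the real VFS
object and overriding a few of its methods.  A rough sketch (the
"rdwr-shim" name and the shimOpen override are hypothetical; only
sqlite3_vfs_find() and sqlite3_vfs_register() are real API):

    static sqlite3_vfs shimVfs;   /* our shim, cloned from the real VFS */

    int register_shim_vfs(void){
      sqlite3_vfs *pBase = sqlite3_vfs_find(0); /* default unix/windows VFS */
      if( pBase==0 ) return SQLITE_ERROR;
      shimVfs = *pBase;                /* start as an exact copy */
      shimVfs.zName = "rdwr-shim";
      /* shimVfs.xOpen = shimOpen;     override only what must change */
      return sqlite3_vfs_register(&shimVfs, 0); /* 0: don't make default */
    }

You would then open the database against that VFS by name, for example
sqlite3_open_v2(zPath, &db, SQLITE_OPEN_READWRITE, "rdwr-shim").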

So, your shim VFS would keep an in-memory record of all modified database
pages.  A read request first checks this modification cache and is answered
from it if the page is present; otherwise the request is passed down to the
original (real) VFS.  A write request simply updates the modification cache.
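
Continuing the sketch, the two I/O methods might look roughly like the
code below.  cache_lookup() and cache_store() are hypothetical helpers
over any offset-keyed map, and a real shim would also have to wrap
xOpen, xSync, xFileSize, xLock, and friends:

    typedef struct ShimFile ShimFile;
    struct ShimFile {
      sqlite3_file base;     /* must come first: SQLite sees this part */
      sqlite3_file *pReal;   /* the underlying read-only file */
      /* ... plus a pointer to the in-memory modification cache ... */
    };

    static int shimRead(sqlite3_file *pFile, void *pBuf,
                        int iAmt, sqlite3_int64 iOfst){
      ShimFile *p = (ShimFile*)pFile;
      if( cache_lookup(p, pBuf, iAmt, iOfst) ){
        return SQLITE_OK;    /* serve the modified page from memory */
      }
      return p->pReal->pMethods->xRead(p->pReal, pBuf, iAmt, iOfst);
    }

    static int shimWrite(sqlite3_file *pFile, const void *pBuf,
                         int iAmt, sqlite3_int64 iOfst){
      ShimFile *p = (ShimFile*)pFile;
      return cache_store(p, pBuf, iAmt, iOfst);  /* never touches disk */
    }

Everything written through the shim evaporates when the connection
closes, which is exactly the throw-away upgrade behavior described.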

>
> The reason I'd rather avoid the full copy is that the upgrade typically
> updates only a tiny subset of the DB, which I've seen as large as 100 MB.
> Duping a 100 MB disk DB into a 100 MB memory DB, just so I can select from
> it (at the proper schema version) to populate my C++ data structures
> (likely smaller than 100 MB, but still in the 20-50 MB range), adds quite
> a bit of memory consumption, while the journal itself would grow to only
> a few MBs.
>
> Thanks again for the suggestion though. --DD

-- 
D. Richard Hipp
[email protected]