If you use the SQLite backup API to 'backup' (save) the in-memory database
to the disk database, an atomic transaction is used to write the data to
the destination database. This means that all of SQLite's usual guarantees
apply: if the operation completes, the disk database will contain a
complete, consistent copy of the in-memory database; if the operation does
not complete, the next program to open the disk database will see the hot
journal, roll it back, and see the complete, consistent version of the
on-disk database as it was before you started the backup.
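As a minimal sketch of that procedure, using Python's stdlib sqlite3
module (whose Connection.backup() wraps the SQLite backup API; the table
and file names here are just illustrative):

```python
import os
import sqlite3
import tempfile

# Illustrative on-disk destination; a real app would use its own path.
path = os.path.join(tempfile.mkdtemp(), "saved.db")

# The in-memory working copy.
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE t(x)")
mem.executemany("INSERT INTO t VALUES (?)", [(1,), (2,)])
mem.commit()

# Copy the whole in-memory database to disk via the backup API.
# The destination sees the copy appear atomically.
disk = sqlite3.connect(path)
mem.backup(disk)
disk.close()
```

If the process dies mid-backup, the destination file is either the old
consistent database (after journal rollback) or the new complete copy.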

Note that if the database and its journal are separated, or the journal is
deleted, then this guarantee is lost and the database may be corrupt. In
that respect the backup API isn't much better than your other idea of
copy-then-save.

If performance is important, this is one of the few cases where you will be
much better off using a rollback journal than WAL: with a rollback journal
the final COMMIT consists merely of deleting the journal file, whereas in
WAL mode every change must be written at least twice (once to the WAL, then
again at checkpoint).
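Selecting the rollback journal is just a PRAGMA on the destination
connection; a quick sketch (file path illustrative):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "saved.db")
disk = sqlite3.connect(path)

# DELETE is the classic rollback-journal mode: COMMIT finishes by
# unlinking the journal file, a single cheap filesystem call.
# The PRAGMA returns the mode actually in effect.
mode = disk.execute("PRAGMA journal_mode=DELETE").fetchone()[0]
```

(DELETE is SQLite's default, so this usually only matters if the file was
previously switched to WAL.)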

If performance is critical and you intend to overwrite, then you can
VACUUM INTO a new database file on disk, then use OS calls to replace the
original database. I'm pretty sure this would be quicker than the backup
API, but you'd need custom rollback logic, and you'd have to check whether
an OS rename-and-overwrite is atomic on your platform. Not sure on this
one; if someone on the list knows I'm wrong, please correct me.
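A sketch of that write-then-swap idea in Python (VACUUM INTO needs SQLite
3.27+; the paths are illustrative, and the atomicity caveat above still
applies on Windows):

```python
import os
import sqlite3
import tempfile

d = tempfile.mkdtemp()
target = os.path.join(d, "app.db")       # illustrative on-disk database
staging = os.path.join(d, "app.db.new")  # temporary file beside it

mem = sqlite3.connect(":memory:", isolation_level=None)  # autocommit
mem.execute("CREATE TABLE t(x)")
mem.execute("INSERT INTO t VALUES (42)")

# Write a compact, self-contained copy of the in-memory db to a new file...
mem.execute("VACUUM INTO ?", (staging,))

# ...then swap it into place. os.replace() is atomic on POSIX; on Windows
# it uses MoveFileEx with MOVEFILE_REPLACE_EXISTING, whose atomicity is
# less clearly guaranteed across all filesystems.
os.replace(staging, target)
```

If the process dies before the replace, the original database is untouched
and only the staging file needs cleaning up, which is the custom rollback
logic mentioned above.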

You mentioned wanting to just be able to track the changes to the in-memory
database, and commit them to disk when the user presses save. SQLite has an
extension to do the hard work of this for you:
https://www.sqlite.org/sessionintro.html . You still get transactions,
with all the atomicity guarantees of an always-consistent database.

--------------

Off topic: There are many good reasons to use an in-memory SQLite database
for what ardi wants to do. Some of them are:
 - The biggest: You get to query your data using SQL
 - You get constraint enforcement
 - You get indices
 - You don't need to write much code to differentiate between working on
the in-memory copy and an on-disk database (or indeed another db engine)
 - Client expectations of 'open' and 'save' behaviour prevent you from
writing directly to the on-disk database.

For performance reasons I've had to write custom code where I have changing
data that I need to keep 'indices' on. I found it very difficult to get it
right, and very fragile. If I didn't need the performance, I would much
rather have used an in-memory DB (sadly it just wasn't fast enough).

On Sun, 14 Jul 2019 at 22:26, Jens Alfke <j...@mooseyard.com> wrote:

>
> > On Jul 14, 2019, at 10:05 AM, ardi <ardillasdelmo...@gmail.com> wrote:
> >
> > Do you have any recommendation for saving the inmemory db in a safe
> > way?
>
> If you’re going to keep your data in memory, there’s no good reason to use
> SQLite at all. Just define custom model objects that you can operate on
> efficiently, and write the data out in a standard format like JSON or XML.
> (Most frameworks have serialization features that make this easy.)
>
> The point of using a database is that you _don’t_ have all the data in
> memory, and can still access it quickly. This is attractive if you want to
> scale to large data sets that won’t fit in RAM or which take too long to
> read/write from storage. If that’s not an issue for you, don’t use a
> database; it just adds more complexity to your data model layer.
>
> —Jens
> _______________________________________________
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>