Hi,

Given a fairly large database (dozens of gigabytes), which uses WAL, and is
being accessed continuously (mostly read transactions, but regular write
transactions as well), what are the fastest and least disruptive ways to
back it up?

A basic ".backup" from the CLI can occasionally take hours, as it is
thrown off (restarted from scratch) whenever large write transactions
occur on another connection.
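For context, my understanding is that ".backup" is built on the Online
Backup API, roughly as in the sketch below (the step size, names, and
error handling are illustrative, not necessarily what the shell really
does). Copying a batch of pages per step keeps the source usable in
between, but a write from a different connection makes the copy start
over, which is how a busy database stretches the backup out for hours:

#include <sqlite3.h>

/* Rough equivalent of ".backup": incremental Online Backup API.
** Copies up to 1000 pages per step; between steps the source stays
** readable and writable, but a write through another connection
** restarts the whole copy from the beginning. */
static int backup_to(sqlite3 *src, const char *dest_path){
  sqlite3 *dest = 0;
  int rc = sqlite3_open(dest_path, &dest);
  if( rc==SQLITE_OK ){
    sqlite3_backup *b = sqlite3_backup_init(dest, "main", src, "main");
    if( b ){
      do{
        rc = sqlite3_backup_step(b, 1000);          /* copy 1000 pages */
        if( rc==SQLITE_BUSY || rc==SQLITE_LOCKED ) sqlite3_sleep(250);
      }while( rc==SQLITE_OK || rc==SQLITE_BUSY || rc==SQLITE_LOCKED );
      sqlite3_backup_finish(b);     /* releases the backup handle */
    }
    rc = sqlite3_errcode(dest);     /* SQLITE_OK if the copy completed */
  }
  sqlite3_close(dest);
  return rc;
}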

I have found that the following approaches seem to help, but would like
confirmation (see the sketch after this list):
- using temp_store = MEMORY
- using synchronous = 0
- using as large a cache_size as possible without running out of memory
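Concretely, here is a minimal sketch of what I mean, with the pragmas
applied to the connection before the copy starts (the 2 GB cache figure
is just a placeholder for my setup, not a recommendation):

#include <sqlite3.h>

/* Pragmas applied before kicking off the backup. */
static void apply_backup_pragmas(sqlite3 *db){
  /* Keep temporary tables and indices in RAM rather than on disk. */
  sqlite3_exec(db, "PRAGMA temp_store = MEMORY", 0, 0, 0);
  /* synchronous = 0 (OFF): skip fsync(); acceptable only because a
  ** backup that fails mid-way is discarded and redone anyway. */
  sqlite3_exec(db, "PRAGMA synchronous = 0", 0, 0, 0);
  /* A negative cache_size is in KiB, a positive one in pages;
  ** -2000000 asks for roughly 2 GB of page cache. */
  sqlite3_exec(db, "PRAGMA cache_size = -2000000", 0, 0, 0);
}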

I am going on the assumption that if anything fails during the backup,
the backup file is toast and will be redone anyway, so synchronous = 0
costs nothing there; but is that assumption safe otherwise?

Also, the precompiled 3.14.2 CLI for Windows (sqlite3.exe) fails at
around 2 GB of RAM, which in my case is about 500,000 pages (at a 4 KiB
page size, 500,000 x 4,096 bytes is roughly 2 GB, presumably the
address-space ceiling of a 32-bit process). Is such a cache already too
high to be useful, or would it be worth switching to a 64-bit CLI?

Thanks!

Eric