On 5 Oct 2016, at 1:07pm, Eric Grange wrote:
> I have added SQLITE_OPEN_EXCLUSIVE to the flags of the backup db.
That's a very good idea. Might add that to some tips I use elsewhere. Thanks.
Simon.
___
sqlite-users mailing list
> But yes, if you have an application which uses only the backup API it
should be safe.
I have added SQLITE_OPEN_EXCLUSIVE to the flags of the backup db. That
should cover accidental cases where the backup db is opened before or
during the backup by "something else"? It should never happen in my
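Not from the thread, but the same guard can be sketched with Python's stdlib sqlite3 module (which wraps the same C backup API). Python can't pass SQLITE_OPEN_EXCLUSIVE, so this sketch approximates it with an O_EXCL pre-create of the target file; the paths and table are made up for illustration:

```python
import os
import sqlite3
import tempfile

src_path = os.path.join(tempfile.mkdtemp(), "main.db")
dst_path = os.path.join(tempfile.mkdtemp(), "backup.db")

src = sqlite3.connect(src_path)
src.execute("CREATE TABLE t(x)")
src.execute("INSERT INTO t VALUES (1)")
src.commit()

# Approximate SQLITE_OPEN_EXCLUSIVE: fail if the target already exists,
# e.g. a stale backup or one created concurrently by "something else".
fd = os.open(dst_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
os.close(fd)

dst = sqlite3.connect(dst_path)   # a zero-byte file is a valid empty db
src.backup(dst)                   # one-shot copy via the backup API
dst.close()

check = sqlite3.connect(dst_path)
print(check.execute("SELECT x FROM t").fetchone()[0])  # prints 1
```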
Eric Grange wrote:
> Ran some tests with variations of Clemens's backup example, and adding
>
> sqlite3_exec(src, "PRAGMA cache_size=1", NULL, NULL, NULL);
> sqlite3_exec(dst, "PRAGMA cache_size=1", NULL, NULL, NULL);
>
> seems to provide the best performance [...]
>
> While the
On 5 Oct 2016, at 8:46am, Eric Grange wrote:
>sqlite3_exec(dst, "PRAGMA synchronous=OFF", NULL, NULL, NULL);
Don't do that for any operation outside taking the backup. It's a major cause
of corruption in any situation where two different connections (in the same
Eric Grange wrote:
>> No, this is what makes the backup restart. With a step size of -1, it
>> would never restart.
>
> Hmm, given WAL mode, this brings another question: why does the sqlite CLI
> not do that, but instead use 100 pages?
Because it's much older than WAL mode.
> Is using a backup_step N
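As a runnable illustration of the step-size question (a sketch using Python's stdlib sqlite3, whose Connection.backup exposes the same N-pages-per-step knob as sqlite3_backup_step; variable names are my own):

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t(x)")
src.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10000)])
src.commit()

steps = []
def progress(status, remaining, total):
    steps.append(remaining)   # called once per backup step

# pages=5: many small sqlite3_backup_step(5) calls; a write to src
# between steps would make the backup restart from the beginning.
dst1 = sqlite3.connect(":memory:")
src.backup(dst1, pages=5, progress=progress)
print(len(steps) > 1)   # True: copied in many small steps

# pages=-1 (the default): a single sqlite3_backup_step(-1), so there is
# no window in which a concurrent write can force a restart.
steps.clear()
dst2 = sqlite3.connect(":memory:")
src.backup(dst2, pages=-1, progress=progress)
print(len(steps))       # 1: whole database copied in one step
```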
Ran some tests with variations of Clemens's backup example, and adding
sqlite3_exec(src, "PRAGMA cache_size=1", NULL, NULL, NULL);
sqlite3_exec(dst, "PRAGMA synchronous=OFF", NULL, NULL, NULL);
sqlite3_exec(dst, "PRAGMA cache_size=1", NULL, NULL, NULL);
seems to provide the best performance.
Thanks for the tool Clemens!
> No, this is what makes the backup restart. With a step size of -1, it
would never restart.
Hmm, given WAL mode, this brings another question: why does the sqlite CLI
not do that, but instead use 100 pages?
Is using a backup_step N parameter only useful if you want to
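A hedged translation of the snippet above into Python's stdlib sqlite3 (which wraps the same C API), so the combination can be tried end to end; note synchronous=OFF is applied to the *destination only*, in line with Simon's warning elsewhere in the thread:

```python
import os
import sqlite3
import tempfile

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t(x)")
src.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
src.commit()

dst_path = os.path.join(tempfile.mkdtemp(), "backup.db")
dst = sqlite3.connect(dst_path)

# The pragmas from the post: a tiny cache on both ends (the backup
# streams each page once, so a large cache is wasted), and
# synchronous=OFF on the destination only -- if the backup dies,
# the half-written copy is discarded anyway.
src.execute("PRAGMA cache_size=1")
dst.execute("PRAGMA cache_size=1")
dst.execute("PRAGMA synchronous=OFF")

src.backup(dst)          # copy everything in one step
dst.close()

check = sqlite3.connect(dst_path)
print(check.execute("SELECT count(*) FROM t").fetchone()[0])  # prints 1000
```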
Eric Grange wrote:
> If all else fails, I could also suspend DB writes during backups (suspending
> DB reads would be more problematic).
With WAL, the backup reader does not block writers.
>> Use the backup API, and copy everything in one step.
>> (The restart-on-write feature should not be
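A minimal sketch (Python stdlib sqlite3, my own names, with an assumed temp-file layout) of both points: in WAL mode a writer can commit while a stepwise backup is reading, the backup restarts, and the finished copy includes the new row:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "wal.db")
src = sqlite3.connect(path)
src.execute("PRAGMA journal_mode=WAL")
src.execute("CREATE TABLE t(x)")
src.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(5000)])
src.commit()

writer = sqlite3.connect(path)   # a second connection, playing the writer
wrote = False

def progress(status, remaining, total):
    # Runs between backup steps: commit one write mid-backup.
    global wrote
    if not wrote:
        wrote = True
        writer.execute("INSERT INTO t VALUES (-1)")
        writer.commit()          # not blocked by the backup reader (WAL)

dst = sqlite3.connect(":memory:")
src.backup(dst, pages=5, progress=progress)  # restarts once, then finishes
print(dst.execute("SELECT count(*) FROM t").fetchone()[0])  # prints 5001
```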
On 4 Oct 2016, at 3:58pm, Eric Grange wrote:
> If all else fails, I could also suspend DB writes during backups
If it's easy to do that, then I'd recommend it. Used in conjunction with the
SQLite backup API it should provide the simplest solution least likely to
present a
> Have you looked at the ext/session extension ?
Yes, but it is a bit more complicated to integrate, and would impose a
penalty during execution as far as I understand: there are quite a few
intermediate states that would be stored by a changeset, but that do not
really need to be preserved at
On 4 Oct 2016, at 3:05pm, Simon Slavin wrote:
> No. You have no locking. You might copy the beginning of the file before a
> transaction and the end of the file after it, meaning that pointers at one
> part of the file point to things which no longer exist.
I'm sorry.
Eric Grange wrote:
> Given a fairly large database (dozens of gigabytes), which uses WAL, and is
> being accessed continuously (mostly read transactions, but regular write
> transactions as well), what are the fastest and least disruptive ways to
> back it up?
Use the backup API, and copy everything in one step.
On 4 Oct 2016, at 2:53pm, Eric Grange wrote:
> I am going on the assumption that if something fails during backup, the
> backup itself will be toast anyway, but is that safe otherwise?
No. You have no locking. You might copy the beginning of the file before a
transaction and the end of the file after it, meaning that pointers at one
part of the file point to things which no longer exist.
Hello Eric!
Have you looked at the ext/session extension?
It seems that it can be the answer for your problem. The basic idea is
you'll create a function that will receive the changes made to the
database and then you can incrementally apply them on the backup database.
This way the
Hi,
Given a fairly large database (dozens of gigabytes), which uses WAL, and is
being accessed continuously (mostly read transactions, but regular write
transactions as well), what are the fastest and least disruptive ways to
back it up?
A basic ".backup" from the CLI can occasionally take