Looking at https://sqlite.org/backup.html (extract below; my emphasis), the
Backup API restarts the backup if an update (not a read) occurs during the
backup → it might silently never complete if the backup takes longer than the
archive interval.
This could be dealt with by aborting the backup if it runs into the end of the
archive interval → tell the user to use some other backup method. A rough
sketch of that idea follows.
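
Something like this, using Python's sqlite3.Connection.backup() (the paths,
the 300 sec limit, and the BackupTimeout name are my own inventions; the trick
is that raising from the progress callback aborts the backup and propagates
the exception out of backup()):

import sqlite3
import time

class BackupTimeout(Exception):
    pass

def backup_with_deadline(src_path, dst_path, deadline_secs=300):
    """Copy src_path to dst_path, aborting after one archive interval."""
    start = time.monotonic()

    def progress(status, remaining, total):
        # Called after every backup step; raising here aborts the backup.
        if time.monotonic() - start > deadline_secs:
            raise BackupTimeout(
                "backup did not finish within the archive interval; "
                "use some other backup method")

    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    try:
        src.backup(dst, pages=100, progress=progress)
    finally:
        dst.close()
        src.close()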

I did a couple of simple timing tests.
RPi 4B (SD card), weewx.sdb 104.5 MB (381k records; not sure how many 'pages'),
no progress reporting:
  1 page at a time, default 250 ms sleep: backup took 10.5 secs elapsed
  10 pages ditto: 7.9 secs elapsed
  100 pages ditto: 7.0 secs elapsed
  1000 pages ditto: 6.7 secs elapsed
Conclusion: only seriously under-powered boxes would be unable to complete
within a typical 300 sec archive interval.
It would be good if someone with such a box gave it a try; a sketch of the
test is below.
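
Roughly what I ran (file names are placeholders). Note that as far as I can
tell Python's backup() only sleeps between steps when the source reports busy
or locked, which is why even 1-page steps finish in seconds rather than
pages × 250 ms:

import sqlite3
import time

# Rough sketch of the timing test; "weewx.sdb" and the output file
# names are placeholders. sleep defaults to 0.250 secs but only
# applies when a step reports the source as busy/locked.
for pages in (1, 10, 100, 1000):
    src = sqlite3.connect("weewx.sdb")
    dst = sqlite3.connect(f"weewx-backup-{pages}.sdb")
    t0 = time.monotonic()
    src.backup(dst, pages=pages)
    print(f"{pages:4d} pages per step: {time.monotonic() - t0:.1f} secs elapsed")
    dst.close()
    src.close()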


File and Database Connection Locking

During the 250 ms sleep in step 3 above, no read-lock is held on the database 
file and the mutex associated with pDb is not held. This allows other threads 
to use database connection <https://sqlite.org/c3ref/sqlite3.html> pDb and 
other connections to write to the underlying database file.

If another thread or process writes to the source database while this function 
is sleeping, then SQLite detects this and usually restarts the backup process 
when sqlite3_backup_step() is next called. There is one exception to this rule: 
If the source database is not an in-memory database, and the write is performed 
from within the same process as the backup operation and uses the same database 
handle (pDb), then the destination database (the one opened using connection 
pFile) is automatically updated along with the source. The backup process may 
then be continued after the sqlite3_sleep() call returns as if nothing had 
happened.

Whether or not the backup process is restarted as a result of writes to the 
source database mid-backup, the user can be sure that when the backup operation 
is completed the backup database contains a consistent and up-to-date snapshot 
of the original. However:

* Writes to an in-memory source database, or writes to a file-based source
  database by an external process or thread using a database connection other
  than pDb are significantly more expensive than writes made to a file-based
  source database using pDb (as the entire backup operation must be restarted
  in the former two cases).
* If the backup process is restarted frequently enough it may never run to
  completion and the backupDb() function may never return.
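
In Python terms, I'd expect such a restart to show up in the progress
callback as the remaining page count jumping back toward the total. A little
watcher like this (my own sketch, not anything from the API docs) could at
least log when it happens:

def make_restart_logger():
    """Progress callback that reports apparent backup restarts.

    Assumption: when sqlite3_backup_step() restarts the copy after an
    external write, 'remaining' jumps back up toward 'total'.
    """
    last_remaining = None

    def progress(status, remaining, total):
        nonlocal last_remaining
        if last_remaining is not None and remaining > last_remaining:
            print("backup restarted: the source was written to mid-backup")
        last_remaining = remaining

    return progress

# usage: src.backup(dst, pages=100, progress=make_restart_logger())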




> On 11 Jan 2021, at 2:22 am, Tom Keffer <tkef...@gmail.com> wrote:
> 
> If the backup takes long enough, it could interfere with writing a record to 
> the database. Eventually, the write will time out, causing weewxd to restart 
> from the top. It won't crash weewxd (that is, cause it to exit), nor corrupt 
> the database, but the record would be lost. 
> 
> That's the advantage of the incremental backup approach. The backup process 
> never holds a lock on the database for very long. Just a second or two.
> 
> BTW, the backup API 
> <https://docs.python.org/3/library/sqlite3.html#sqlite3.Connection.backup> is 
> now included in Python starting with V3.7. If you have a modern version of 
> Python, you don't have to write a C program to access the API.
> 
> I'm hoping to inspire someone to write a simple script that would run once a 
> day using the backup API.
> 
> -tk
