That depends on the size of the database, the speed of the media you're writing to, and what kind of activity is happening during the backup.
If your file is in the GB range and you're transferring it over a 100 Mbit switch, even a single write per minute will make the backup restart. Reads are one thing and shouldn't restart the backup, but any write from another connection causes the backup to start over from byte #1 and repeat the whole process.

On Wed, Dec 10, 2014 at 11:58 AM, Greg Janée <[email protected]> wrote:
> Hi, I'm using the following code to back up a 300MB database nightly:
>
> #!/bin/bash
> sqlite3 {dbfile} <<EOF
> .timeout 600000
> .backup {backupfile}
> EOF
>
> This works most of the time, but about one out of 10 attempts results in a
> "database is locked" error. When I look at the server logs, I see only
> very light activity at the time the backup was attempted --- maybe one
> transaction every 5-10s, and these transactions are all very short, in
> the millisecond range. Shouldn't a 10min timeout be way more than
> sufficient for the backup to succeed?
>
> Thanks,
> -Greg
>
> _______________________________________________
> sqlite-users mailing list
> [email protected]
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
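For what it's worth, the restart behaviour above comes from SQLite's online-backup API, which is what the shell's ".backup" command sits on. A minimal sketch of a restart-proof alternative, using Python's stdlib sqlite3 wrapper around the same API (the table and data here are invented for illustration): with the default pages=-1, Connection.backup() copies the whole database in a single backup step, so a concurrent writer can never force a restart, at the cost of holding writers off for the duration of the copy.

```python
import sqlite3

# Source database; in-memory here so the sketch is self-contained.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t(x)")
src.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100)])
src.commit()

# Destination for the backup. With pages=-1 (the default), the entire
# database is copied in one backup step: no incremental steps, so no
# window in which another writer can invalidate progress and restart it.
dst = sqlite3.connect(":memory:")
src.backup(dst)

# The copy is a complete, consistent snapshot of the source.
print(dst.execute("SELECT count(*) FROM t").fetchone()[0])
```

For a 300MB file the one-shot copy should only take a few seconds, which is usually an acceptable write stall; if it isn't, passing a positive pages value plus a sleep between steps gets you the incremental behaviour back, restarts and all.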

