On 28 Dec 2015, at 9:01am, Valentin Davydov <sqlite-user at soi.spb.ru> wrote:

> As far as I understand, INTEGRITY_CHECK simply iterates over the records 
> (of tables and indices) one by one in some arbitrary order. So, if the 
> database is too big to fit in the available memory (SQLite's own cache, 
> system file cache, etc.), then each iteration implies a random seek on 
> disk(s), or even several in some scenarios. So, a check of a few-terabyte 
> database with some tens of billions of records and a dozen indices would 
> take more than 10^11 disk operations of more than 10 milliseconds each. 
> That is, years.
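The back-of-envelope arithmetic in the quote does work out to decades. A quick check (the 10^11 operations and 10 ms seek time are taken directly from the estimate above):

```shell
# 10^11 random seeks at 10 ms each, converted to years.
awk 'BEGIN { ops = 1e11; seek_s = 0.010;
             secs = ops * seek_s;            # 1e9 seconds
             printf "%.0f years\n", secs / (365 * 24 * 3600) }'
```

That comes to roughly 32 years, so "years" is, if anything, an understatement for the fully seek-bound worst case.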

Well, I have a 43-gigabyte database at work.  I bet it doesn't take more than 
a few hours to check.  But I can't do it from home, so it'll have to wait 
until I get back to work next week for me to test that theory.

Hmm, note to self: work on a copy, use the command-line tool, .timer ON.
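That note-to-self amounts to something like the following sketch (the filename `work.db` is a placeholder, not the actual database mentioned above; `PRAGMA integrity_check` and the `.timer` dot-command are standard sqlite3 shell features):

```shell
# Check a copy, not the live file, so the check can't interfere
# with the database in use (and vice versa).
cp work.db work-copy.db

# Run the check in the sqlite3 command-line shell with timing enabled;
# .timer on makes the shell report elapsed time for each statement.
sqlite3 work-copy.db <<'EOF'
.timer on
PRAGMA integrity_check;
EOF
```

On a healthy database the pragma prints a single row reading `ok`; otherwise it lists the problems it found.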

Simon.
