On 18 Dec 2017, at 12:28pm, Dinu <dinumar...@gmail.com> wrote:

> Actually I realized that the DB page size is 1K. Is this bad? I tried to run
> the pragma query with 1M pages, to amount to the same 1G; there seems to be
> a dramatic improvement in throughput at the beginning of the query,

The problem is not the page size or the total number of pages.  The problem is 
the total data size (i.e. the number of pages times the page size).
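
If you want to confirm the numbers, both figures can be read straight from the 
database.  A quick check along these lines (the values in the comments are just 
the ones mentioned above):

    PRAGMA page_size;     -- bytes per page, e.g. 1024 here
    PRAGMA page_count;    -- pages in the file, e.g. around 1,000,000
    -- total data size = page_count * page_size, roughly 1 GB in this case,
    -- and that total is what has to move through the caches discussed below.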

> but it
> quickly succumbs to nonlinear slow-down-to-a-drag nonetheless.
> 
> Richard, leaving alternate strategies aside (DROP, TRUNCATE etc.), what
> could be the reason behind this non-linear delete behavior?

This is not behaviour built into the source code for SQLite.  What you are 
seeing is the result of the size of a cache.  If the changes fit within a 
certain cache size, everything is fast.  Once you've bust the cache, things 
slow down.

Which cache is being bust can be harder to figure out.  There are at least 
three involved, and only one of them is under SQLite's control.
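
The one SQLite itself controls is its own page cache, which you can inspect and 
resize per connection (the others are presumably the OS filesystem cache and the 
drive's own cache, which SQLite cannot touch).  A sketch -- the -1000000 figure 
is purely illustrative, meaning roughly 1 GB, not a recommendation:

    PRAGMA cache_size;              -- current setting; negative values are in KiB
    PRAGMA cache_size = -1000000;   -- ask for about 1 GB of page cache (illustrative)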

In this thread we’ve given you several things to check.  The latest seem to be:

1) Change your journal mode from WAL to PERSIST (or vice versa) and see if that 
helps (there is a one-line PRAGMA for this in the sketch after the list).

2) Try this (a concrete SQL version follows the outline below):

BEGIN IMMEDIATE
    create a temporary table with the same columns as MyTable, but no indexes
    copy the rows you need to keep to the temporary table
    DELETE FROM MyTable
    INSERT INTO MyTable SELECT * FROM TempTable
COMMIT
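
Spelled out, that outline might look like the SQL below.  It is only a sketch: 
keep_condition and the TempTable name are placeholders for whatever actually 
identifies the rows you want to keep, and the journal-mode line is just 
suggestion 1 written as a PRAGMA (pick whichever mode you are not using now).

    -- Suggestion 1, as a one-line PRAGMA (run outside any transaction):
    PRAGMA journal_mode = PERSIST;     -- or WAL, whichever you are not using now

    -- Suggestion 2: rebuild MyTable in one transaction.
    -- keep_condition is a placeholder for your real WHERE clause.
    BEGIN IMMEDIATE;
        CREATE TEMP TABLE TempTable AS
            SELECT * FROM MyTable WHERE keep_condition;   -- temp table has no indexes
        DELETE FROM MyTable;
        INSERT INTO MyTable SELECT * FROM TempTable;      -- MyTable's indexes rebuilt here
        DROP TABLE TempTable;
    COMMIT;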

Have you tried these things?  Did the time taken improve or get worse?

Simon.