On Sat, 31 Jan 2015 00:21:18 -0500, Godmar Back <god...@gmail.com> wrote:

>I have a single SQLite 2-column table with a primary TEXT key and a value,
>like so (id TEXT PRIMARY KEY, value TEXT). One process adds new records
>periodically, perhaps 1-10 per minute. The database currently has 850,000
>entries and is 7.3GB large on disk.

You could benchmark an alternative structure: declaring the table WITHOUT ROWID
stores the rows in a B-tree keyed directly on the TEXT primary key, avoiding the
second lookup through the implicit rowid index.
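A minimal sketch of that alternative, assuming SQLite 3.8.2 or newer (the table
and column names here are illustrative, not your actual schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# WITHOUT ROWID: the primary-key B-tree holds the row data itself,
# so a lookup by id needs no rowid indirection.
con.execute("CREATE TABLE kv (id TEXT PRIMARY KEY, value TEXT) WITHOUT ROWID")
con.execute("INSERT INTO kv VALUES ('a', 'first')")
row = con.execute("SELECT value FROM kv WHERE id = 'a'").fetchone()[0]
print(row)  # first
```

Whether it actually wins for your workload depends on the size of the values;
hence the suggestion to benchmark rather than switch blindly.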

>I also need to perform bulk queries, which appear to be very slow. On an
>average desktop PC, for instance, a "SELECT COUNT(*)" takes over 5 minutes.

count(*) does a full table scan, which takes time on a table this size. You could
add AFTER INSERT / AFTER DELETE triggers to maintain a running count in a
separate one-row table.
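A sketch of the trigger-maintained counter (toy in-memory database; the table
names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE kv (id TEXT PRIMARY KEY, value TEXT);
CREATE TABLE kv_count (n INTEGER NOT NULL);
INSERT INTO kv_count VALUES (0);
-- keep a running count instead of scanning the whole table
CREATE TRIGGER kv_ins AFTER INSERT ON kv BEGIN
    UPDATE kv_count SET n = n + 1;
END;
CREATE TRIGGER kv_del AFTER DELETE ON kv BEGIN
    UPDATE kv_count SET n = n - 1;
END;
""")
con.executemany("INSERT INTO kv VALUES (?, ?)",
                [("a", "1"), ("b", "2"), ("c", "3")])
con.execute("DELETE FROM kv WHERE id = 'b'")
# O(1) read instead of a 5-minute SELECT COUNT(*)
n = con.execute("SELECT n FROM kv_count").fetchone()[0]
print(n)  # 2
```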

>If I want to do a dump of the table, as in "SELECT value FROM ...." I'll
>quickly get "database is locked" errors. Googling revealed that those are
>because a long running select keeps a cursor, and thus readlock on the
>entire database.
> I have since rewritten the query using multiple SELECT *
>FROM ... LIMIT a, b where b = 50 and a = 0, 50, 100, 150, .....  However,
>it takes 20 hours to fully extract the table's 850,000 records, with only
>minimal per record processing.

SELECT with LIMIT/OFFSET gets slower as the offset grows, because SQLite has to
step over all the skipped rows on every query; it is especially bad when ORDER BY
is on something other than the primary key.
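A faster way to walk the whole table in batches is keyset pagination: remember
the last id you saw and continue with WHERE id > ?, so each batch is a cheap
index seek instead of a scan past the offset. A sketch with made-up data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE kv (id TEXT PRIMARY KEY, value TEXT)")
con.executemany("INSERT INTO kv VALUES (?, ?)",
                [("k%03d" % i, str(i)) for i in range(10)])

def scan(con, batch=3):
    last = ""  # keyset cursor: resume just after the last id seen
    while True:
        rows = con.execute(
            "SELECT id, value FROM kv WHERE id > ? ORDER BY id LIMIT ?",
            (last, batch)).fetchall()
        if not rows:
            break
        yield from rows
        last = rows[-1][0]

total = sum(1 for _ in scan(con))
print(total)  # 10
```

Each batch is its own short-lived statement, so the read lock is released between
batches instead of being held for the whole dump.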

>My question: is this performance expected, or am I doing something wrong?
>Is there a quick way to count and/or dump an approximate snapshot of a
>single 2-column table such as the one shown above, even while keeping the
>database available for concurrent writes?

You should probably use PRAGMA journal_mode=WAL; which allows one writer
and many readers to work concurrently.
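For example (sketch; the file path is made up, and WAL only applies to
file-backed databases, not :memory:):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")  # hypothetical db file
con = sqlite3.connect(path)
# WAL is persistent: set once, and the database stays in WAL mode.
mode = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # prints 'wal' on SQLite 3.7.0 or newer
```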

Also, have a look at the SQLite version you are actually linking against:

>I'm using Ubuntu 12.10 with 3.7.9-2ubuntu1.1 libsqlite3 using the Python

Check whether the version reported by SELECT sqlite_version();
has the features I mentioned above: WAL needs 3.7.0+, so your 3.7.9 has it,
but WITHOUT ROWID was only added in 3.8.2.
If necessary, compile your own library from source,
http://www.sqlite.org/download.html (bottom)
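A quick way to check both from Python:

```python
import sqlite3

version = tuple(map(int, sqlite3.connect(":memory:").execute(
    "SELECT sqlite_version()").fetchone()[0].split(".")))
has_wal = version >= (3, 7, 0)            # WAL added in 3.7.0
has_without_rowid = version >= (3, 8, 2)  # WITHOUT ROWID added in 3.8.2
print(has_wal, has_without_rowid)
```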

Hope this helps.


Kees Nuyt

sqlite-users mailing list
