sqlite-users,

I've got a multi-threaded daemon that's inserting large batches into a table
that looks like this:

CREATE TABLE list_node (key INTEGER PRIMARY KEY, name TEXT UNIQUE, list INT);

There is one writer thread and one reader thread, and each thread
has its own database connection.

The very first time the database is created, the writer inserts
50-100 million rows to synchronize with an upstream database. After
that, there will be batches of changes trickling in over time.

The initial inserts are a lot faster if I group them between a BEGIN
TRANSACTION; and a COMMIT;, but I've found that during a write
transaction readers are locked out of the database. I've tried
grouping 1000 inserts at a time between BEGIN/COMMIT so that readers
aren't starved, but then write performance drops dramatically. Even
with PRAGMA synchronous=OFF it's still pretty slow (1 million rows in
1m33s).
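
For reference, here is roughly what my insert loop looks like (a
simplified sketch; the table matches the schema above, but the
function name and the generated row data are just illustrative):

#include <sqlite3.h>
#include <stdio.h>

static int bulk_insert(sqlite3 *db, long nrows)
{
    sqlite3_stmt *stmt;
    char name[32];
    int rc;

    /* Integrity on power loss doesn't matter here, so trade
       durability for speed, as described above. */
    sqlite3_exec(db, "PRAGMA synchronous=OFF", NULL, NULL, NULL);

    rc = sqlite3_prepare_v2(db,
        "INSERT INTO list_node (name, list) VALUES (?, ?)",
        -1, &stmt, NULL);
    if (rc != SQLITE_OK) return rc;

    for (long i = 0; i < nrows; i++) {
        if (i % 1000 == 0)                  /* start a 1000-row batch */
            sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);

        snprintf(name, sizeof name, "node-%ld", i);  /* placeholder data */
        sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
        sqlite3_bind_int(stmt, 2, (int)(i % 10));

        sqlite3_step(stmt);                 /* expect SQLITE_DONE */
        sqlite3_reset(stmt);

        if (i % 1000 == 999 || i == nrows - 1)  /* close out the batch */
            sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
    }

    sqlite3_finalize(stmt);
    return SQLITE_OK;
}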

Database integrity on power off is not a problem for me. If the database is
corrupt, I can
recreate it from the upstream data source.

Any thoughts on tuning?

Does it make sense to give both the reader and writer access to the
same database connection, synchronized with a mutex? That way the
reader could still query the DB mid-transaction.
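
To make that concrete, here is a minimal sketch of what I have in
mind (pthreads; the names are illustrative, not real code from the
daemon):

#include <pthread.h>
#include <sqlite3.h>

static sqlite3 *g_db;                 /* single shared connection */
static pthread_mutex_t g_db_lock = PTHREAD_MUTEX_INITIALIZER;

/* Reader: hold the lock just long enough for one query. Because it
   runs on the writer's connection, it can see rows from the writer's
   still-open transaction. */
static int lookup_list(const char *name, int *list_out)
{
    sqlite3_stmt *stmt;
    int rc;

    pthread_mutex_lock(&g_db_lock);
    rc = sqlite3_prepare_v2(g_db,
        "SELECT list FROM list_node WHERE name = ?", -1, &stmt, NULL);
    if (rc == SQLITE_OK) {
        sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
        if (sqlite3_step(stmt) == SQLITE_ROW) {
            *list_out = sqlite3_column_int(stmt, 0);
            rc = SQLITE_OK;
        } else {
            rc = SQLITE_NOTFOUND;
        }
        sqlite3_finalize(stmt);
    }
    pthread_mutex_unlock(&g_db_lock);
    return rc;
}

The writer would take g_db_lock around each insert (or each small
group of inserts), so the reader's queries get interleaved into the
open transaction.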

Thanks!
---erik