Wayne Bradney wrote:
> All access in SQLite is serialized.

Apologies if I'm missing something fundamental here, but that's not what I'm
seeing with a file-backed database when shared cache is OFF.

My test has a single table with 1M rows, and four queries that each yield 100K
different rows. I run them two ways:

1. All queries in a loop on the same thread in the same connection.
2. Each query in parallel on separate threads, each with its own connection.

If all access were serialized, I would expect these two tests to take about
the same amount of time overall, wouldn't I? In fact, with a file-backed
database and shared cache OFF, the second run takes about 70% less time. With
shared cache ON, they're the same.

As to your second point, I probably should have made it clear that this isn't
an internal project; it's a software product, and we don't control where it
runs. I understand what an SSD is and why it's better than a spindle drive,
but my question wasn't really meant to solicit suggestions for performance
improvements outside the proposal at hand, which was to retire our existing
home-grown in-memory cache implementation (which is very fast for concurrent
reads, but is extremely limited in how it can be queried) and replace it with
a SQL-capable, relational store while still getting roughly the same
performance. Our expectation was that we could achieve this with SQLite, but
we were surprised by the apparent lack of read concurrency, and wanted some
input on what our options might be in terms of SQLite configuration for
memory-backed databases.
From: slav...@bigfraud.org

You should look into MDB, which does no locking for read operations. Reads scale perfectly across arbitrarily many CPUs. More info here:

http://symas.com/mdb/

and SQLite ported to use MDB as its backend is available here:

https://gitorious.org/mdb/sqlightning

Date: Sat, 12 Jan 2013 17:48:56 +0000
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Concurrent read performance


On 12 Jan 2013, at 5:38pm, Wayne Bradney <wayne_brad...@yahoo.com> wrote:

> "mode=memory&cache=shared"


> 1. when shared cache is enabled, all reads are serialized, and

All access in SQLite is serialised.  All transactions require locking the
entire database.  SQLite is very simple -- 'lite' -- so queries run extremely
quickly, and you don't normally realise that any locking has taken place.

> 2. there doesn't seem to be any way to have a memory-backed database that can
> be accessed by multiple connections without using a shared cache, then I guess
> I MUST use a file-backed database to get concurrent reads, even though I don't
> need the persistence and don't want to take the I/O hit. Am I making any sense?
> Anything I'm missing?
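To illustrate the constraint being discussed: the "mode=memory&cache=shared"
URI quoted above is what lets two connections in one process see the same
in-memory database. A minimal sketch in Python's sqlite3 (the name "memdb1"
is arbitrary, and the uri=True flag is specific to Python's wrapper):

```python
import sqlite3

# A *named* in-memory database with cache=shared is visible to every
# connection in this process that opens the same URI.
uri = "file:memdb1?mode=memory&cache=shared"

con1 = sqlite3.connect(uri, uri=True)
con2 = sqlite3.connect(uri, uri=True)

con1.execute("CREATE TABLE t (x INTEGER)")
con1.execute("INSERT INTO t VALUES (42)")
con1.commit()

# The second connection sees the first connection's data: one shared cache,
# one database -- which is also why access to it is serialised.
value = con2.execute("SELECT x FROM t").fetchone()[0]

con2.close()
con1.close()  # the database vanishes when the last connection closes
```

Without cache=shared, each ":memory:" connection gets its own private
database, which is why a file-backed database looks like the only route to
concurrent reads across connections.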

You are putting programming effort into making your code fast, and this is 
costing you (or your employer) programmer time.

For any reasonably useful piece of software, every moment of programmer time invested in proper coding saves eons of user time. Putting programmer effort into making correct code fast is always The Right Thing to Do. Software that delivers the correct answer, late, is still wrong.

--
  -- Howard Chu
  CTO, Symas Corp.           http://www.symas.com
  Director, Highland Sun     http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
