> All access in SQLite is serialized.

Apologies if I'm missing something fundamental here, but that's not what I'm 
seeing with a file-backed database when shared cache is OFF. My test has a 
single table with 1M rows, and four queries that each yield 100K different 
rows. I run them two ways:

1. All queries in a loop on the same thread in the same connection.
2. Each query in parallel on separate threads, each with its own connection.

If all access were serialized, I would expect these two tests to take about 
the same amount of time overall, wouldn't I? In fact, with a file-backed 
database and shared cache OFF, the second run takes about 70% less time. 
With shared cache ON, they're the same.
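In case it's useful, this is roughly the shape of the test, written against 
the C API -- not our actual code. The table name "t", the id ranges, and 
"test.db" are placeholders, and error checking and timing are omitted for 
brevity.

#include <pthread.h>
#include <sqlite3.h>

#define NQUERIES 4

/* Four queries that each return a different 100K-row slice. */
static const char *queries[NQUERIES] = {
    "SELECT * FROM t WHERE id BETWEEN      1 AND 100000",
    "SELECT * FROM t WHERE id BETWEEN 100001 AND 200000",
    "SELECT * FROM t WHERE id BETWEEN 200001 AND 300000",
    "SELECT * FROM t WHERE id BETWEEN 300001 AND 400000",
};

/* Step a query to completion, discarding the rows. */
static void drain(sqlite3 *db, const char *sql)
{
    sqlite3_stmt *stmt;
    sqlite3_prepare_v2(db, sql, -1, &stmt, 0);
    while (sqlite3_step(stmt) == SQLITE_ROW)
        ;
    sqlite3_finalize(stmt);
}

/* Thread body for run 2: a private connection per query. */
static void *worker(void *arg)
{
    sqlite3 *db;
    sqlite3_open("test.db", &db);   /* file-backed, shared cache OFF */
    drain(db, (const char *)arg);
    sqlite3_close(db);
    return 0;
}

int main(void)
{
    sqlite3 *db;
    pthread_t tid[NQUERIES];
    int i;

    /* Run 1: all four queries in a loop, one thread, one connection. */
    sqlite3_open("test.db", &db);
    for (i = 0; i < NQUERIES; i++)
        drain(db, queries[i]);
    sqlite3_close(db);

    /* Run 2: the same four queries in parallel, one connection each. */
    for (i = 0; i < NQUERIES; i++)
        pthread_create(&tid[i], 0, worker, (void *)queries[i]);
    for (i = 0; i < NQUERIES; i++)
        pthread_join(tid[i], 0);

    return 0;
}

The second, threaded run is where I see the roughly 70% improvement when each 
connection has a private cache, and no improvement at all with cache=shared.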
As to your second point, I probably should have made it clear that this isn't 
an internal project -- it's a software product, and we don't control where it 
runs. I understand what an SSD is and why it's better than a spindle drive, 
but my question wasn't really meant to solicit suggestions for performance 
improvements outside the proposal at hand. That proposal is to retire our 
existing home-grown in-memory cache implementation (which is very fast for 
concurrent reads, but extremely limited in how it can be queried) and replace 
it with a SQL-capable, relational store while keeping roughly the same 
performance. Our expectation was that we could achieve this with SQLite, but 
we were surprised by the apparent lack of read concurrency, and wanted some 
input on what our options might be in terms of SQLite configuration for 
memory-backed databases.
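For anyone following along, the two configurations I've been comparing can be 
expressed with the URI filename syntax. The names "memdb1" and "cache.db" are 
just placeholders, and URI filenames have to be enabled (SQLITE_OPEN_URI here, 
or the equivalent build/runtime option).

#include <sqlite3.h>

int main(void)
{
    sqlite3 *a, *b;
    const int flags =
        SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | SQLITE_OPEN_URI;

    /* One in-memory database visible to multiple connections: as far
       as I can tell this is only possible through the shared cache,
       which is where the reads get serialized. */
    sqlite3_open_v2("file:memdb1?mode=memory&cache=shared", &a, flags, 0);
    sqlite3_open_v2("file:memdb1?mode=memory&cache=shared", &b, flags, 0);
    sqlite3_close(a);
    sqlite3_close(b);

    /* File-backed database with a private cache per connection:
       readers don't block one another, but we take the I/O hit and
       keep a separate page cache per connection. */
    sqlite3_open_v2("file:cache.db?cache=private", &a, flags, 0);
    sqlite3_open_v2("file:cache.db?cache=private", &b, flags, 0);
    sqlite3_close(a);
    sqlite3_close(b);

    return 0;
}

What we're after is something with the read concurrency of the second form and 
the storage characteristics of the first.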
> From: slav...@bigfraud.org
> Date: Sat, 12 Jan 2013 17:48:56 +0000
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Concurrent read performance
> 
> 
> On 12 Jan 2013, at 5:38pm, Wayne Bradney <wayne_brad...@yahoo.com> wrote:
> 
> > "mode=memory&cache=shared"
> 
> 
> > 1. when shared cache is enabled, all reads are serialized, and
> 
> All access in SQLite is serialised.  All transactions require locking the 
> entire database.  SQLite is very simple -- 'lite' -- so queries run extremely 
> quickly, so you don't normally realise that any locking has taken place.
> 
> > 2. there doesn't seem to be any way to have a memory-backed database that 
> > can be accessed by multiple connections without using a shared cache,  then 
> > I guess I MUST use a file-backed database to get concurrent reads, even 
> > though I don't need the persistence and don't want to take the I/O hit. Am 
> > I making any sense? Anything I'm missing? 
> 
> You are putting programming effort into making your code fast, and this is 
> costing you (or your employer) programmer time.  Have you tried doing this 
> using an SSD instead of a spinning disk ?  A great deal of the time taken for 
> on-disk SQLite is waiting for the disk to spin to the right place.  With an 
> SSD all this time vanishes and access is nearly as fast as for in-memory 
> databases.  The advantage is that you don't spend your time on clever optimal 
> programming or have to do any of the things required for 'mode=memory'.  In 
> fact it works very quickly without any special modes or PRAGMAs at all.  
> Though I don't know your setup in detail and it may not be of such a great 
> advantage to you.
> 
> Simon.
> _______________________________________________
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
                                          