On Tue, 6 Sep 2011 03:17:23 -0400, Frank Chang
<frank_chan...@hotmail.com> wrote:

> Kees Nuyt, I studied your producer-consumer
> multithreading further with respect to our proposed
> application. If the speed of the MySQL reads is much
> faster than the speed of the SQLite writes, then in the
> producer-consumer model the consumer will spend a lot
> of time waiting for each SQLite write to complete. In our
> proposed application, each SQLite write to the master
> database table actually writes to 4 other SQLite tables
> using SQLite user-defined functions. That is why the
> overlapping MySQL reads would be much faster than the
> SQLite writes. Is it possible to modify the
> producer-consumer multithreading model to account for the
> fact that the SQLite writes take much longer than the MySQL
> reads?

No, and you don't need to. The advantage of this model is that
the overlapping MySQL reads cost no extra time: as long as the
queue never runs empty, the total speed is determined by the
SQLite writes alone.
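A minimal sketch of that model in Python (stdlib only). The MySQL reader is simulated with fabricated rows, and the table, column, and batch-size values are invented for illustration; a bounded queue makes the fast reader block instead of racing ahead of the slow writer:

```python
import queue
import sqlite3
import threading

# Bounded queue: when the consumer falls behind, the producer simply
# blocks on put(), so memory use stays flat.
SENTINEL = None
q = queue.Queue(maxsize=1000)

def producer(n):
    """Stand-in for the MySQL reader thread; real code would fetch
    rows from MySQL here.  The rows are fabricated for illustration."""
    for i in range(n):
        q.put((i, f"name-{i}"))
    q.put(SENTINEL)          # signal end of stream

def consumer(db_path):
    """Drain the queue into SQLite, committing in batches so each
    row does not pay for its own transaction.  Returns rows written."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE master (id INTEGER PRIMARY KEY, name TEXT)")
    total, batch = 0, []
    while True:
        row = q.get()
        if row is SENTINEL:
            break
        batch.append(row)
        if len(batch) >= 500:
            con.executemany("INSERT INTO master VALUES (?, ?)", batch)
            con.commit()
            total += len(batch)
            batch.clear()
    if batch:                # flush the final partial batch
        con.executemany("INSERT INTO master VALUES (?, ?)", batch)
        con.commit()
        total += len(batch)
    con.close()
    return total

t = threading.Thread(target=producer, args=(10_000,))
t.start()
total = consumer(":memory:")
t.join()
```

While the consumer is inside executemany()/commit(), the producer keeps reading in parallel; that overlap is exactly why the reads end up "free".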

> In terms of other SQLite optimizations: during the MySQL
> to SQLite conversion process, is it possible to deactivate
> the SQLite mutexes and SQLite locks, since we will be
> writing to the SQLite database tables only and there will be
> no concurrent SQLite readers during the MySQL to SQLite
> conversion?

No, the mutexes and locks are there for a reason.
There is no workaround when using the standard SQLite library.
If you really need parallel writes you would have to develop a
new library using the same database file format. If that were
really necessary and relatively easy, it would have been done
already.

You could try to compile without thread safety (SQLITE_THREADSAFE=0)
and multithread anyway, but that will most probably fail (in the most
mysterious ways), and nobody will be able to help you.

If this is very important to you, you could consider paying
the SQLite development team to develop such a library, but it
would probably be expensive and take a long time.
http://www.hwaci.com/sw/sqlite/prosupport.html


> If that is not possible, could you briefly
> describe the other SQLite optimizations which you
> mentioned in your previous post? Thank you.

Here are a few options:

1) During the conversion, put the database on a very
   fast filesystem, like Solaris tmpfs (memory) or
   a RAM disk.
   If that is not possible, build an in-memory database
   and transfer it to a disk file using the backup API.

2) PRAGMA page_size={the optimal database page size for your
   filesystem and average row size};

3) PRAGMA cache_size={many pages};
   PRAGMA default_cache_size={many pages};

4) PRAGMA journal_mode=OFF; -- note: use with caution.

5) PRAGMA synchronous=OFF;

6) PRAGMA temp_store=MEMORY;

7) PRAGMA foreign_keys=OFF;

8) Sort INSERT statements in the order of the key of the most
   complicated index.

9) Postpone CREATE INDEX statements until after all tables are
   loaded.

Once the database is complete:
10) VACUUM;
11) ANALYZE;
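Several of those options combined in one sketch, using Python's sqlite3 module. The table, column, and file names are invented, and every pragma value is a placeholder to tune for your own filesystem and average row size; journal_mode=OFF and synchronous=OFF trade away crash safety for speed, which is acceptable here only because a failed conversion can simply be rerun:

```python
import sqlite3

# Build the database in memory first, then copy it to disk with the
# backup API once loading is done (option 1).
mem = sqlite3.connect(":memory:")

mem.executescript("""
    PRAGMA page_size=4096;        -- option 2: placeholder value
    PRAGMA cache_size=100000;     -- option 3: generous page cache
    PRAGMA journal_mode=OFF;      -- option 4: no journal (risky)
    PRAGMA synchronous=OFF;       -- option 5: no fsync (risky)
    PRAGMA temp_store=MEMORY;     -- option 6
    PRAGMA foreign_keys=OFF;      -- option 7
""")

mem.execute("CREATE TABLE master (id INTEGER PRIMARY KEY, name TEXT)")

# Sorted inserts: feed rows in key order, in one big transaction,
# with no secondary indexes defined yet.
rows = [(i, f"name-{i}") for i in range(10_000)]
with mem:
    mem.executemany("INSERT INTO master VALUES (?, ?)", rows)

# Postponed CREATE INDEX, after all data is loaded.
mem.execute("CREATE INDEX master_name ON master(name)")

# Transfer the finished in-memory database to a disk file via the
# backup API, then VACUUM and ANALYZE the on-disk copy.
disk = sqlite3.connect("converted.db")
mem.backup(disk)
disk.execute("VACUUM")
disk.execute("ANALYZE")
disk.close()
mem.close()
```

Connection.backup() requires Python 3.7+; on older versions, dump the in-memory database with iterdump() instead.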

See also:
http://www.sqlite.org/pragma.html
http://www.sqlite.org/sharedcache.html
-- 
  (  Kees Nuyt
  )
c[_]