Kees Nuyt, I studied your producer-consumer multithreading suggestion further 
with respect to our proposed application. If MySQL reads are much faster than 
SQLite writes, then in the producer-consumer model the consumer will spend a 
lot of time waiting for each SQLite write to complete.
         In our proposed application, each write to the master SQLite database 
table actually writes to 4 other SQLite tables through SQLite user-defined 
functions. That is why the overlapping MySQL reads would be much faster than 
the SQLite writes. Is it possible to modify the producer-consumer 
multithreading model to account for the fact that the SQLite writes take much 
longer than the MySQL reads?
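One common way to narrow that gap on the consumer side is to batch many rows into a single SQLite transaction, so the per-commit journaling cost is paid once per batch rather than once per row. A minimal sketch in Python (the real application is C/C++; the `master` schema here is made up for illustration, and the user-defined functions are omitted):

```python
import sqlite3

# Sketch only: the real master table and the 4 user-defined-function
# side writes are not shown in the thread, so this uses a made-up schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE master (id INTEGER PRIMARY KEY, name TEXT)")

rows = [(i, "name%d" % i) for i in range(1000)]  # stand-in for MySQL rows

# One explicit transaction around the whole batch: the consumer pays the
# commit/journal overhead once per batch instead of once per row, which
# narrows the gap between fast MySQL reads and slow SQLite writes.
with conn:
    conn.executemany("INSERT INTO master (id, name) VALUES (?, ?)", rows)

print(conn.execute("SELECT COUNT(*) FROM master").fetchone()[0])  # 1000
```

The same idea applies in the C API by wrapping the batch in explicit BEGIN/COMMIT statements around a prepared INSERT.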
 
        In terms of other SQLite optimizations: during the MySQL to SQLite 
conversion process, is it possible to deactivate the SQLite mutexes and SQLite 
locks, since we will be writing to the SQLite database tables only and there 
will be no concurrent SQLite readers during the MySQL to SQLite conversion? If 
that is not possible, could you briefly describe the other SQLite 
optimizations you mentioned in your previous post? Thank you.
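For reference, the single-writer/no-readers scenario described above is usually addressed with SQLite PRAGMAs rather than by removing locks outright (the mutexes themselves can only be disabled at the C level, e.g. by compiling with SQLITE_THREADSAFE=0 or calling sqlite3_config(SQLITE_CONFIG_SINGLETHREAD) before initialization). A hedged sketch of the PRAGMA approach in Python, using a throwaway database path:

```python
import os
import sqlite3
import tempfile

# Throwaway file database just for the demonstration.
db_path = os.path.join(tempfile.mkdtemp(), "convert.db")
conn = sqlite3.connect(db_path)

# Single-writer, no-concurrent-readers bulk-load settings. These do not
# remove SQLite's internal mutexes, but they eliminate most locking and
# journaling cost. Unsafe if anything else touches the database, and a
# crash mid-load can leave it corrupt, so use only for a rebuildable load.
conn.execute("PRAGMA synchronous = OFF")            # no fsync on commit
journal = conn.execute("PRAGMA journal_mode = MEMORY").fetchone()[0]
conn.execute("PRAGMA locking_mode = EXCLUSIVE")     # take the file lock once

print(journal)  # memory
```

Since the target database can simply be rebuilt from MySQL if the load fails, giving up crash-safety here is a reasonable trade.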

Hi Frank,

On Sat, 3 Sep 2011 02:55:17 -0400, Frank Chang
<frank_chan...@hotmail.com> wrote:

> Good morning,
>
> Is it possible to multithread the conversion of a MySQL 
> database table to an SQLite database? Dr. Richard Hipp 
> said in a recent post that at that time it was not 
> possible to multithread SQLite database connections to 
> the same database. Dr. Hipp suggested that we use MySQL 
> to do that.
>
> I was wondering if it was possible to multithread the 
> initial data load of a process using MySQL and temporary 
> tables via the MySQL C/C++ API. Then, in order to 
> preserve the hundreds of lines of SQLite code in our 
> proximity matcher, could we write a C++ class which would 
> allow us to multithread the conversion of the MySQL 
> database B-trees to SQLite database B-trees? I realize it 
> may be impossible to do because SQLite does not allow 
> multithreading of SQLite database connections to the same 
> database.

You can multithread SQLite, preferably by using one connection per
thread, and sharing the cache between them may help a bit in
your situation. All connections have to be in the same process to
allow that. You will have to handle SQLITE_BUSY and SQLITE_LOCKED.
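Handling SQLITE_BUSY/SQLITE_LOCKED typically means retrying the statement after a short back-off. A minimal sketch in Python, where both result codes surface as OperationalError with a "locked" message (in the C API you would check the result codes directly, or install a busy handler with sqlite3_busy_handler/sqlite3_busy_timeout):

```python
import sqlite3
import time

def execute_with_retry(conn, sql, params=(), retries=5, delay=0.05):
    """Retry a statement when SQLite reports the database is locked.

    Python's sqlite3 wrapper raises OperationalError for SQLITE_BUSY
    and SQLITE_LOCKED; anything else is re-raised immediately.
    """
    for attempt in range(retries):
        try:
            return conn.execute(sql, params)
        except sqlite3.OperationalError as exc:
            if "locked" not in str(exc) or attempt == retries - 1:
                raise
            time.sleep(delay * (attempt + 1))  # simple linear back-off

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
execute_with_retry(conn, "INSERT INTO t VALUES (?)", (42,))
print(conn.execute("SELECT x FROM t").fetchone()[0])  # 42
```

The retry count and delay here are arbitrary; tune them to how long a competing writer typically holds the lock.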

It will not help much, because SQLite only allows one writer at a
time without blocking, and the process will probably be I/O bound
anyway.

However, it does make sense to try to overlap reads from MySQL
with writes to SQLite, so having two threads might indeed have some
effect, though I expect not much more than a producer/consumer
model, where the producer reads MySQL into an in-memory buffer
and the consumer reads the buffer and writes to SQLite.
That model is relatively easy to implement using system FIFOs.
The mbuffer utility allows you to optimize the size of the buffer.
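Inside a single process, the same producer/consumer pattern can be sketched with a bounded in-memory queue, one reader thread and one writer thread. This Python sketch fakes the MySQL side with generated rows (real code would read through the MySQL C/C++ API or a Python MySQL driver); the bounded queue gives the same backpressure a FIFO plus mbuffer provides at the shell level:

```python
import queue
import sqlite3
import threading

SENTINEL = object()             # marks the end of the "MySQL" result stream
buf = queue.Queue(maxsize=256)  # bounded buffer: producer blocks when full

def producer():
    # Stand-in for the MySQL reader thread.
    for i in range(1000):
        buf.put((i, "row%d" % i))
    buf.put(SENTINEL)

def consumer(conn):
    # Single SQLite writer thread: drains the buffer, commits once.
    while True:
        item = buf.get()
        if item is SENTINEL:
            break
        conn.execute("INSERT INTO t (id, name) VALUES (?, ?)", item)
    conn.commit()

conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE t (id INTEGER, name TEXT)")

threads = [threading.Thread(target=producer),
           threading.Thread(target=consumer, args=(conn,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 1000
```

The queue size (256 here) plays the same role as mbuffer's buffer size: large enough to smooth bursts from the fast reader, small enough to cap memory use.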


> However, I realize SQLite has a lot of clever programmers 
> who really understand the SQLite internals and MySQL 
> internals and may be able to come up with some kind of 
> workaround to speed up the C/C++ MySQL conversion to SQLite. 

There is no workaround when using the standard SQLite library.
If you need parallel writes, you will have to develop a new library
using the same database file format. If that were really necessary
and relatively easy, it would have been done already.

Optimizations using the existing library are possible though.
What did you already try to improve performance?
How many rows per second can you handle now?

> Thank you.

Hope this helps.
-- 
  (  Kees Nuyt
  )

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
