How about this instead.

   Read your records, then parse and format them into some format known to 
your application. Write the data to disk in a file.
  Then put a single entry into an SQLite table specifying the on-disk file name.
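A minimal sketch of this scheme in Python's sqlite3 module (table and function names are hypothetical): each worker writes its parsed records to its own file, so the only database write is one short INSERT naming that file.

```python
import os
import sqlite3
import tempfile

workdir = tempfile.mkdtemp()
db = sqlite3.connect(os.path.join(workdir, "index.db"))
db.execute("CREATE TABLE IF NOT EXISTS batches (id INTEGER PRIMARY KEY, path TEXT)")

def store_batch(records):
    # 1. Write the formatted records to a plain file -- no db lock held here.
    fd, path = tempfile.mkstemp(dir=workdir, suffix=".dat")
    with os.fdopen(fd, "w") as f:
        for r in records:
            f.write(r + "\n")
    # 2. One quick INSERT registering the on-disk file name.
    with db:
        db.execute("INSERT INTO batches (path) VALUES (?)", (path,))
    return path

store_batch(["rec1", "rec2"])
print(db.execute("SELECT COUNT(*) FROM batches").fetchone()[0])
```

Because each INSERT is tiny, the write lock is held only momentarily, and the expensive parsing and file I/O can proceed in parallel across threads.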

SQLite allows only one write operation to run at a time; there are no 
concurrent writes. You can, of course, have concurrent reads.

Does that help?
Ken




James Gregurich <[EMAIL PROTECTED]> wrote: 
On Apr 18, 2008, at 2:33:32 PM, Dennis Cote wrote:
>
> To share an attached database the threads must be able to name it, and
> this is only possible with a file database.

You could change the open() function to allow assigning a name to an  
in-memory db and then keep a mapping of all the names internally. You  
could also provide an API call that takes an existing connection to an  
in-memory store and attaches its db to another pre-existing db on  
another connection. It seems the underlying foundation to do this is  
already there. But, I admit, I have no knowledge of the  
implementation details of SQLite.
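As it happens, later SQLite releases added essentially this feature: a named in-memory database that multiple connections in the same process can open via a URI filename with shared cache. A sketch using Python's sqlite3 wrapper (the name "memdb1" is arbitrary; the database lives only as long as at least one connection stays open):

```python
import sqlite3

# Two independent connections to the SAME named in-memory database.
c1 = sqlite3.connect("file:memdb1?mode=memory&cache=shared", uri=True)
c2 = sqlite3.connect("file:memdb1?mode=memory&cache=shared", uri=True)

c1.execute("CREATE TABLE t (x INTEGER)")
c1.execute("INSERT INTO t VALUES (42)")
c1.commit()

# The second connection sees the data the first one committed.
print(c2.execute("SELECT x FROM t").fetchone()[0])  # 42
```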



>
> Perhaps you can replace the proprietary file format with a permanent
> SQLite database file (and then again maybe not).

We don't control those formats; they are controlled by certain large,  
well-known software companies. We just reverse-engineer their formats.


> You could implement a server thread that accesses a single memory
> database which accepts commands from, and passes the results back to,
> your other threads as John suggested. You will have to provide some  
> form
> of resource management for the shared resource, whether it is a shared
> memory database, file, or something else.

Unless I misunderstand the way the SQLite API works, that isn't really  
practical.

My task is to read a chunk of data, parse it, and insert a record into a  
table (a number of records in a loop, of course). To do that, I have to  
prepare a statement and then bind data values to the statement  
in a loop.

Once I begin the transaction and prepare the statement, the entire db  
is locked up for the duration of the bulk insert. If that is true,  
then I'll lose all opportunity for parallelism.
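For reference, the bulk-insert loop described above looks roughly like this in Python's sqlite3 module (table name hypothetical); `executemany` corresponds to the prepare-once, bind-per-row loop, and the `with db:` block wraps the whole thing in one transaction, during which the write lock is indeed held:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")

rows = [("chunk-%d" % i,) for i in range(1000)]
with db:  # BEGIN ... COMMIT around the entire loop
    # One prepared statement, values bound once per row.
    db.executemany("INSERT INTO records (payload) VALUES (?)", rows)

print(db.execute("SELECT COUNT(*) FROM records").fetchone()[0])  # 1000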

If I have to write my own temporary storage containers to hold data  
while it waits to be committed by a datastore thread, then I might as  
well just write my own containers and be done with the task rather  
than going to the expense of using a SQL data store.

One reason to use SQLite is that it would take care of the  
synchronization of multiple writers and readers for me. If I have to  
write all that myself, then why bother with SQLite?
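For comparison, the "server thread" pattern suggested earlier in the thread can be sketched with a queue: parser threads never touch the database and a single writer thread owns the connection, so SQLite's one-writer rule is never violated. All names here are hypothetical, and a real version would batch inserts rather than commit per row:

```python
import queue
import sqlite3
import threading

work = queue.Queue()
DONE = object()   # sentinel telling the writer to shut down
result = []

def writer():
    # The writer thread is the ONLY owner of the connection.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE records (payload TEXT)")
    while True:
        item = work.get()
        if item is DONE:
            break
        with db:  # one short transaction per row; batching would be faster
            db.execute("INSERT INTO records (payload) VALUES (?)", (item,))
    result.append(db.execute("SELECT COUNT(*) FROM records").fetchone()[0])

t = threading.Thread(target=writer)
t.start()
for i in range(100):  # real parser threads would put() concurrently
    work.put("record-%d" % i)
work.put(DONE)
t.join()
print(result[0])  # 100
```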


One of my options is to use Core Data on the Macintosh. That will do  
what I want, as it caches record inserts and does one big commit... and  
it handles the synchronization. However, what do I do with the  
lovable Windows platform?

Oh well. I'll figure it all out somehow.





_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

