> Long story short - I suspect the open call to be the "blocker"
> in your "SharedCache-Mode=Off" scenario.

If the database file is fairly large, the query reads a lot of data
while executing, and all of the data read fits into the configured
database cache, then I think I/O will be the main difference between
the "with shared cache" and "without shared cache" scenarios.

But how did you determine when the queries ran in a more "serial" way
and when in a more "parallel" way? I've re-read all your messages and
didn't find how you did that. And what was the running time when you
tried to run the queries in several processes?

And one more, probably the main, question: what threading library are
you using? Does it provide kernel-space or user-space threads? Maybe
that library serializes the execution of your threads, whereas
separate processes can be parallelized across CPU cores.


Pavel

On Fri, Mar 5, 2010 at 2:09 PM, Olaf Schmidt <s...@online.de> wrote:
>
> "Luke Evans" <luk...@me.com> schrieb im Newsbeitrag
> news:5d6df6e4-c817-4788-a2a2-87dc5e32f...@me.com...
>> Thanks very much for your notes Olaf.
>>
>> I've done as you suggested... I already had SQLITE_THREADSAFE=2
>> defined, but didn't have SQLITE_OMIT_SHARED_CACHE asserted
>> (assuming a private cache to be the default).
>
> Ok, I don't know the compile-time default of SQLite's
> shared-cache mode - this was only to make sure
> nothing went wrong with regard to your ...sqlite3_open_v2() call.
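>
> (Just as an untested sketch from my side - "test.db" is a
> placeholder, and SQLITE_OPEN_PRIVATECACHE needs a reasonably
> recent SQLite - you could report the compiled threading mode at
> runtime and request a private cache explicitly, so nothing
> depends on that default:)
>
>   #include <stdio.h>
>   #include <sqlite3.h>
>
>   int main(void)
>   {
>       sqlite3 *db = NULL;
>
>       printf("SQLite %s, SQLITE_THREADSAFE=%d\n",
>              sqlite3_libversion(), sqlite3_threadsafe());
>
>       /* Ask for a private cache explicitly instead of relying on
>        * the compiled-in default. */
>       if (sqlite3_open_v2("test.db", &db,
>                           SQLITE_OPEN_READWRITE | SQLITE_OPEN_PRIVATECACHE,
>                           NULL) != SQLITE_OK) {
>           fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
>           return 1;
>       }
>       sqlite3_close(db);
>       return 0;
>   }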
>
> And BTW (since you asked about that below) ... by the term
> "dynamic access-mode switches" I meant these switches
> in the open call - which you mentioned in your first post:
>
>  "despite having SQLITE_CONFIG_MULTITHREAD set,
>   SQLLITE_CONFIG_MEMSTATUS off, with
>   SQLITE_OPEN_SHAREDCACHE and SQLITE_OPEN_NOMUTEX
>   used on open."
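>
> Spelled out as a rough sketch (untested, and only my reading of
> that description - the read-only flag and the file name are my
> assumptions), the process-wide config has to happen before any
> connection is opened:
>
>   #include <sqlite3.h>
>
>   int configure_and_open(const char *path, sqlite3 **out)
>   {
>       /* Process-wide settings, before sqlite3_initialize() and
>        * before the first connection: */
>       sqlite3_config(SQLITE_CONFIG_MULTITHREAD);   /* core mutexes only */
>       sqlite3_config(SQLITE_CONFIG_MEMSTATUS, 0);  /* skip the global
>                                                     * memory-stat mutex */
>       sqlite3_initialize();
>
>       /* Per-connection flags from the description above: */
>       return sqlite3_open_v2(path, out,
>                              SQLITE_OPEN_READONLY |
>                              SQLITE_OPEN_NOMUTEX  |
>                              SQLITE_OPEN_SHAREDCACHE,
>                              NULL);
>   }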
>
> Reading your posted code snippet (good that you included it) -
> it seems you're handling such a "threaded read request",
> *including* the open call, in each of these "ReadOut-QueryActions".
>
> How does the whole thing perform if you throw the
> open call out of your threaded request handling?
>
> For that you could change your threading model to a "longer-
> living" one, spawning the worker threads with a fixed thread-
> pool count, each thread then opening your DB on startup.
> Then let all these worker threads in the pool enter an
> efficient wait loop, waiting for a "process request" message...
> (Not sure about the threading model on OS X - I'm more
>  of a Windows guy.)
>
> Then (in case of an incoming query on your main thread)
> you could loop over your central "thread-pool handler object"
> (which knows all the worker threads it spawned) looking
> for a "free slot" (a shared memory area), place the
> SQL string there, and inform the chosen worker thread -
> via a messaging/event mechanism of your choice -
> that it has to process the query and place your result-set
> object in its thread slot.
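>
> Roughly this shape, as an untested sketch (names like slot_t,
> NWORKERS, "test.db" and the query are just placeholders, and
> error handling / result collection are stripped down to keep the
> structure visible) - every worker opens its own connection once
> at startup, then waits for SQL to appear in its slot:
>
>   #include <pthread.h>
>   #include <stdio.h>
>   #include <sqlite3.h>
>
>   #define NWORKERS 4
>
>   typedef struct {
>       pthread_mutex_t mtx;
>       pthread_cond_t  cv;
>       const char     *sql;      /* NULL = slot is free */
>       int             shutdown;
>       sqlite3        *db;       /* private connection, opened by the worker */
>   } slot_t;
>
>   static slot_t slots[NWORKERS];
>
>   static void *worker(void *arg)
>   {
>       slot_t *s = arg;
>
>       /* Open once, outside the per-request path. */
>       sqlite3_open_v2("test.db", &s->db,
>                       SQLITE_OPEN_READONLY | SQLITE_OPEN_NOMUTEX |
>                       SQLITE_OPEN_PRIVATECACHE, NULL);
>
>       pthread_mutex_lock(&s->mtx);
>       for (;;) {
>           while (s->sql == NULL && !s->shutdown)
>               pthread_cond_wait(&s->cv, &s->mtx);
>           if (s->shutdown) break;
>
>           const char *sql = s->sql;       /* s->sql stays set: slot is "busy" */
>           pthread_mutex_unlock(&s->mtx);  /* don't hold the lock while querying */
>
>           sqlite3_stmt *stmt = NULL;
>           if (sqlite3_prepare_v2(s->db, sql, -1, &stmt, NULL) == SQLITE_OK) {
>               while (sqlite3_step(stmt) == SQLITE_ROW)
>                   ;                       /* real code would copy rows out here */
>               sqlite3_finalize(stmt);
>           }
>
>           pthread_mutex_lock(&s->mtx);
>           s->sql = NULL;                  /* slot is free again */
>       }
>       pthread_mutex_unlock(&s->mtx);
>       sqlite3_close(s->db);
>       return NULL;
>   }
>
>   /* Main-thread side: find a free slot, drop the SQL in, wake that worker. */
>   static int dispatch(const char *sql)
>   {
>       int i;
>       for (i = 0; i < NWORKERS; i++) {
>           slot_t *s = &slots[i];
>           pthread_mutex_lock(&s->mtx);
>           if (s->sql == NULL) {
>               s->sql = sql;
>               pthread_cond_signal(&s->cv);
>               pthread_mutex_unlock(&s->mtx);
>               return 0;
>           }
>           pthread_mutex_unlock(&s->mtx);
>       }
>       return -1;                          /* all workers busy */
>   }
>
>   int main(void)
>   {
>       pthread_t tid[NWORKERS];
>       int i;
>
>       for (i = 0; i < NWORKERS; i++) {
>           pthread_mutex_init(&slots[i].mtx, NULL);
>           pthread_cond_init(&slots[i].cv, NULL);
>           pthread_create(&tid[i], NULL, worker, &slots[i]);
>       }
>
>       dispatch("SELECT count(*) FROM sqlite_master");
>
>       for (i = 0; i < NWORKERS; i++) {    /* shut the pool down again */
>           pthread_mutex_lock(&slots[i].mtx);
>           slots[i].shutdown = 1;
>           pthread_cond_signal(&slots[i].cv);
>           pthread_mutex_unlock(&slots[i].mtx);
>           pthread_join(tid[i], NULL);
>       }
>       return 0;
>   }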
>
> Long story short - I suspect the open call to be the "blocker"
> in your "SharedCache-Mode=Off" scenario.
>
> I would be interested in how the whole thing performs (and
> compares to your 4.5 seconds in shared-cache mode) if you
> establish the DB-handle values beforehand within your threads
> (soon after thread startup) and then perform only the
> statement and step actions in your threaded timings - and
> maybe run the performance test more than once per test
> session, so you also get the timings for "hot" SQLite caches
> (on each of your separate per-thread DB handles).
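>
> Something along these lines, as an untested sketch (the file
> name and the query are placeholders): the handle is opened once
> beforehand, only prepare/step/finalize are inside the timing,
> and the same query runs twice so the second pass shows the
> hot-cache time:
>
>   #include <stdio.h>
>   #include <sys/time.h>
>   #include <sqlite3.h>
>
>   static double now_sec(void)
>   {
>       struct timeval tv;
>       gettimeofday(&tv, NULL);
>       return tv.tv_sec + tv.tv_usec / 1e6;
>   }
>
>   static void timed_query(sqlite3 *db, const char *sql, const char *label)
>   {
>       double t0 = now_sec();
>       sqlite3_stmt *stmt = NULL;
>       if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK) {
>           while (sqlite3_step(stmt) == SQLITE_ROW)
>               ;                           /* discard rows; we only measure time */
>           sqlite3_finalize(stmt);
>       }
>       printf("%s: %.3f s\n", label, now_sec() - t0);
>   }
>
>   int main(void)
>   {
>       sqlite3 *db = NULL;
>       const char *sql = "SELECT count(*) FROM sqlite_master";
>
>       /* Opened once, outside the timed part. */
>       sqlite3_open_v2("test.db", &db,
>                       SQLITE_OPEN_READONLY | SQLITE_OPEN_PRIVATECACHE, NULL);
>
>       timed_query(db, sql, "first run (cold cache)");
>       timed_query(db, sql, "second run (hot cache)");
>
>       sqlite3_close(db);
>       return 0;
>   }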
>
> Olaf
>
>
>
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
