Why not just block on the transaction? That will queue the waiting threads,
serializing access to the transaction.
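A minimal sketch of that idea, assuming each client thread holds its own connection and a single shared pthread mutex guards the transaction (the names here are mine, not from the SQLite sources):

    #include <pthread.h>
    #include <sqlite3.h>

    static pthread_mutex_t xact_mutex = PTHREAD_MUTEX_INITIALIZER;

    /* Waiting threads simply queue up inside pthread_mutex_lock(). */
    static int run_transaction(sqlite3 *db, const char *zSql){
      int rc;
      pthread_mutex_lock(&xact_mutex);
      rc = sqlite3_exec(db, "BEGIN", 0, 0, 0);
      if( rc==SQLITE_OK ){
        rc = sqlite3_exec(db, zSql, 0, 0, 0);
        sqlite3_exec(db, rc==SQLITE_OK ? "COMMIT" : "ROLLBACK", 0, 0, 0);
      }
      pthread_mutex_unlock(&xact_mutex);
      return rc;
    }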
Ken wrote:
My assumption for the server thread was that it needed to process all incoming
requests in transaction order and not to lose outstanding requests. You have
two choices once a client initiates a transaction:
a. Reject the incoming request, since a transaction is active in the server. The client would then be able to re-submit the request. This seemed to have a lot of overhead, since the client would need code to resubmit in the event of a reject,
and it would simply sit in a loop re-posting the message until it got a valid acknowledgment.
b. Re-queue the request at the tail, causing the client to block waiting for a response from the server. The active client will eventually complete its transaction, and the next request in the queue will be serviced.
I favored option (b), since it caused less thrashing when a client initiated a read request or another transaction request while a transaction was already in progress.
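A rough sketch of option (b); the Msg layout and the queue_push_tail() helper are hypothetical, and the real code in src/test_server.c is organized differently:

    #include <pthread.h>

    typedef struct Msg Msg;
    struct Msg {
      pthread_t sender;     /* client thread that posted the request */
      const char *zSql;     /* statement to execute */
      Msg *pNext;
    };

    void queue_push_tail(Msg *pMsg);   /* hypothetical queue helper */

    static pthread_t xactOwner;        /* client that opened the transaction */
    static int inTransaction = 0;

    /* One server-loop step: defer requests from other clients while a
    ** transaction is open; the sender stays blocked awaiting its reply. */
    static void serve_one(Msg *pMsg){
      if( inTransaction && !pthread_equal(pMsg->sender, xactOwner) ){
        queue_push_tail(pMsg);         /* move to the tail, try again later */
        return;
      }
      /* ...execute pMsg->zSql here, setting inTransaction/xactOwner on
      ** BEGIN and clearing them on COMMIT or ROLLBACK... */
    }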
Hope that helps.
John Stanton <[EMAIL PROTECTED]> wrote:
Ken wrote:
Richard,
You might want to look at src/test_server.c for an example of the shared_cache
if you haven't found it already.
Personally, I think it makes more sense (read: simpler) to implement
independent connections than to implement a server. But I can see why you might
want a server if you have many threads and memory constraints.
The server can still only have one transaction running at a time, even though the cache is shared. However, it can run multiple select operations and perform dirty reads (when enabled).
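For reference, the dirty-read behavior is the per-connection read_uncommitted pragma that shared-cache mode makes meaningful:

    /* On a shared-cache connection, opt in to dirty reads: */
    sqlite3_exec(db, "PRAGMA read_uncommitted = 1;", 0, 0, 0);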
The biggest difficulty encountered with the server was how to handle client
requests while a transaction is in progress... Do you re-queue, or just fail and
have the client resend? My solution was to record the id of the client thread
that started the transaction. If the server thread encountered a message that
was not from that thread, it moved the message to the end of the queue.
Why not just block on the transaction?
You're welcome to email me directly if you need more info, or call if you'd
like to discuss my experiences with the server/thread approach.
Regards,
Ken
Richard Klein wrote:
Richard Klein wrote:
[EMAIL PROTECTED] wrote:
John Stanton wrote:
Yes, each connection has a cache. A lot of concurrent connections
means a lot of memory allocated to cache and potentially a lot of
duplicated cached items. See shared cache mode for relief.
Yes. But remember that shared cache mode has limitations:
* When shared cache mode is enabled, you cannot use
a connection in a thread other than the thread in which
it was originally created.
* Only connections opened in the same thread share a cache.
The shared cache mode is designed for building a "server thread"
that accepts connection requests and SQL statements via messages
from "client threads", acts upon those requests, and returns the
result.
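A minimal sketch of that pattern, assuming the server thread does all the opening (the filename and function name are illustrative):

    #include <sqlite3.h>

    /* Server thread: enable shared-cache mode before opening connections.
    ** In 3.3.x, every connection sharing the cache must stay in this thread. */
    static int open_client_connections(sqlite3 **ppA, sqlite3 **ppB){
      sqlite3_enable_shared_cache(1);
      if( sqlite3_open("app.db", ppA)!=SQLITE_OK ) return 1;
      if( sqlite3_open("app.db", ppB)!=SQLITE_OK ) return 1;
      return 0;   /* both connections now share one page cache */
    }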
--
D. Richard Hipp
I suppose that I could accomplish almost the same thing in 2.8.17,
even though shared cache mode is not available in that version.
I could have a server thread that opens the database, and then
accepts and processes SQL statements via messages from client
threads.
The only difference would be that the client threads could not
send connection requests. There would be only one connection,
and it would be opened implicitly by the server thread at system
startup.
The benefit would be that all the client threads would effectively
share the same cache, since there would in fact be only one connection.
The cost would be that each SQL statement would require an additional
two context switches to execute.
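A sketch of that round trip under stated assumptions: a hypothetical one-slot mailbox serializes the clients, and the sqlite3_* calls stand in for their 2.8.17 equivalents (sqlite_open()/sqlite_exec()):

    #include <pthread.h>
    #include <sqlite3.h>

    typedef struct {
      pthread_mutex_t mu;    /* assume fields are statically initialized */
      pthread_cond_t  cv;
      const char *zSql;      /* request; NULL when the slot is empty */
      int rc;                /* reply code */
      int done;              /* reply-ready flag */
    } Mailbox;

    static pthread_mutex_t slot = PTHREAD_MUTEX_INITIALIZER;

    /* Client thread: post one statement and block for the reply.  The
    ** round trip is the two extra context switches mentioned above. */
    static int client_exec(Mailbox *m, const char *zSql){
      int rc;
      pthread_mutex_lock(&slot);            /* one client at a time */
      pthread_mutex_lock(&m->mu);
      m->zSql = zSql;
      m->done = 0;
      pthread_cond_broadcast(&m->cv);
      while( !m->done ) pthread_cond_wait(&m->cv, &m->mu);
      rc = m->rc;
      pthread_mutex_unlock(&m->mu);
      pthread_mutex_unlock(&slot);
      return rc;
    }

    /* Server thread: owns the only connection, opened once at startup. */
    static void *server_main(void *pArg){
      Mailbox *m = (Mailbox*)pArg;
      sqlite3 *db;
      sqlite3_open("pvr.db", &db);          /* illustrative filename */
      for(;;){
        pthread_mutex_lock(&m->mu);
        while( m->zSql==0 ) pthread_cond_wait(&m->cv, &m->mu);
        m->rc = sqlite3_exec(db, m->zSql, 0, 0, 0);
        m->zSql = 0;
        m->done = 1;
        pthread_cond_broadcast(&m->cv);
        pthread_mutex_unlock(&m->mu);
      }
      return 0;
    }

Note that this serializes per statement, not per transaction, which is exactly the weakness identified below.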
In my application (TiVo-like Personal Video Recorder functionality
in a set-top box), the benefit of memory savings far outweighs the
cost of a performance hit due to extra context switches.
- Richard
Upon further reflection, I realized that the scheme outlined above
won't work.
The problem can be summed up in one word: TRANSACTIONS. There needs
to be a way to make sure that the SQL statements composing a
transaction in client thread 'A' aren't intermixed with those composing
a transaction in client thread 'B'.
The SQLite connection is the structure designed to keep track of state
information such as whether or not a transaction is in progress. If
client threads 'A' and 'B' share the same connection, then the burden
of maintaining this state information falls on the server thread. Not
a great idea.
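To make the hazard concrete, here is one possible interleaving over a single shared connection (a made-up timeline, reusing the hypothetical client_exec() above):

    /* thread A: client_exec(m, "BEGIN");
    ** thread B: client_exec(m, "BEGIN");      <- fails: transaction already active
    ** thread A: client_exec(m, "INSERT ...");
    ** thread B: client_exec(m, "COMMIT");     <- commits A's half-finished work
    **
    ** Preventing this means the server must track which client owns the
    ** open transaction: state that a per-client connection would carry. */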
Therefore, it would appear that I have two options:
(1) Have the server thread open separate connections for client threads
'A' and 'B', and enable shared cache mode so that the two connections
can share cached items. This option requires upgrading to SQLite version
3.3.0 or higher.
(2) Abandon the idea of a server thread; have threads 'A' and 'B' open
their own connections and access SQLite directly. This option does *not*
allow the sharing of cached items, but allows me to stay with SQLite
version 2.8.17.
- Richard