G'day,

"D. Richard Hipp" <[EMAIL PROTECTED]> wrote on 31/03/2004 01:21 PM,
Re: [sqlite] Concurrency Proposal:

> I think the gist of Ben's proposal is as follows
> (please correct me if I am wrong):
>     Writes do not modify the main database file until
>     they are ready to commit - meaning that reader can
>     continue to read from the file during lengthy
>     transactions.  The journal contains modified pages,
>     not the original pages, and is thus a roll-forward
>     journal rather than a roll-back journal.

I think it's worth posting a suggestion from a co-worker of mine who may 
be known to some, if only by his surname :) He posed the obvious 
question: why is the transaction so long in the first place? My personal 
answer was that I keep a transaction open for a second at a time, in case 
more changes come through. That way I get maximum throughput while 
retaining the consistency guarantee that journaling provides.

His alternative proposal for my situation is simple: buffer the changes 
instead of holding a transaction open. This is something my code could do 
fairly easily, and I'm a bit disappointed I didn't think of it :) If I 
ever get around to changing the code, I'll have it keep a fixed-size 
buffer of changes. Whenever the buffer fills, or one second passes since 
the first buffer entry was inserted, I'll flush the buffer in a single 
short transaction.
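To make the idea concrete, here is a minimal sketch of that buffering 
scheme in Python. The class name, the buffer size, and the one-second 
deadline are my own illustrative choices, not anything from sqlite 
itself; the point is just that each flush is one short transaction 
rather than a transaction held open waiting for more changes.

```python
import sqlite3
import time

class ChangeBuffer:
    """Hypothetical sketch: accumulate writes, then apply them in one
    short transaction when the buffer fills or a deadline passes."""

    def __init__(self, conn, max_size=100, max_age=1.0):
        self.conn = conn
        self.max_size = max_size
        self.max_age = max_age      # seconds since the first buffered change
        self.pending = []           # list of (sql, params) tuples
        self.first_at = None        # time the oldest entry was buffered

    def add(self, sql, params=()):
        if not self.pending:
            self.first_at = time.monotonic()
        self.pending.append((sql, params))
        if (len(self.pending) >= self.max_size
                or time.monotonic() - self.first_at >= self.max_age):
            self.flush()

    def flush(self):
        if not self.pending:
            return
        # One short transaction instead of one held open for a second;
        # `with conn:` commits on success and rolls back on error.
        with self.conn:
            for sql, params in self.pending:
                self.conn.execute(sql, params)
        self.pending = []
        self.first_at = None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (msg TEXT)")
buf = ChangeBuffer(conn, max_size=3)
for i in range(7):
    buf.add("INSERT INTO log VALUES (?)", (f"event {i}",))
buf.flush()  # flush whatever remains
count = conn.execute("SELECT count(*) FROM log").fetchone()[0]
```

In a real application you would also want a timer or event-loop hook to 
trigger the deadline flush even when no new change arrives, but the 
shape of the idea is the same.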

Maybe supporting long write transactions concurrently with readers is a 
requirement for sqlite, but I'm not so sure it's my requirement anymore. 
Perhaps this simple suggestion will make it a requirement for fewer 
current sqlite mailing list users, too ;)

Benjamin.


---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
