Regarding the Journal:
 
 I was thinking that this would be useful in the context of a single process 
with multiple threads and a shared cache. All that would be required is an 
additional thread to handle the logging. 
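
As a rough sketch of the shared-cache part of that idea (the dedicated 
logging thread is left out; the filename, table name, and thread count are 
just placeholders):

#include <sqlite3.h>
#include <pthread.h>

/* Each worker thread opens its own connection; because the shared cache
 * is enabled before any connection exists, they all use one common page
 * cache inside this process. */
static void *worker(void *arg)
{
    sqlite3 *db;
    if (sqlite3_open("app.db", &db) == SQLITE_OK) {
        sqlite3_exec(db, "SELECT count(*) FROM t;", 0, 0, 0);
        sqlite3_close(db);
    }
    return 0;
}

int main(void)
{
    pthread_t tid[4];

    sqlite3_enable_shared_cache(1);  /* must precede all sqlite3_open() calls */

    for (int i = 0; i < 4; i++) pthread_create(&tid[i], 0, worker, 0);
    for (int i = 0; i < 4; i++) pthread_join(tid[i], 0);
    return 0;
}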
 

Christian Smith <[EMAIL PROTECTED]> wrote: Ken uttered:

> Would it be possible to implement a Page level locking system using 
> sqlite?

Deadlock detection/recovery and/or prevention is much more complex for 
page level locking. As there is no shared data between SQLite clients 
other than the database file itself, tracking locks held by other clients is 
impossible beyond the OS level locking.
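
For illustration only: the kind of OS-level coordination the file allows is 
POSIX advisory locking. The byte range below is arbitrary, not SQLite's 
actual lock layout.

#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    int fd = open("app.db", O_RDWR);
    if (fd < 0) return 1;

    struct flock lk = {0};
    lk.l_type   = F_WRLCK;      /* exclusive (write) lock            */
    lk.l_whence = SEEK_SET;
    lk.l_start  = 0;            /* arbitrary byte range for the demo */
    lk.l_len    = 1;

    if (fcntl(fd, F_SETLK, &lk) == -1)
        printf("another process holds the lock\n");
    else
        printf("lock acquired\n");

    close(fd);
    return 0;
}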

A single process with multiple threads could feasibly track page level 
locking, but it would be intra-process only, and would complicate the 
locking significantly for relatively little benefit.

Note that a writer can already operate concurrently with readers until it 
fills its cache, and only has to have exclusive use of the database when 
syncing its cache on spill or commit.
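
For example (just a sketch; the table, cache size, and row count are made 
up), the writer below holds only a reserved lock through most of the 
transaction and needs exclusive access only if its cache spills or at COMMIT:

#include <sqlite3.h>

static void bulk_insert(sqlite3 *db)
{
    /* A larger page cache postpones the spill point, so readers can
     * keep going for longer while this transaction is open. */
    sqlite3_exec(db, "PRAGMA cache_size=2000;", 0, 0, 0);

    sqlite3_exec(db, "BEGIN;", 0, 0, 0);            /* reserved lock only  */
    for (int i = 0; i < 10000; i++)
        sqlite3_exec(db, "INSERT INTO t VALUES(1);", 0, 0, 0);
    sqlite3_exec(db, "COMMIT;", 0, 0, 0);           /* exclusive lock here */
}

int main(void)
{
    sqlite3 *db;
    if (sqlite3_open("app.db", &db) != SQLITE_OK) return 1;
    sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(x);", 0, 0, 0);
    bulk_insert(db);
    sqlite3_close(db);
    return 0;
}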

>
> I seem to recall a commercial product, Ingres, that utilized a page level 
> locking scheme that would escalate to a table level lock when over a 
> certain % of a table's pages were locked.


If you want multiple concurrent writers, use PostgreSQL instead. It uses 
MVCC to avoid even row level locks. And it has an embedded SQL 
pre-processor to boot.


>
> This obviously would be very beneficial for those of us who are 
> implementing threaded applications using sqlite. The shared cache 
> concept would need to be extended so that a global context for the 
> shared cache would be maintained, allowing each individual thread that 
> connects (sqlite3_open) to utilize the shared cache for that DB.
>
> The benefit would be to move the locking from the DB level down to the 
> page level. Keep a list of the before-image pages in memory so that 
> readers are not blocked by the writers (lock holders) of the pages.
>
> When the writer txn completes, mark the txn as completed in the journal.
>
> I think it would be beneficial to revisit the journal design as well for 
> this. Mainly, don't open/close journals; since a journal is always part 
> of the DB, pre-build the journal and allow multiple writer threads 
> into the journal. This should also help with some of the areas inside 
> the pager/journal code where a random checksum is used to validate the 
> journal upon crash recovery.
>
> Just some ideas to improve concurrency that I had and wanted to jot 
> down.


The current journal implementation is simple and fast. Using a single 
rollback journal, rather than a redo-undo journal, minimizes the amount of 
data that has to be written, and removes the need for the asynchronous 
commits that would be required for any other journal design. Consider the 
alternatives for a minute:

- Current journal design:
   Pros: simple, fast for single writer big transactions.
   Cons: Single writer only. Writers block readers. D in ACID requires
         multiple file syncs.

- Write Ahead Redo-Undo log:
   Pros: Potentially multiple writers, ACID with single file sync.
   Cons: Difficult to coordinate between multiple processes. Requires
         async process to write log entries to database file.

- Write Ahead Redo log:
   Pros: Simpler than Redo-Undo log. ACID with single file sync.
   Cons: No better write concurrency than current journal. Still requires
         async process to write log entries to database file.
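
As a small illustration of the "multiple file syncs" cost with the current 
rollback journal (a sketch only; the table and values are placeholders), 
PRAGMA synchronous controls how many of those syncs a COMMIT actually pays 
for:

#include <sqlite3.h>

/* With synchronous=FULL every COMMIT pays for several fsync()s (journal,
 * then database); synchronous=OFF skips them, trading durability on power
 * loss for speed. */
static void commit_with_sync(sqlite3 *db, const char *sync_mode)
{
    sqlite3_exec(db, sync_mode, 0, 0, 0);
    sqlite3_exec(db, "BEGIN;", 0, 0, 0);
    sqlite3_exec(db, "INSERT INTO t VALUES(42);", 0, 0, 0);
    sqlite3_exec(db, "COMMIT;", 0, 0, 0);   /* syncs happen here, if any */
}

int main(void)
{
    sqlite3 *db;
    if (sqlite3_open("app.db", &db) != SQLITE_OK) return 1;
    sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(x);", 0, 0, 0);

    commit_with_sync(db, "PRAGMA synchronous=FULL;");  /* durable, slower   */
    commit_with_sync(db, "PRAGMA synchronous=OFF;");   /* fast, not durable */

    sqlite3_close(db);
    return 0;
}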

>
> DRH, just reviewed the tokenize.c and pager.c, I concur that Yacc/bison 
> have the token passing incorrect.  Your implementation of tokenize is 
> very interesting, what an excellent technique to embed the tokens using 
> overlapping strings.
>
> Can you send me an email address? I have some code that you might find 
> interesting to utilize in the pager.c for the checksums.


Just post it to the list. Perhaps use the ticket tracker and attach your 
patch(es) to a ticket.

Christian

