On Mon, 11 Apr 2005, Thomas Steffen wrote:

>I have a problem where I need both high throughput (10%
>write/delete, 90% read) and durability. My transactions are really
>simple, usually just a single write, delete or read, but it is
>essential that I know when a transaction is committed to disk, so that
>it would be durable after a crash.
>
>I can see that sqlite does an fsync() after each COMMIT, so a naive
>implementation gives *very* bad performance. I could combine several
>operations into one transaction, reducing the amount of time waiting
>for fsync() to finish, but I am not sure whether that is the most
>efficient solution. Is it possible to delay the fsync(), so that it
>only occurs after 10 or 100 transactions?


No.


>
>The reason I ask is that I certainly don't want to roll back if one
>operation fails, because the operations are basically independent of
>each other. And it may be more efficient if the transaction size stays
>small.
>
>Ideas?


How about batching operations? If you get an error, roll back the batch,
redo only the updates that succeeded up to that point, then handle the
failed update in its own transaction. So long as you do the updates in
order, you should have a consistent view at all times.
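
In rough, untested C, assuming a caller-supplied array of independent SQL
statements (the table and names are made up for illustration), the pattern
might look something like this:

    /* Sketch of the batching idea above.  Error handling of
     * BEGIN/COMMIT/ROLLBACK themselves is omitted for brevity. */
    #include <stdio.h>
    #include <sqlite3.h>

    /* Run stmts[0..n-1] as one transaction so they share a single fsync().
     * If statement i fails: roll the batch back, redo and commit the ones
     * that had succeeded, then run the failing one in its own transaction. */
    static int run_batch(sqlite3 *db, const char **stmts, int n)
    {
        int i, j, rc;

        sqlite3_exec(db, "BEGIN", 0, 0, 0);
        for (i = 0; i < n; i++) {
            rc = sqlite3_exec(db, stmts[i], 0, 0, 0);
            if (rc != SQLITE_OK)
                break;
        }
        if (i == n)
            return sqlite3_exec(db, "COMMIT", 0, 0, 0); /* one fsync() for n operations */

        sqlite3_exec(db, "ROLLBACK", 0, 0, 0);          /* discard the batch */

        sqlite3_exec(db, "BEGIN", 0, 0, 0);             /* redo the part that worked */
        for (j = 0; j < i; j++)
            sqlite3_exec(db, stmts[j], 0, 0, 0);
        sqlite3_exec(db, "COMMIT", 0, 0, 0);

        rc = sqlite3_exec(db, stmts[i], 0, 0, 0);       /* failed one, own transaction */
        if (rc != SQLITE_OK)
            fprintf(stderr, "statement %d failed: %s\n", i, sqlite3_errmsg(db));
        return rc;
    }

The larger you make n, the fewer fsync() calls you pay per operation, at the
cost of a bigger redo when something in the batch fails.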


>
>And is there a way to automatically replicate the database to a second system?


No. You would have to implement replication yourself, perhaps using triggers,
or perhaps update the pager layer to synchronise database contents to a
second file. But you'll be on your own.
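
If you went the trigger route, one rough, untested sketch would be to log
every change to a changelog table and have a separate process replay the log
against the second database. The "kv" table and all names below are invented
for illustration:

    #include <sqlite3.h>

    /* Install a changelog table plus triggers that record inserts and
     * deletes on the hypothetical kv table.  A replicator process would
     * read changelog in seq order and apply it to the other database. */
    static int install_changelog(sqlite3 *db)
    {
        const char *ddl =
            "CREATE TABLE changelog("
            " seq INTEGER PRIMARY KEY, op TEXT, key TEXT, value TEXT);"
            "CREATE TRIGGER kv_ins AFTER INSERT ON kv BEGIN"
            " INSERT INTO changelog(op, key, value) VALUES('I', new.key, new.value);"
            " END;"
            "CREATE TRIGGER kv_del AFTER DELETE ON kv BEGIN"
            " INSERT INTO changelog(op, key, value) VALUES('D', old.key, NULL);"
            " END;";
        return sqlite3_exec(db, ddl, 0, 0, 0);
    }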

What would the replica be used for? Does it need to be up to date at all
times?


>
>Thomas
>

Christian

-- 
    /"\
    \ /    ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
     X                           - AGAINST MS ATTACHMENTS
    / \
