On Mon, Apr 11, 2005 at 03:59:56PM +0200, Thomas Steffen wrote:

> I have a problem where I need both high throughput (10%
> write/delete, 90% read) and durability. My transactions are really
> simple, usually just a single write, delete or read, but it is
> essential that I know when a transaction is committed to disk, so that
> it would be durable after a crash.
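For context, the durability requirement in the quoted question -- knowing when a single-statement transaction has reached disk -- is something SQLite itself can already express. A minimal sketch, assuming Python's sqlite3 module (the table name and values here are just illustrative):

```python
import sqlite3, tempfile, os

# With PRAGMA synchronous = FULL (SQLite's default), COMMIT does not
# return until SQLite has fsync()'d the database file, so once commit()
# returns, the transaction should survive a crash.
path = os.path.join(tempfile.mkdtemp(), "test.db")
conn = sqlite3.connect(path)
conn.execute("PRAGMA synchronous = FULL")
conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")

conn.execute("INSERT INTO kv VALUES ('a', '1')")
conn.commit()  # durable once this call returns

# Re-open the file to confirm the row is really on disk.
conn.close()
conn2 = sqlite3.connect(path)
rows = conn2.execute("SELECT v FROM kv WHERE k = 'a'").fetchall()
print(rows)  # [('1',)]
```

So durability per se is not the hard part; as the reply below argues, the concurrency of mixed readers and writers is.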
Why do you want to do this with SQLite, rather than something like PostgreSQL? It sounds like you have both concurrent writers AND concurrent readers, all at the same time, which is going to totally hose your performance on SQLite. Do you have some hard constraint that requires an embedded in-process database library like SQLite, rather than a client-server RDBMS?

Even if you MUST have an embedded db, I would still test against PostgreSQL, as that should tell you whether MVCC can solve your problem. Embedded databases that support MVCC and/or other techniques for much better concurrency do exist; you just might have to pay for them.

You didn't mention your transaction rate, nor what your application even is, but general-purpose RDBMSs are specifically designed to handle transaction processing, so unless your transaction rates are truly huge, an RDBMS with MVCC (PostgreSQL, Oracle) would probably work fine for you. I suspect it's not your total transaction load that's the problem; it's simply that SQLite doesn't support the concurrency you need.

Of course, if that's the case, one solution would be to add MVCC support to SQLite, as has been discussed on the list in the past. That would be cool. :)

--
Andrew Piskorski <[EMAIL PROTECTED]>
http://www.piskorski.com/