On 12/19/06, Laszlo Elteto <[EMAIL PROTECTED]> wrote:
For this particular application it would NOT be a problem to lose, say, 2-5
seconds of transactions. I wonder if it is possible to tell SQLite to "hold
off" the transactions, ACCUMULATE them until a certain time (or until cache
memory is exhausted - which is not yet the case, as we have a modest
database), then make one BIG COMMIT (i.e. all previous transactions committed
or none). That way it's still transactional (i.e. no corrupted database - I
really don't want to use synchronous = OFF), but the I/O wouldn't slow
down serving requests.
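
One way to get roughly that behaviour today is to batch the writes yourself
and commit on a timer, so many logical writes share a single fsync. A minimal
sketch, assuming Python's sqlite3 module; the "events" table, the file name,
and the 5-second window are made-up placeholders:

import sqlite3
import time

conn = sqlite3.connect("app.db", isolation_level=None)  # autocommit; we issue BEGIN/COMMIT ourselves
conn.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, payload TEXT)")

pending = []          # rows accepted but not yet on disk
FLUSH_INTERVAL = 5.0  # seconds of work we are willing to lose
last_flush = time.time()

def flush():
    """Write all pending rows in a single transaction: one fsync for the whole batch."""
    global last_flush
    if pending:
        conn.execute("BEGIN")
        conn.executemany("INSERT INTO events (payload) VALUES (?)", pending)
        conn.execute("COMMIT")
        pending.clear()
    last_flush = time.time()

def record(payload):
    """Accept a write immediately; it reaches disk at the next flush."""
    pending.append((payload,))
    if time.time() - last_flush >= FLUSH_INTERVAL:
        flush()

# Serving requests stays cheap: record() just appends in memory,
# and only the periodic flush() pays the I/O cost.
for i in range(10000):
    record("request %d" % i)
flush()  # final flush before shutdown

A crash loses at most the rows buffered since the last flush, which matches
the "a few seconds of transactions is acceptable" constraint.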

Have you considered a data warehouse sort of setup?
Write your data to a small cache database that's later uploaded to the larger
'big' database.
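
Something along these lines, for example: keep the hot writes in the small
cache database, and every so often attach the big database and move the
accumulated rows over in one shot. A rough sketch, again assuming Python's
sqlite3 and a made-up "events" table that exists in both files; paths and
schedule are placeholders:

import sqlite3

def upload_cache(cache_path="cache.db", main_path="big.db"):
    """Move everything accumulated in the cache DB into the big DB, then empty the cache."""
    conn = sqlite3.connect(cache_path, isolation_level=None)
    conn.execute("ATTACH DATABASE ? AS big", (main_path,))
    conn.execute("BEGIN")
    # Copy, then clear, inside one transaction so the transfer is all-or-nothing.
    conn.execute("INSERT INTO big.events (payload) SELECT payload FROM events")
    conn.execute("DELETE FROM events")
    conn.execute("COMMIT")
    conn.execute("DETACH DATABASE big")
    conn.close()

You'd call upload_cache() from a timer or a background thread, so the request
path only ever touches the small, fast cache file.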

--
The PixAddixImage Collector suite:
http://groups-beta.google.com/group/pixaddix

SqliteImporter and SqliteReplicator: Command line utilities for Sqlite
http://www.reddawn.net/~jsprenkl/Sqlite

Cthulhu Bucks!
http://www.cthulhubucks.com
