On Mon, Apr 11, 2005 at 03:59:56PM +0200, Thomas Steffen wrote:
> I have a problem where I need both a high throughput (10%
> write/delete, 90% read) and durability. My transactions are really
> simple, usually just a single write, delete or read, but it is
> essential that I know when a transaction is committed to disk, so
> that it would be durable after a crash.
On Apr 11, 2005 4:17 PM, Christian Smith <[EMAIL PROTECTED]> wrote:
> On Mon, 11 Apr 2005, Thomas Steffen wrote:
> >Is it possible to delay the fsync(), so that it
> >only occurs after 10 or 100 transactions?
>
> No.
Thought so, because the transaction logging seems to happen at a low
level.
On Mon, 11 Apr 2005, Witold Czarnecki wrote:
>rsync could be better.
Neither would do a good job if the database contents change while you're
copying it. There will be pain and corruption.
The safest way to take a snapshot is to use the sqlite shell .dump
command, and feed the output of that to a second sqlite instance.
On Mon, 11 Apr 2005, Thomas Steffen wrote:
>I have a problem where I need both a high throughput (10%
>write/delete, 90% read) and durability. My transactions are really
>simple, usually just a single write, delete or read, but it is
>essential that I know when a transaction is committed to disk, so that
>it would be durable after a crash.
rsync could be better.
Best Regards,
Witold
And is there a way to automatically replicate the database to a second
system?
Copying the database file should give you an exact replica.
I have a problem where I need both a high throughput (10%
write/delete, 90% read) and durability. My transactions are really
simple, usually just a single write, delete or read, but it is
essential that I know when a transaction is committed to disk, so that
it would be durable after a crash.