Re: [sqlite] delayed (batch) transactions

2006-12-26 Thread Nicolas Williams
On Tue, Dec 26, 2006 at 09:36:42AM -0800, Ken wrote:
> > Your question boils down to this:  Can you speed up transactions
> > by dropping the durable property - the D in ACID.  Yes you
> > can.  Actually, most client/server database engines already
> > do this for you without telling you. Very few client/server
> databases are really ACID - they are usually only ACI when
> > confronted with a power failure.
> 
> Are you sure about this? I'm pretty sure PostgreSQL is ACID by default, 
> and as certain as I can be without proof that the commercial databases (Oracle etc.)
> are as ACID as the hardware they run on.
> 
> They achieve this by not requiring seeks to overwrite data, as the write 
> ahead log is sequential and so can be written at the sequential IO rate of 
> the underlying device in the best case.

Indeed, and it might be that SQLite runs significantly faster on
filesystems that work that way as well.  Even then there's some sort of
super-block that has to be updated for a transaction to be durably
committed, but one can spread alternates around such that as soon as one
is written the transaction is committed (on recovery you have to
follow a chain of superblocks, but it's a small trade-off).
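
A minimal sketch of that alternating-superblock idea, assuming two fixed
slots and a monotonically increasing sequence number (all names, offsets
and the omitted checksum are illustrative; this is not SQLite code):

/* Two commit slots at fixed offsets; whichever holds the higher
 * valid sequence number marks the last durably committed state. */
#include <stdint.h>
#include <unistd.h>

struct commit_slot {
    uint64_t seq;    /* monotonically increasing commit counter */
    uint64_t root;   /* state the commit points at (illustrative) */
};

static uint64_t next_seq = 1;

int commit(int fd, uint64_t root)
{
    struct commit_slot s = { next_seq, root };
    off_t off = (next_seq & 1) ? 0 : sizeof s;   /* alternate slots */
    if (pwrite(fd, &s, sizeof s, off) != (ssize_t)sizeof s) return -1;
    if (fsync(fd) != 0) return -1;   /* once this returns, we're committed */
    next_seq++;
    return 0;
}

/* Recovery (not shown): read both slots and adopt the one with the
 * larger sequence number, then walk the log from there. */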

Nico
-- 




Re: [sqlite] delayed (batch) transactions

2006-12-26 Thread Ken
Regarding Oracle:

It also batches commits. This means that if two or more users submit
commits concurrently (or near the same interrupt), those will be serviced
at the same time.

But Oracle differs significantly from SQLite in its architecture, as that
architecture would not be appropriate for an embedded DB.

Christian Smith <[EMAIL PROTECTED]> wrote:
[EMAIL PROTECTED] uttered:

>
> It takes at least two complete rotations of the disk platter
> to do an atomic and durable commit.  On a 7200 RPM disk, that
> means 60 transactions per second is your speed of light.
>
> Your question boils down to this:  Can you speed up transactions
> by dropping the durable property - the D in ACID.  Yes you
> can.  Actually, most client/server database engines already
> do this for you without telling you. Very few client/server
> databases are really ACID - they are usually only ACI when
> confronted with a power failure.


Are you sure about this? I'm pretty sure PostgreSQL is ACID by default, 
and as certain as I can be without proof that the commercial databases (Oracle etc.)
are as ACID as the hardware they run on.

They achieve this by not requiring seeks to overwrite data, as the write 
ahead log is sequential and so can be written at the sequential IO rate of 
the underlying device in the best case.

The client/server databases are also not limited by the filesystem 
synchronous metadata updates, and so are more likely to achieve the peak 
IO rate of the underlying device.

>
> [snip]
>
> A second approach would be to overload the OS drivers on
> the backend of SQLite to support asynchronous I/O.  Mozilla
> does this in Firefox in order to boost performance on NFS.
> There is well-commented sample code showing how to do this
> in the SQLite source file "test_async.c".  That sample code
> does not combine multiple transactions, but you could probably
> tweak it to make that happen.


When doing some performance testing across different filesystems on Linux,
I found that ext3 with the "data=journal" option was about 50% faster than
the next fastest options (XFS and ext3 with "data=writeback"). Again, this
is because synchronous meta-data, as well as normal synchronous file data,
can be written sequentially in the FS journal at peak IO speed. If I can
find the figures, I'll post them. However, I mislaid the test results and
haven't got a suitable machine to reproduce them at the moment.


> --
> D. Richard Hipp  
>

Christian

PS. If people are after maximum performance, I can recommend ext3 with
 "data=journal" option for SQLite. The performance is stunning!

PPS. Merry Christmas to those so inclined.





Re: [sqlite] delayed (batch) transactions

2006-12-24 Thread Christian Smith

[EMAIL PROTECTED] uttered:



It takes at least two complete rotations of the disk platter
to do an atomic and durable commit.  On a 7200 RPM disk, that
means 60 transactions per second is your speed of light.

Your question boils down to this:  Can you speed up transactions
by dropping the durable property - the D in ACID.  Yes you
can.  Actually, most client/server database engines already
do this for you without telling you. Very few client/server
databases are really ACID - they are usually only ACI when
confronted with a power failure.



Are you sure about this? I'm pretty sure PostgreSQL is ACID by default, 
and as certain as I can be without proof that the commercial databases (Oracle etc.)
are as ACID as the hardware they run on.


They achieve this by not requiring seeks to overwrite data, as the write 
ahead log is sequential and so can be written at the sequential IO rate of 
the underlying device in the best case.


The client/server databases are also not limited by the filesystem 
synchronous metadata updates, and so are more likely to achieve the peak 
IO rate of the underlying device.




[snip]

A second approach would be to overload the OS drivers on
the backend of SQLite to support asynchronous I/O.  Mozilla
does this in Firefox in order to boost performance on NFS.
There is well-commented sample code showing how to do this
in the SQLite source file "test_async.c".  That sample code
does not combine multiple transactions, but you could probably
tweak it to make that happen.



When doing some performance testing across different filesystems on Linux,
I found that ext3 with the "data=journal" option was about 50% faster than
the next fastest options (XFS and ext3 with "data=writeback"). Again, this
is because synchronous meta-data, as well as normal synchronous file data,
can be written sequentially in the FS journal at peak IO speed. If I can
find the figures, I'll post them. However, I mislaid the test results and
haven't got a suitable machine to reproduce them at the moment.




--
D. Richard Hipp  <[EMAIL PROTECTED]>



Christian

PS. If people are after maximum performance, I can recommend ext3 with
"data=journal" option for SQLite. The performance is stunning!

PPS. Merry Christmas to those so inclined.

--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \




Re: [sqlite] delayed (batch) transactions

2006-12-20 Thread Dennis Cote

Laszlo Elteto wrote:

Nested transactions would solve my problem - but only if it worked across
different connections. As I said there are many transactions from various
clients, they may use multiple connections (eg. on a server). I think nested
transactions would still be for ONE particular connection only...

  

Laszlo,

SQLite can only have one client holding a write lock on the database, so 
you can only ever have one client with an open transaction that will 
modify the database.


You can simulate multiple writers if you funnel their requests through a 
single connection. This connection can batch its writes based on time as 
I suggested before.
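
A rough sketch of such a funnel in C, assuming one shared connection
guarded by a pthread mutex (error handling elided; funnel_exec is an
illustrative name):

#include <pthread.h>
#include <sqlite3.h>

static sqlite3 *db;   /* the single shared connection, opened at startup */
static pthread_mutex_t db_mu = PTHREAD_MUTEX_INITIALIZER;

/* Client threads call this instead of touching the connection
 * directly, so all writes are serialized through one connection. */
int funnel_exec(const char *sql)
{
    pthread_mutex_lock(&db_mu);
    int rc = sqlite3_exec(db, sql, 0, 0, 0);
    pthread_mutex_unlock(&db_mu);
    return rc;
}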


If you really need multiple writers you should probably look at a 
different database engine like PostgreSQL.


Dennis Cote




Re: [sqlite] delayed (batch) transactions

2006-12-20 Thread Nicolas Williams
On Wed, Dec 20, 2006 at 01:22:06PM -0500, Laszlo Elteto wrote:
> Nested transactions would solve my problem - but only if it worked across
> different connections. As I said there are many transactions from various
> clients, they may use multiple connections (eg. on a server). I think nested
> transactions would still be for ONE particular connection only...

So funnel everything through one connection.

But SQLite doesn't have nested transaction support.




RE: [sqlite] delayed (batch) transactions

2006-12-20 Thread Laszlo Elteto
Nested transactions would solve my problem - but only if it worked across
different connections. As I said there are many transactions from various
clients, they may use multiple connections (eg. on a server). I think nested
transactions would still be for ONE particular connection only...

Laszlo Elteto

-Original Message-
From: Nicolas Williams [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, December 20, 2006 9:43 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] delayed (batch) transactions

On Wed, Dec 20, 2006 at 02:01:12AM +, [EMAIL PROTECTED] wrote:
> Laszlo Elteto <[EMAIL PROTECTED]> wrote:
> > I DO need Durability, so I don't want to drop that. In fact, I need
> > and want normal transactional updates - just not immediately flushed
> > to disk.
> 
> If the information is not flushed to disk, how can it be durable?

I wonder if what Laszlo wants isn't nested transactions.

Nico
-- 





Re: [sqlite] delayed (batch) transactions

2006-12-20 Thread John Stanton

It is the same problem as trying to get a quart out of a pint bottle.

People looking for higher performance might find that using 15,000 RPM
disks will deliver it without affecting durability or requiring an
application redesign.  Experimenting with multiple disk spindles, so that
SQLite always has a disk head positioned over its data file, also has some
possibility of improving performance.
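
(By the earlier two-rotations-per-commit reckoning: 15,000 RPM / 60 = 250
rotations per second, or 250 / 2 = 125 durable commits per second.)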


[EMAIL PROTECTED] wrote:

Laszlo Elteto <[EMAIL PROTECTED]> wrote:


I DO need Durability, so I don't want to drop that. In fact, I need and want
normal transactional updates - just not immediately flushed to disk. 



If the information is not flushed to disk, how can it be durable?

--
D. Richard Hipp  <[EMAIL PROTECTED]>





Re: [sqlite] delayed (batch) transactions

2006-12-20 Thread Nicolas Williams
On Wed, Dec 20, 2006 at 02:01:12AM +, [EMAIL PROTECTED] wrote:
> Laszlo Elteto <[EMAIL PROTECTED]> wrote:
> > I DO need Durability, so I don't want to drop that. In fact, I need and want
> > normal transactional updates - just not immediately flushed to disk. 
> 
> If the information is not flushed to disk, how can it be durable?

I wonder if what Laszlo wants isn't nested transactions.

Nico
-- 




RE: [sqlite] delayed (batch) transactions

2006-12-20 Thread Laszlo Elteto
That may not be good enough. I actually do need the transactions within a
few seconds (for consistency and availability for other clients) - it's
just that a few seconds of transaction loss is not a problem (on restart
the system would reset to a good state).

It seems nobody has ever done this type of code modification on SQLite :(
Probably I will need to do it myself...

Laszlo Elteto 

-Original Message-
From: Jay Sprenkle [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 19, 2006 9:02 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] delayed (batch) transactions

On 12/19/06, Laszlo Elteto <[EMAIL PROTECTED]> wrote:
> For this particular application it would NOT be a problem to lose like
> 2-5 seconds of transactions. I wonder if it is possible to tell SQLite
> to "hold off" the transactions, ACCUMULATE them until a certain time
> (or if cache memory is exhausted - which is not yet the case as we
> have a modest database), then make a BIG COMMIT (ie. all previous
> transactions committed or none). That way it's still transactional
> (ie. no corrupted database - I really don't want to use sync = OFF)
> but the I/O performance wouldn't slow down serving requests.

Have you considered a data warehouse sort of setup?
Write your data to a small cache database that's later uploaded to the
larger 'big' database.

--
The PixAddixImage Collector suite:
http://groups-beta.google.com/group/pixaddix

SqliteImporter and SqliteReplicator: Command line utilities for Sqlite
http://www.reddawn.net/~jsprenkl/Sqlite

Cthulhu Bucks!
http://www.cthulhubucks.com





Re: [sqlite] delayed (batch) transactions

2006-12-19 Thread Jay Sprenkle

On 12/19/06, Laszlo Elteto <[EMAIL PROTECTED]> wrote:

For this particular application it would NOT be a problem to lose like 2-5
seconds of transactions. I wonder if it is possible to tell SQLite to "hold
off" the transactions, ACCUMULATE them until a certain time (or if cache
memory is exhausted - which is not yet the case as we have a modest
database), then make a BIG COMMIT (ie. all previous transactions committed
or none). That way it's still transactional (ie. no corrupted database - I
really don't want to use sync = OFF) but the I/O performance wouldn't slow
down serving requests.


Have you considered a data warehouse sort of setup?
Write your data to a small cache database that's later uploaded to the larger
'big' database.
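
A minimal sketch of that setup, assuming the small cache database is the
open connection and using ATTACH to reach the big one (file and table
names are illustrative; error handling elided):

#include <sqlite3.h>

/* Periodic job: push everything staged in the small cache database
 * into the big database in one transaction, then empty the cache. */
void flush_cache(sqlite3 *cache)   /* connection opened on cache.db */
{
    sqlite3_exec(cache, "ATTACH DATABASE 'big.db' AS big", 0, 0, 0);
    sqlite3_exec(cache,
        "BEGIN;"
        "INSERT INTO big.requests SELECT * FROM requests;"
        "DELETE FROM requests;"
        "COMMIT;", 0, 0, 0);
    sqlite3_exec(cache, "DETACH DATABASE big", 0, 0, 0);
}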

--
The PixAddixImage Collector suite:
http://groups-beta.google.com/group/pixaddix

SqliteImporter and SqliteReplicator: Command line utilities for Sqlite
http://www.reddawn.net/~jsprenkl/Sqlite

Cthulhu Bucks!
http://www.cthulhubucks.com




Re: [sqlite] delayed (batch) transactions

2006-12-19 Thread drh
Laszlo Elteto <[EMAIL PROTECTED]> wrote:
> I DO need Durability, so I don't want to drop that. In fact, I need and want
> normal transactional updates - just not immediately flushed to disk. 

If the information is not flushed to disk, how can it be durable?

--
D. Richard Hipp  <[EMAIL PROTECTED]>





RE: [sqlite] delayed (batch) transactions

2006-12-19 Thread Laszlo Elteto
Shared cache won't help as my problem is the file flush operations at each
COMMIT, not the reading part. My original test was done in a single-threaded
program and it clearly shows the timing issue is with FlushFileBuffers.

-Original Message-
From: Kees Nuyt [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 19, 2006 12:08 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] delayed (batch) transactions

On Tue, 19 Dec 2006 13:52:19 -0500, you wrote:

>I've started to use SQLite and it works fine - except for performance. 
>The application gets requests (possibly from many users) and does a few 
>transactions on the database. (eg. create a context for the request; 
>later it may be updated, and when the user releases it the record is
>deleted.)
>
>I tried all three sync methods: FULL, NORMAL and OFF. For 100
>transactions (request / check / release) I see 800 open/close (in 
>1200msec), 5800 lock/unlock (22 msec) and 600 (FULL - 17,000 msec), 400 
>(NORMAL - 12,000 msec) or 0 (OFF) file flush operations. The latter is 
>really slow but I understand it. (Have to wait for the actual disk
>operation to complete.)
>
>For this particular application it would NOT be a problem to lose like
>2-5 seconds of transactions. I wonder if it is possible to tell SQLite
>to "hold off" the transactions, ACCUMULATE them until a certain time
>(or if cache memory is exhausted - which is not yet the case as we have
>a modest database), then make a BIG COMMIT (ie. all previous
>transactions committed or none). That way it's still transactional (ie.
>no corrupted database - I really don't want to use sync = OFF) but the
>I/O performance wouldn't slow down serving requests.
>
>Has anybody done that already? If yes, where can I find such a modified
>SQLite source?
>I saw that all file I/O related calls are in two files: pager.c and 
>vdbeaux.c so they are already well isolated and relatively easy to 
>understand.
>
>Any help with such a "delayed" transaction method in SQLite?
>(I tried to find any relevant message in the email archives but came up
>empty.)
>
>Thanks,
>
>Laszlo Elteto
>CISSP, System Architect
>SafeNet, Inc.

I wonder if shared cache would help you?
http://www.sqlite.org/sharedcache.html

In general, in a high concurrency environment SQLite might not be a suitable
solution.
http://www.sqlite.org/whentouse.html
--
  (  Kees Nuyt
  )
c[_]





RE: [sqlite] delayed (batch) transactions

2006-12-19 Thread Laszlo Elteto
The problem with that approach is that SQLite doesn't support nested
transactions. And I do have transactions which sometimes have to be rolled
back. Oh and there are multiple threads involved. I don't think it's really
good to start a transaction in one thread, add a few more from several other
threads, then finish it from yet another thread.

Laszlo Elteto
SafeNet, Inc. 

-Original Message-
From: Dennis Cote [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 19, 2006 11:30 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] delayed (batch) transactions

Laszlo Elteto wrote:
> For this particular application it would NOT be a problem to lose like
> 2-5 seconds of transactions. I wonder if it is possible to tell SQLite
> to "hold off" the transactions, ACCUMULATE them until a certain time
> (or if cache memory is exhausted - which is not yet the case as we
> have a modest database), then make a BIG COMMIT (ie. all previous
> transactions committed or none). That way it's still transactional
> (ie. no corrupted database - I really don't want to use sync = OFF)
> but the I/O performance wouldn't slow down serving requests.
>
>   
Laszlo,

You should be able to do this yourself without changing the SQLite source
code.

You can create two functions to wrap the begin and end transaction
operations. You can have your begin function check for an existing
transaction and only open a new one if there isn't one open yet. It records
the start time for the transaction. The close function counts down until all
open transactions are closed. If the current time is more than your limit
after the start time, it actually closes the transaction and flushes the
changes to disk. Pseudo code is below:

begin_transaction
    if transaction open
        increment open count
    else
        open transaction
        set transaction open to true
        set transaction start time
        set open count to 1

end_transaction
    decrement open count
    if open count = 0
        if now - transaction start time > 5 seconds
            close transaction
            set transaction open to false

HTH
Dennis Cote





RE: [sqlite] delayed (batch) transactions

2006-12-19 Thread Laszlo Elteto
I DO need Durability, so I don't want to drop that. In fact, I need and want
normal transactional updates - just not immediately flushed to disk. I've
looked at the source and know there is no simple compile option for what I
want. My question was more like: has anybody already done this? (So I don't
have to do the work myself IF it's already been done.)

Laszlo Elteto
SafeNet, Inc.

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 19, 2006 11:15 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] delayed (batch) transactions

Laszlo Elteto <[EMAIL PROTECTED]> wrote:
> I've started to use SQLite and it works fine - except for performance. 
> The application gets requests (possibly from many users) and does a 
> few transactions on the database. (eg. create a context for the 
> request; later it may be updated, and when the user releases it the
> record is deleted.)
> 
> I tried all three sync methods: FULL, NORMAL and OFF. For 100
> transactions (request / check / release) I see 800 open/close (in 
> 1200msec), 5800 lock/unlock (22 msec) and 600 (FULL - 17,000 msec), 
> 400 (NORMAL - 12,000 msec) or 0 (OFF) file flush operations. The 
> latter is really slow but I understand it. (Have to wait for the actual
> disk operation to complete.)
> 
> For this particular application it would NOT be a problem to lose like
> 2-5 seconds of transactions. I wonder if it is possible to tell SQLite
> to "hold off" the transactions, ACCUMULATE them until a certain time
> (or if cache memory is exhausted - which is not yet the case as we
> have a modest database), then make a BIG COMMIT (ie. all previous
> transactions committed or none). That way it's still transactional
> (ie. no corrupted database - I really don't want to use sync = OFF)
> but the I/O performance wouldn't slow down serving requests.
> 

It takes at least two complete rotations of the disk platter to do an atomic
and durable commit.  On a 7200 RPM disk, that means 60 transactions per
second is your speed of light.

Your question boils down to this:  Can you speed up transactions by dropping
the durable property - the D in ACID.  Yes you can.  Actually, most
client/server database engines already do this for you without telling you.
Very few client/server databases are really ACID - they are usually only ACI
when confronted with a power failure.

There is no simple pragma setting or anything like that to drop durability
from SQLite simply because there is no server process hanging around to make
sure that transactions get committed atomically in the background.  You have
to do the background commits yourself.  There are various ways to do this.

One approach would be to write your changes to one or more TEMP tables.
Writes to TEMP tables are always done with synchronous=OFF since TEMP tables
do not need to survive a power loss.  So writes to TEMP tables are fast.
Then have your application periodically transfer the information in TEMP
tables over to the main database.

A second approach would be to overload the OS drivers on the backend of
SQLite to support asynchronous I/O.  Mozilla does this in Firefox in order
to boost performance on NFS.
There is well-commented sample code showing how to do this in the SQLite
source file "test_async.c".  That sample code does not combine multiple
transactions, but you could probably tweak it to make that happen.
--
D. Richard Hipp  <[EMAIL PROTECTED]>






Re: [sqlite] delayed (batch) transactions

2006-12-19 Thread Kees Nuyt
On Tue, 19 Dec 2006 13:52:19 -0500, you wrote:

>I've started to use SQLite and it works fine - except for performance. The
>application gets requests (possibly from many users) and does a few
>transactions on the database. (eg. create a context for the request; later
>it may be updated, and when the user releases it the record is deleted.)
>
>I tried all three sync methods: FULL, NORMAL and OFF. For 100
>transactions (request / check / release) I see 800 open/close (in 1200msec),
>5800 lock/unlock (22 msec) and 600 (FULL - 17,000 msec), 400 (NORMAL -
>12,000 msec) or 0 (OFF) file flush operations. The latter is really slow but
>I understand it. (Have to wait for the actual disk operation to complete).
>
>For this particular application it would NOT be a problem to lose like 2-5
>seconds of transactions. I wonder if it is possible to tell SQLite to "hold
>off" the transactions, ACCUMULATE them until a certain time (or if cache
>memory is exhausted - which is not yet the case as we have a modest
>database), then make a BIG COMMIT (ie. all previous transactions committed
>or none). That way it's still transactional (ie. no corrupted database - I
>really don't want to use sync = OFF) but the I/O performance wouldn't slow
>down serving requests.
>
>Has anybody done that already? If yes, where can I find such a modified
>SQLite source?
>I saw that all file I/O related calls are in two files: pager.c and
>vdbeaux.c so they are already well isolated and relatively easy to
>understand.
>
>Any help with such a "delayed" transaction method in SQLite?
>(I tried to find any relevant message in the email archives but came up
>empty.)
>
>Thanks,
>
>Laszlo Elteto
>CISSP, System Architect
>SafeNet, Inc.

I wonder if shared cache would help you?
http://www.sqlite.org/sharedcache.html
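
For reference, shared cache mode is enabled per process before connections
are opened; a minimal sketch ("app.db" is a placeholder name):

#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    sqlite3_enable_shared_cache(1);  /* enable before opening connections */
    sqlite3_open("app.db", &db);     /* connections opened afterwards can
                                      * share one page cache */
    /* ... */
    sqlite3_close(db);
    return 0;
}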

In general, in a high concurrency environment SQLite might not
be a suitable solution.
http://www.sqlite.org/whentouse.html
-- 
  (  Kees Nuyt
  )
c[_]




Re: [sqlite] delayed (batch) transactions

2006-12-19 Thread Dennis Cote

Laszlo Elteto wrote:

For this particular application it would NOT be a problem to lose like 2-5
seconds of transactions. I wonder if it is possible to tell SQLite to "hold
off" the transactions, ACCUMULATE them until a certain time (or if cache
memory is exhausted - which is not yet the case as we have a modest
database), then make a BIG COMMIT (ie. all previous transactions committed
or none). That way it's still transactional (ie. no corrupted database - I
really don't want to use sync = OFF) but the I/O performance wouldn't slow
down serving requests.

  

Laszlo,

You should be able to do this yourself without changing the SQLite 
source code.


You can create two functions to wrap the begin and end transaction 
operations. You can have your begin function check for an existing 
transaction and only open a new one if there isn't one open yet. It 
records the start time for the transaction. The close function counts 
down until all open transactions are closed. If the current time is more 
than your limit after the start time, it actually closes the transaction 
and flushes the changes to disk. Pseudo code is below:


begin_transaction
    if transaction open
        increment open count
    else
        open transaction
        set transaction open to true
        set transaction start time
        set open count to 1

end_transaction
    decrement open count
    if open count = 0
        if now - transaction start time > 5 seconds
            close transaction
            set transaction open to false
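
A minimal C rendering of those wrappers, assuming a single connection used
from one thread, with the 5-second limit from the pseudo code above (error
handling elided):

#include <sqlite3.h>
#include <time.h>

static int    txn_open;      /* is a database transaction open? */
static int    open_count;    /* nesting depth of logical transactions */
static time_t txn_start;     /* when the open transaction began */

void begin_transaction(sqlite3 *db)
{
    if (txn_open) {
        open_count++;                    /* join the open transaction */
    } else {
        sqlite3_exec(db, "BEGIN", 0, 0, 0);
        txn_open = 1;
        txn_start = time(0);
        open_count = 1;
    }
}

void end_transaction(sqlite3 *db)
{
    if (--open_count == 0 && time(0) - txn_start > 5) {
        sqlite3_exec(db, "COMMIT", 0, 0, 0);  /* flush the batch to disk */
        txn_open = 0;
    }   /* otherwise the transaction stays open for the next caller */
}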

HTH
Dennis Cote




Re: [sqlite] delayed (batch) transactions

2006-12-19 Thread drh
Laszlo Elteto <[EMAIL PROTECTED]> wrote:
> I've started to use SQLite and it works fine - except for performance. The
> application gets requests (possibly from many users) and does a few
> transactions on the database. (eg. create a context for the request; later
> it may be updated, and when the user releases it the record is deleted.)
> 
> I tried all three sync methods: FULL, NORMAL and OFF. For 100
> transactions (request / check / release) I see 800 open/close (in 1200msec),
> 5800 lock/unlock (22 msec) and 600 (FULL - 17,000 msec), 400 (NORMAL -
> 12,000 msec) or 0 (OFF) file flush operations. The latter is really slow but
> I understand it. (Have to wait for the actual disk operation to complete).
> 
> For this particular application it would NOT be a problem to lose like 2-5
> seconds of transactions. I wonder if it is possible to tell SQLite to "hold
> off" the transactions, ACCUMULATE them until a certain time (or if cache
> memory is exhausted - which is not yet the case as we have a modest
> database), then make a BIG COMMIT (ie. all previous transactions committed
> or none). That way it's still transactional (ie. no corrupted database - I
> really don't want to use sync = OFF) but the I/O performance wouldn't slow
> down serving requests.
> 

It takes at least two complete rotations of the disk platter
to do an atomic and durable commit.  On a 7200 RPM disk, that
means 60 transactions per second is your speed of light.
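
(Worked out: 7200 RPM / 60 = 120 rotations per second; at two rotations
per commit, that is 120 / 2 = 60 commits per second.)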

Your question boils down to this:  Can you speed up transactions
by dropping the durable property - the D in ACID.  Yes you
can.  Actually, most client/server database engines already
do this for you without telling you. Very few client/server
databases are really ACID - they are usually only ACI when
confronted with a power failure.

There is no simple pragma setting or anything like that to
drop durability from SQLite simply because there is no 
server process hanging around to make sure that transactions
get committed atomically in the background.  You have to
do the background commits yourself.  There are various ways
to do this.

One approach would be to write your changes to one or more
TEMP tables.  Writes to TEMP tables are always done with
synchronous=OFF since TEMP tables do not need to survive
a power loss.  So writes to TEMP tables are fast.  Then
have your application periodically transfer the information
in TEMP tables over to the main database.
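
A minimal sketch of the TEMP-table approach (table and column names are
illustrative; error handling elided):

#include <sqlite3.h>

/* Fast path: stage each change in a TEMP table, which is never synced. */
void record(sqlite3 *db, int ctx)
{
    sqlite3_exec(db,
        "CREATE TEMP TABLE IF NOT EXISTS pending(ctx INTEGER)", 0, 0, 0);
    char *sql = sqlite3_mprintf("INSERT INTO pending VALUES(%d)", ctx);
    sqlite3_exec(db, sql, 0, 0, 0);
    sqlite3_free(sql);
}

/* Periodic job: move the whole batch into the main database at once. */
void flush_pending(sqlite3 *db)
{
    sqlite3_exec(db,
        "BEGIN;"
        "INSERT INTO contexts SELECT ctx FROM pending;"
        "DELETE FROM pending;"
        "COMMIT;", 0, 0, 0);
}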

A second approach would be to overload the OS drivers on
the backend of SQLite to support asynchronous I/O.  Mozilla
does this in Firefox in order to boost performance on NFS.
There is well-commented sample code showing how to do this
in the SQLite source file "test_async.c".  That sample code
does not combine multiple transactions, but you could probably
tweak it to make that happen.
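
test_async.c itself is the code to study; purely as a toy illustration of
the general shape (not SQLite's actual backend interface), a background
thread draining a write queue might look like:

#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Callers append write requests; one background thread drains them. */
struct wreq { struct wreq *next; off_t off; size_t n; char buf[]; };

static struct wreq *head, *tail;
static pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;

void async_write(off_t off, const void *p, size_t n)
{
    struct wreq *r = malloc(sizeof *r + n);
    r->next = 0; r->off = off; r->n = n;
    memcpy(r->buf, p, n);
    pthread_mutex_lock(&mu);
    if (tail) tail->next = r; else head = r;
    tail = r;
    pthread_cond_signal(&cv);
    pthread_mutex_unlock(&mu);
}

void *writer_thread(void *arg)          /* arg: fd of the database file */
{
    int fd = *(int *)arg;
    for (;;) {
        pthread_mutex_lock(&mu);
        while (head == 0) pthread_cond_wait(&cv, &mu);
        struct wreq *r = head;
        head = r->next;
        if (head == 0) tail = 0;
        pthread_mutex_unlock(&mu);
        pwrite(fd, r->buf, r->n, r->off);  /* the caller returned long ago */
        free(r);
    }
    return 0;
}
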
--
D. Richard Hipp  <[EMAIL PROTECTED]>





[sqlite] delayed (batch) transactions

2006-12-19 Thread Laszlo Elteto
I've started to use SQLite and it works fine - except for performance. The
application gets requests (possibly from many users) and does a few
transactions on the database. (eg. create a context for the request; later
it may be updated, and when the user releases it the record is deleted.)

I tried all three sync methods: FULL, NORMAL and OFF. For 100
transactions (request / check / release) I see 800 open/close (in 1200 msec),
5800 lock/unlock (22 msec) and 600 (FULL - 17,000 msec), 400 (NORMAL -
12,000 msec) or 0 (OFF) file flush operations. The latter is really slow but
I understand it. (Have to wait for the actual disk operation to complete.)
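
(For reference, the sync method is selected per connection with a pragma;
e.g., assuming an open sqlite3 *db handle:

sqlite3_exec(db, "PRAGMA synchronous = NORMAL", 0, 0, 0);

with FULL, NORMAL or OFF as the value.)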

For this particular application it would NOT be a problem to lose like 2-5
seconds of transactions. I wonder if it is possible to tell SQLite to "hold
off" the transactions, ACCUMULATE them until a certain time (or if cache
memory is exhausted - which is not yet the case as we have a modest
database), then make a BIG COMMIT (ie. all previous transactions committed
or none). That way it's still transactional (ie. no corrupted database - I
really don't want to use sync = OFF) but the I/O performance wouldn't slow
down serving requests.

Has anybody done that already? If yes, where can I find such a modified
SQLite source?
I saw that all file I/O related calls are in two files, pager.c and
vdbeaux.c, so they are already well isolated and relatively easy to
understand.

Any help with such a "delayed" transaction method in SQLite?
(I tried to find any relevant message in the email archives but came up
empty.)

Thanks,

Laszlo Elteto
CISSP, System Architect
SafeNet, Inc.
