[sqlite] Re: database table is locked when trying to delete a record after select

2006-01-08 Thread Igor Tandetnik

Justin Wu wrote:

I always get a 'database table is locked' message when trying to
delete a record after a select.


You are not deleting after the select - you are deleting while the
select is still in progress. You can't perform a write operation in the
middle of a read. You have to run the select, remember the rows to be
deleted, reset or finalize the select statement, and only then perform
the deletes.
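
A minimal sketch of that pattern, reusing the ss_proxy table and the id
test from the original post (error handling trimmed, and the fixed-size
ids array is just for illustration):

#include <stdio.h>
#include <sqlite3.h>

static void select_then_delete(sqlite3 *db) {
    sqlite3_stmt *stmt = 0;
    int ids[256];
    int n = 0;

    /* Phase 1: read, remembering which rows should go. */
    sqlite3_prepare(db, "SELECT ProxyId FROM ss_proxy", -1, &stmt, 0);
    while (sqlite3_step(stmt) == SQLITE_ROW && n < 256) {
        int id = sqlite3_column_int(stmt, 0);
        if (id == 1742 || id == 1743)
            ids[n++] = id;
    }
    sqlite3_finalize(stmt);   /* the read is over; writes are allowed now */

    /* Phase 2: write. */
    for (int i = 0; i < n; i++) {
        char *sql = sqlite3_mprintf(
            "DELETE FROM ss_proxy WHERE ProxyId=%d;", ids[i]);
        sqlite3_exec(db, sql, 0, 0, 0);
        sqlite3_free(sql);
    }
}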


Igor Tandetnik 



[sqlite] database table is locked when trying to delete a record after select

2006-01-08 Thread Justin Wu
I always get a 'database table is locked' message when trying to
delete a record after a select.

I've tested on WinXP and on Ubuntu 5.10 and got the same error on both.

SQLite version 3.2.8

Please see the following code:

#include <stdio.h>
#include <stdlib.h>
#include <sqlite3.h>

#define DATFILE "test.dat"

int main(int argc, char *argv[]) {
    // sqlite
    sqlite3 *db = 0;
    sqlite3_stmt *stmt = 0;
    char *sql;
    int rc;
    int id = 0;

    // Open the database
    if (SQLITE_OK != sqlite3_open(DATFILE, &db)) {
        printf("!!! Couldn't open the database - %s\n", DATFILE);
        exit(1);
    }

    sqlite3_busy_timeout(db, 3000);

    if (SQLITE_OK != sqlite3_prepare(db,
            "SELECT ProxyId, Host, Port FROM ss_proxy", -1, &stmt, 0)) {
        printf("!!! sqlite3_prepare::%s", sqlite3_errmsg(db));
        sqlite3_close(db);
        exit(1);
    }

    //sqlite3_exec(db, "BEGIN;", 0, 0, 0);
    while (SQLITE_ROW == sqlite3_step(stmt)) {
        id = sqlite3_column_int(stmt, 0);
        printf("*** %d. %s:%d\n", sqlite3_column_int(stmt, 0),
               (const char *)sqlite3_column_text(stmt, 1),
               sqlite3_column_int(stmt, 2));
        // I will do something with every record; for now the simple test
        // '(id == 1742 || id == 1743)' stands in for it.
        if (id == 1742 || id == 1743) {
            //sqlite3_reset(stmt);
            // database table is locked - [rc=6]
            sql = sqlite3_mprintf("DELETE FROM ss_proxy WHERE ProxyId=%d;", id);
            rc = sqlite3_exec(db, sql, 0, 0, 0);
            if (SQLITE_OK != rc) {
                printf("*** delete failed - %s - [rc=%d]\n",
                       sqlite3_errmsg(db), rc);
            } else {
                printf("*** %d record deleted!\n", sqlite3_changes(db));
            }
            sqlite3_free(sql);
        }
    }
    //sqlite3_exec(db, "COMMIT;", 0, 0, 0);

    sqlite3_finalize(stmt);
    // Close the database
    sqlite3_close(db);
    return 0;
}


Re: [sqlite] proposal for improving concurrency in SQLite

2006-01-08 Thread John Stanton

Alexander,

I like your general concept.  Some years ago I implemented a somewhat
similar strategy in a product of ours which was SQLite-like in that it
linked into each application process and managed B-Trees.  By
identifying read-only transactions and handling them more simply than
read/write ones, it achieved a high degree of concurrency in practice,
with thousands of concurrent users getting adequate response times.  As
you point out, typical applications are read-intensive, not
write-intensive.


Alexander Kozlovsky wrote:

I think there is another way for high-concurrency SQLite-based systems.
[snip -- the full proposal appears in the original post below]





Re: [sqlite] proposal for improving concurrency in SQLite

2006-01-08 Thread Alexander Kozlovsky
Hello!

I think there is another way to build high-concurrency SQLite-based systems.

**It is not entirely universal**, but I hope it can be used for
high-traffic web sites and similar kinds of systems, where each individual
transaction (such as a page retrieval or a form submission) is very
short and can be replayed automatically in case of failure.

The main idea is an additional abstraction layer on top of
SQLite. This layer is a high-level transaction management system.
Each high-level optimistic transaction is (almost) fully ACID,
but many high-level transactions execute within the scope of a single
low-level SQLite transaction. This way the number of file buffer
flushes is decreased significantly.

Below is a very simplified example. Imagine this high-level transaction:

BEGIN
  SELECT ...
  -- potentially slow processing occurred here
  SELECT ...
  -- potentially slow processing occurred here
  INSERT ...
  UPDATE ...
  DELETE ...
COMMIT

In many cases this transaction can be split into two independent
transactions:
1. A read-only transaction
2. A read-write optimistic transaction with conflict detection

BEGIN
  SELECT ...
  -- potentially slow processing occurred here
  SELECT ...
  -- potentially slow processing occurred here
COMMIT

BEGIN
  SELECT ... -- additional short SELECTs for conflict detection
  -- ROLLBACK if some conflict was detected
  INSERT ...
  UPDATE ...
  DELETE ...
COMMIT

Because the first transaction is read-only, it may be performed
in parallel with similar transactions. That is, no locking
conflicts occur during this phase.

Because the second transaction contains optimistic conflict
detection, it can be combined with many similar transactions
into one low-level SQLite transaction. This way, the number of
file buffer flushes is reduced greatly. This transaction
will be relatively short because no application-specific slow
processing takes place in this phase.
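
For illustration only, here is a minimal C sketch of one common way to
implement such a conflict check, assuming a hypothetical accounts table
with a version column that the read-only phase recorded earlier (neither
the table nor the helper is part of the proposal itself):

#include <sqlite3.h>

/* Bump the version as part of the write; if another transaction already
 * changed the row, the WHERE clause matches nothing and we report a
 * conflict instead of silently overwriting. */
static int update_if_unchanged(sqlite3 *db, int id, int seen_version,
                               int new_balance) {
    char *sql = sqlite3_mprintf(
        "UPDATE accounts SET balance=%d, version=version+1 "
        "WHERE id=%d AND version=%d;",
        new_balance, id, seen_version);
    sqlite3_exec(db, sql, 0, 0, 0);
    sqlite3_free(sql);
    return sqlite3_changes(db) == 1;   /* 0 changes => optimistic conflict */
}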

The main trick is how to split the original transaction
automatically. It may be very hard if the original transaction
is expressed directly in SQL. I hope such a transformation can be
performed much more easily if the operations are expressed in terms
of some object-relational mapping system. Such an ORM can begin a
read-only SQLite transaction and then delay all modifications
until the end of the high-level transaction. If application code
tries to execute a direct SQL data-modification query, the
read-only transaction degrades smoothly to an unbatched
read-write transaction.

On a typical web site the majority of all transactions are read-only.
If the web site must perform 1000 read-only transactions per second
and 20 read-write transactions per second, then those 20 read-write
transactions can be slightly delayed and then processed in a
batch as a single low-level SQLite transaction.

But what if an optimistic locking conflict is detected in one
of these 20 transactions? Does that mean the other 19 high-level
transactions must be rolled back, along with the low-level SQLite
transaction? No. Only the transaction with the conflict
is excluded from the low-level SQLite transaction, and the
entire HTTP request that produced it is then retried
by the web server.
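
As a rough sketch (my own illustration; the pending_tx structure and its
callbacks are hypothetical, not part of the proposal), the batch commit
step could look like this:

#include <sqlite3.h>

typedef struct {
    int  (*check)(sqlite3 *db, void *ctx); /* returns 1 when no conflict */
    void (*apply)(sqlite3 *db, void *ctx); /* the delayed INSERT/UPDATE/DELETEs */
    void *ctx;
    int  rejected;                         /* set when the check fails */
} pending_tx;

/* Run many delayed high-level write transactions inside one low-level
 * SQLite transaction.  A failed conflict check excludes only that item
 * (the web server replays its HTTP request later); the whole batch
 * costs a single COMMIT, i.e. one round of buffer flushes. */
static void commit_batch(sqlite3 *db, pending_tx *txs, int n) {
    sqlite3_exec(db, "BEGIN;", 0, 0, 0);
    for (int i = 0; i < n; i++) {
        if (txs[i].check(db, txs[i].ctx))
            txs[i].apply(db, txs[i].ctx);
        else
            txs[i].rejected = 1;
    }
    sqlite3_exec(db, "COMMIT;", 0, 0, 0);
}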

In the near future I'll try to implement a SQLite-oriented ORM
that demonstrates this approach.


Best regards,
 Alexander    mailto:[EMAIL PROTECTED]



Re: [sqlite] INSERT OR REPLACE behavior

2006-01-08 Thread Jim C. Nasby
On Sat, Jan 07, 2006 at 02:16:46PM -0500, [EMAIL PROTECTED] wrote:
> Isaac Raway <[EMAIL PROTECTED]> wrote:
> > [I]nstead of dropping the existing row, [the REPLACE algorithm should]
> > simply update the provided fields in place, leaving the unmodified fields
> > as is. I'd call this behavior OR UPDATE as it would be exactly equivalent
> > to doing an UPDATE on the existing row(s).
> 
> There might be two or more rows that conflict.  How would it choose
> which row to use as the basis?

Not according to the original email...
> When a UNIQUE constraint violation occurs, the pre-existing rows that
> are causing the constraint violation are removed prior to inserting or
> updating the current row.
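
A small self-contained program (my example, not from the thread) makes
the multi-row case concrete: the new row collides with one existing row
on each of two UNIQUE columns, so REPLACE removes both, and an
'OR UPDATE' would have no single row to use as the basis:

#include <stdio.h>
#include <sqlite3.h>

static int print_count(void *ctx, int ncol, char **val, char **name) {
    printf("rows remaining: %s\n", val[0]);   /* prints 1, not 2 or 3 */
    return 0;
}

int main(void) {
    sqlite3 *db;
    sqlite3_open(":memory:", &db);
    sqlite3_exec(db,
        "CREATE TABLE t(a INTEGER UNIQUE, b INTEGER UNIQUE, note TEXT);"
        "INSERT INTO t VALUES(1, 10, 'row one');"
        "INSERT INTO t VALUES(2, 20, 'row two');"
        /* (1, 20) conflicts with row one on a AND with row two on b,
           so REPLACE deletes both before inserting: */
        "INSERT OR REPLACE INTO t VALUES(1, 20, 'row three');"
        "SELECT count(*) FROM t;",
        print_count, 0, 0);
    sqlite3_close(db);
    return 0;
}
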
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.com   work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461