Re: [sqlite] Transaction commit hangs up

2018-12-11 Thread Shawn Wagner
Do you still have the issue with a current version of the library?

On Tue, Dec 11, 2018, 12:53 AM Aydin Ozgur Yagmur wrote:

> I have been using SQLite 3.8.11 on a FAT32 file system. Sometimes, while
> committing a transaction, the commit hangs and the database becomes
> locked. I am trying to find the cause of or a solution to this problem. Has
> anyone encountered a similar problem before? Do you have any suggestions or
> ideas about this problem?
>
> Thank you,
> yagmur
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Transaction commit hangs up

2018-12-11 Thread Aydin Ozgur Yagmur
I have been using SQLite 3.8.11 on a FAT32 file system. Sometimes, while
committing a transaction, the commit hangs and the database becomes
locked. I am trying to find the cause of or a solution to this problem. Has
anyone encountered a similar problem before? Do you have any suggestions or
ideas about this problem?

Thank you,
yagmur
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] transaction during reading on read-only database

2016-10-06 Thread Klaas Van B.
Ciao Andrii,

To keep the results of a query consistent, it is best to start a transaction. A 
database can be updated by only one thread at a time, but read by many.
Suppose you need a list of all employees of a department. If you fetch them 
one at a time, a lot of changes can happen in between: an employee can be moved 
to another department, be fired, or a new one can be added. Another reason is 
speed. All (or most, if it is a huge department and/or a tiny cache) of the 
employees are in the cache, so you are sure the requested list shows the 
situation at the moment of its creation.
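For illustration only, a minimal C sketch of that pattern (error handling
trimmed, table and column names hypothetical): several reads inside one
transaction all see the same snapshot of the database.

  #include <sqlite3.h>
  #include <stdio.h>

  /* Read related result sets inside one read transaction so a concurrent
  ** writer cannot change the data between the individual queries. */
  static int read_consistently(sqlite3 *db){
    sqlite3_stmt *stmt;
    int rc = sqlite3_exec(db, "BEGIN", 0, 0, 0);   /* starts a deferred transaction */
    if( rc!=SQLITE_OK ) return rc;

    rc = sqlite3_prepare_v2(db,
        "SELECT name FROM employee WHERE dept=?1", -1, &stmt, 0);
    if( rc==SQLITE_OK ){
      sqlite3_bind_int(stmt, 1, 42);               /* hypothetical department id */
      while( sqlite3_step(stmt)==SQLITE_ROW ){
        printf("%s\n", (const char*)sqlite3_column_text(stmt, 0));
      }
      sqlite3_finalize(stmt);
    }

    /* ... further SELECTs here see exactly the same snapshot ... */

    return sqlite3_exec(db, "COMMIT", 0, 0, 0);    /* a read transaction just ends */
  }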

Andrii Motsok wrote Thu, 6 Oct 2016 11:43:10:

>My understanding is that any read from the database automatically starts a
>read transaction.
>The question: why does SQLite need to do this on a read-only database?


Kind regards | Vriendelijke groeten | Cordiali saluti,
Klaas `Z4us` van Buiten V, Experienced Freelance ICT-Guy
https://www.linkedin.com/in/klaas-van-buiten-0325b2102
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] transaction during reading on read-only database

2016-10-06 Thread Dominique Pellé
Andrii Motsok wrote:

> Hi,
>
>
> My understanding is that any read from the database automatically
> starts a read transaction. The question: why does SQLite need to do this on
> a read-only database?
>
>
> Regards,
>
> Andrii

You may open a database as read-only, but another process can open it
as read-write.  So SQLite needs to check the lock even when the DB
is opened as read-only.

Now, if the database is on a read-only file system, it is possible to tell
SQLite by opening it with file:foo.sqlite?immutable=1 (see
https://www.sqlite.org/uri.html), and that can save some time. But make sure
to read the caveats at the above URL.
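A minimal C sketch of opening with that URI parameter (the file name is just
an example); the SQLITE_OPEN_URI flag is needed unless URI handling is already
enabled globally:

  #include <sqlite3.h>

  int open_immutable(sqlite3 **pDb){
    /* With immutable=1, SQLite assumes no other process will ever change the
    ** file, so it skips locking and change detection entirely. */
    return sqlite3_open_v2(
        "file:foo.sqlite?immutable=1",           /* URI filename */
        pDb,
        SQLITE_OPEN_READONLY | SQLITE_OPEN_URI,  /* read-only, enable URI parsing */
        0);
  }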

Regards
Dominique
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] transaction during reading on read-only database

2016-10-06 Thread Andrii Motsok
Hi,


My understanding is that any read from the database automatically starts a read 
transaction. The question: why does SQLite need to do this on a read-only database?


Regards,

Andrii
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] SQLite transaction time limit

2015-07-13 Thread Iryna Atamanova
Hi guys,

We have found the following behaviour of SQLite: if a transaction takes more
than 5 seconds, the database stays locked for some time. Could you please
help us and let us know how we can increase this time from 5 seconds to 30,
for example (what parameter or attribute in the source code can be used?).

Our project is implemented on .Net CF and we use SQLite version 3.8.

Best regards,
Ira Atamanova


[sqlite] SQLite transaction time limit

2015-07-08 Thread Paul
You are probably asking for 


8 July 2015, 15:24:18, by "Iryna Atamanova":

>   Hi guys,
> 
> I would very much appreciate your help.
> 
> We have found the following behaviour of SQLite: if a transaction takes more
> than 5 seconds, the database stays locked for some time. Could you please
> help us and let us know how we can increase this time from 5 seconds to 30,
> for example (what parameter or attribute in the source code can be used?).
> 
> Our project is implemented on .Net CF and we use SQLite version 3.8.
> 
> Best regards,
> Ira Atamanova


[sqlite] SQLite transaction time limit

2015-07-08 Thread Iryna Atamanova
Hi guys,

I would very much appreciate your help.

We have found the following behaviour of SQLite: if a transaction takes more
than 5 seconds, the database stays locked for some time. Could you please
help us and let us know how we can increase this time from 5 seconds to 30,
for example (what parameter or attribute in the source code can be used?).

Our project is implemented on .Net CF and we use SQLite version 3.8.

Best regards,
Ira Atamanova


[sqlite] SQLite transaction time limit

2015-07-08 Thread Kevin Benson
http://permalink.gmane.org/gmane.comp.db.sqlite.general/91411

 'In the connection string, please use "Default Timeout"...'
*SNIP*
 'Also, the "Default Timeout" value is measured in seconds, so it would
be...'
*SNIP*

-> 30 in your case.
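For completeness: at the SQLite C API level the closest equivalent knob is the
busy timeout (the .NET "Default Timeout" setting is a wrapper-level property;
the mapping below is an assumption, not taken from this thread). A minimal
sketch:

  #include <sqlite3.h>

  void raise_lock_timeout(sqlite3 *db){   /* db assumed to be an open connection */
    /* Wait up to 30 seconds for a competing lock to clear before a statement
    ** fails with SQLITE_BUSY, instead of giving up almost immediately. */
    sqlite3_busy_timeout(db, 30 * 1000);  /* argument is in milliseconds */
  }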

--
K e V i N

On Wed, Jul 8, 2015 at 4:44 AM, Iryna Atamanova wrote:

> Hi guys,
>
> I would very much appreciate your help.
>
> We have found the following behaviour of SQLite: if a transaction takes more
> than 5 seconds, the database stays locked for some time. Could you please
> help us and let us know how we can increase this time from 5 seconds to 30,
> for example (what parameter or attribute in the source code can be used?).
>
> Our project is implemented on .Net CF and we use SQLite version 3.8.
>
> Best regards,
> Ira Atamanova


Re: [sqlite] Transaction issues with WinRT build

2014-07-25 Thread pantelis.geo
Hello,

Could you please provide an example of your code?
I would like to see the order of the commands in the transaction before
you insert data into the database. It would be very helpful.

begin transaction
queries
commit/rollback

Thank you.



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] transaction in one thread and other thread also trying to write data

2014-05-13 Thread J Decker
It works best to have a connection per thread.


On Tue, May 13, 2014 at 6:12 AM, d b wrote:

> Hi all,
>
>
>My application is multithreaded. It maintains only one connection per
> application. The database is accessed by a single process only. ThreadA will
> do database write operations (bulk) in a transaction. ThreadB will do a
> single write operation without a transaction, but on the same connection
> pointer.
>
>Does the application need to synchronize the calls among threads using a
> synchronization technique (critical section/mutex), or will begin
> transaction and commit synchronize the calls between threads?
>
>Please suggest.
>
> Thanks,
> av.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] transaction in one thread and other thread also trying to write data

2014-05-13 Thread Igor Tandetnik

On 5/13/2014 9:12 AM, d b wrote:

My application is multithreaded. It maintains only one connection per
application. The database is accessed by a single process only. ThreadA will do
database write operations (bulk) in a transaction. ThreadB will do a single
write operation without a transaction, but on the same connection pointer.


A transaction is a property of a connection, not of a thread. If ThreadA 
started an explicit transaction on a connection, then writes by ThreadB 
on the same connection will also become part of that transaction.



Does the application need to synchronize the calls among threads using a
synchronization technique (critical section/mutex), or will begin transaction
and commit synchronize the calls between threads?


Individual sqlite3_* API calls on a given connection are serialized by 
SQLite: each call locks a mutex associated with the connection on entry, 
and unlocks it right before returning. If you need more synchronization 
than that, you would have to provide it yourself.
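As an illustration of "provide it yourself", here is a minimal sketch (POSIX
threads assumed, table name hypothetical) that keeps ThreadB's single write
from landing inside ThreadA's explicit transaction by serializing whole
transactions with an application-level mutex:

  #include <pthread.h>
  #include <sqlite3.h>

  static pthread_mutex_t g_tx_mutex = PTHREAD_MUTEX_INITIALIZER;

  /* ThreadA: bulk writes wrapped in one explicit transaction. */
  void bulk_write(sqlite3 *db){
    pthread_mutex_lock(&g_tx_mutex);          /* keep other writers out of this transaction */
    sqlite3_exec(db, "BEGIN IMMEDIATE", 0, 0, 0);
    /* ... many INSERT/UPDATE statements ... */
    sqlite3_exec(db, "COMMIT", 0, 0, 0);
    pthread_mutex_unlock(&g_tx_mutex);
  }

  /* ThreadB: a single autocommit write on the same connection. */
  void single_write(sqlite3 *db){
    pthread_mutex_lock(&g_tx_mutex);          /* otherwise it would join ThreadA's transaction */
    sqlite3_exec(db, "UPDATE t SET x = x + 1", 0, 0, 0);
    pthread_mutex_unlock(&g_tx_mutex);
  }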


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] transaction in one thread and other thread also trying to write data

2014-05-13 Thread Constantine Yannakopoulos
On Tue, May 13, 2014 at 4:12 PM, d b wrote:

>My application is multithreaded. It maintains only one connection per
> application. The database is accessed by a single process only. ThreadA will
> do database write operations (bulk) in a transaction. ThreadB will do a
> single write operation without a transaction, but on the same connection
> pointer.
>
>Does the application need to synchronize the calls among threads using a
> synchronization technique (critical section/mutex), or will begin
> transaction and commit synchronize the calls between threads?
>
>Please suggest.
>

Please read this: http://www.sqlite.org/threadsafe.html
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] transaction in one thread and other thread also trying to write data

2014-05-13 Thread d b
Hi all,


   My application is multithreaded. It maintains only one connection per
application. The database is accessed by a single process only. ThreadA will do
database write operations (bulk) in a transaction. ThreadB will do a single
write operation without a transaction, but on the same connection pointer.

   Does the application need to synchronize the calls among threads using a
synchronization technique (critical section/mutex), or will begin transaction
and commit synchronize the calls between threads?

   Please suggest.

Thanks,
av.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction behaviour

2014-01-23 Thread Joseph L. Casale
> SQLite transaction is a property of a database connection - there ain't
> no such thing as a transaction spanning multiple connections.

In hindsight, I completely overlooked that very point. Between then and
now it's all been refactored to do this correctly.

Thanks everybody!
jlc
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction behaviour

2014-01-23 Thread Igor Tandetnik

On 1/23/2014 5:12 PM, Joseph L. Casale wrote:

The bulk load doesn't have to be done this way, only one process loads data
and even a single connection can be used but that would segment the wrapper.


SQLite transaction is a property of a database connection - there ain't 
no such thing as a transaction spanning multiple connections. So, if you 
want to utilize SQLite transactions as your atomicity mechanism, then all 
changes will have to be made on a single connection. If for whatever 
reason you want to make changes atomically on multiple connections, then 
you face an unenviable task of designing your own atomicity mechanism - 
I don't believe SQLite provides anything to help you with this, and in 
fact will fight against you (only one connection can write to a given 
database file at any given time). Choose your poison.

--
Igor Tandetnik

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction behaviour

2014-01-23 Thread Joseph L. Casale
> Start the bulk load with "BEGIN IMMEDIATE;" or "BEGIN
> EXCLUSIVE;" and count errors.
> If there were no errors at the end of the bulk load, issue
> "COMMIT;", else issue "ROLLBACK".
> If the program crashes, sqlite will take care of the rollback
> automatically the next time any process opens the database.
> 
> http://sqlite.org/c3ref/get_autocommit.html can help you decide
> what mode you are in.
> 
> By the way, autocommit is not the best thing to do if you have
> related insert/update/delete statements in an application
> transaction. Related updates should be in the same, atomic,
> transaction.

Hi Kees and Igor,

I appreciate that the current implementation violates atomicity.

The issue lies in the fact that the individual rows for each table consume a
cursor and are wrapped in a context manager which starts its own transaction.

The wrapper that abstracts the user from raw SQL, connections and cursors
hands out individual connections, because the code consuming the data is
multiprocess based; to work around the concurrency issues I have no
choice.

The bulk load doesn't have to be done this way, only one process loads data
and even a single connection can be used but that would segment the wrapper.

So either that or temp tables / other schema changes might be needed in this
case. I'm looking at refactoring properly for the sake of correctness.

Thanks,
jlc
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction behaviour

2014-01-23 Thread Kees Nuyt
On Thu, 23 Jan 2014 16:30:57 +, "Joseph L. Casale" wrote:

>I have a scenario where I am writing a series of entries across several tables
>with relationships using Python and context managers. The sql is abstracted
>away from the user by a class providing all the needed methods. These each
>open implicit transactions; now it's been asked that during the bulk loading
>process we wrap it all up in a transaction so nothing will be committed in
>the event of some problem during the load.
>
>This presents a problem as far as I know. Aside from extending the schema
>with a table to indicate state that is updated upon completion, is there
>anything about transactions I am not seeing that would let me leave the bulk
>load uncommitted in the event of an issue in my case?

Start the bulk load with "BEGIN IMMEDIATE;" or "BEGIN
EXCLUSIVE;" and count errors.
If there were no errors at the end of the bulk load, issue
"COMMIT;", else issue "ROLLBACK".
If the program crashes, sqlite will take care of the rollback
automatically the next time any process opens the database.

http://sqlite.org/c3ref/get_autocommit.html can help you decide
what mode you are in.

By the way, autocommit is not the best thing to do if you have
related insert/update/delete statements in an application
transaction. Related updates should be in the same, atomic,
transaction.
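A minimal C sketch of the pattern described above (statements and table are
hypothetical), using sqlite3_get_autocommit() to check that the explicit
transaction is still open before committing:

  #include <sqlite3.h>

  int bulk_load(sqlite3 *db, const char **sql, int n){
    int i, errs = 0;

    if( sqlite3_exec(db, "BEGIN IMMEDIATE", 0, 0, 0)!=SQLITE_OK ) return SQLITE_ERROR;

    for(i=0; i<n; i++){
      if( sqlite3_exec(db, sql[i], 0, 0, 0)!=SQLITE_OK ) errs++;   /* count errors */
    }

    /* sqlite3_get_autocommit()==0 means an explicit transaction is still open. */
    if( errs==0 && sqlite3_get_autocommit(db)==0 ){
      return sqlite3_exec(db, "COMMIT", 0, 0, 0);
    }
    sqlite3_exec(db, "ROLLBACK", 0, 0, 0);   /* nothing from the load is persisted */
    return SQLITE_ERROR;
  }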

>Thanks,
>jlc 

-- 
Groet, Cordialement, Pozdrawiam, Regards,

Kees Nuyt

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction behaviour

2014-01-23 Thread Igor Tandetnik

On 1/23/2014 11:30 AM, Joseph L. Casale wrote:

is there anything
about transactions I am not seeing where I can accomplish leaving the bulk
load uncommitted in the event of an issue in my case?


I'm not sure I understand the nature of the difficulty. If you don't 
want to COMMIT a transaction, then you ROLLBACK it.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Transaction behaviour

2014-01-23 Thread Joseph L. Casale
I have a scenario where I am writing a series of entries across several tables
with relationships using Python and context managers. The sql is abstracted
away from the user by a class providing all the needed methods. These each
open implicit transactions; now it's been asked that during the bulk loading
process we wrap it all up in a transaction so nothing will be committed in
the event of some problem during the load.

This presents a problem as far as I know. Aside from extending the schema
with a table to indicate state that is updated upon completion, is there
anything about transactions I am not seeing that would let me leave the bulk
load uncommitted in the event of an issue in my case?

Thanks,
jlc 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction involving multiple attached databases

2013-11-26 Thread Hick Gunter
Not without reading the source code and patching it at a very deep level.

In our setup we use virtual tables almost exclusively and these do not support 
transactions. There are also dedicated writers for each set of tables, so table 
locking outside of SQLite is sufficient. The only changes that require SQLite 
to lock the database file(s) are metadata changes (CREATE VIRTUAL TABLE and 
DROP TABLE). I have added a second bitmap to the code generator that allows me 
to suppress the Cookie opcode unless a VCreate or VDestroy opcode is generated 
on a specific database file.

There are some caveats and this is not quite ACID but sufficient for our 
application, which uses SQL as a universal query interface for application 
status, report generation and configuration updates.

-Original Message-
From: Joshua Grauman [mailto:jnf...@grauman.com]
Sent: Tuesday, 26 November 2013 20:25
To: sqlite-users@sqlite.org
Subject: [sqlite] Transaction involving multiple attached databases

Hello all,

If I have multiple databases attached and then do a:
BEGIN EXCLUSIVE

I assume all the sqlite3 tables involved get locked?
Is there a way to lock only one of the attached tables?

Thanks.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


--
 Gunter Hick
Software Engineer
Scientific Games International GmbH
Klitschgasse 2 – 4, A - 1130 Vienna, Austria
FN 157284 a, HG Wien
Tel: +43 1 80100 0
E-Mail: h...@scigames.at

This e-mail is confidential and may well also be legally privileged. If you 
have received it in error, you are on notice as to its status and accordingly 
please notify us immediately by reply e-mail and then delete this message from 
your system. Please do not copy it or use it for any purposes, or disclose its 
contents to any person as to do so could be a breach of confidence. Thank you 
for your cooperation.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction involving multiple attached databases

2013-11-26 Thread Peter Aronson
And you can't execute it at all if any of the attached databases are read-only.
 
Peter


>On 11/26/2013 2:24 PM, Joshua Grauman wrote:
>> If I have multiple databases attached and then do a:
>> BEGIN EXCLUSIVE
>> 
>> I assume all the sqlite3 tables involved get locked?
>
>All the database files are locked.
>
>> Is there a way to lock only one of the attached tables?
>
>I'm pretty sure there isn't, short of creating a connection directly to a 
>single database and not attaching any others to it.
>-- Igor Tandetnik
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction involving multiple attached databases

2013-11-26 Thread Igor Tandetnik

On 11/26/2013 2:24 PM, Joshua Grauman wrote:

If I have multiple databases attached and then do a:
BEGIN EXCLUSIVE

I assume all the sqlite3 tables involved get locked?


All the database files are locked.


Is there a way to lock only one of the attached tables?


I'm pretty sure there isn't, short of creating a connection directly to 
a single database and not attaching any others to it.

--
Igor Tandetnik

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Transaction involving multiple attached databases

2013-11-26 Thread Joshua Grauman

Hello all,

If I have multiple databases attached and then do a:
BEGIN EXCLUSIVE

I assume all the sqlite3 tables involved get locked?
Is there a way to lock only one of the attached tables?

Thanks.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Transaction Handling

2013-07-31 Thread techi eth
Is it possible to use SQLite3's transaction capability based on the actual
return value of each individual query executed inside the transaction?

 Example:

BEGIN

SELECT for reading the database (here I will get the data in my read callback
function; based on the data status I may initiate another statement, like the
UPDATE below)

UPDATE

COMMIT

Note: This is implemented by C wrapper functions handling each individual
operation, like Select, Update, Drop or Insert.
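A minimal C sketch of the flow described above (table, column and values are
hypothetical); the UPDATE is only issued if the preceding SELECT, inside the
same transaction, says it should be:

  #include <sqlite3.h>

  int conditional_update(sqlite3 *db){
    sqlite3_stmt *stmt;
    int status = -1;

    sqlite3_exec(db, "BEGIN", 0, 0, 0);

    sqlite3_prepare_v2(db, "SELECT status FROM job WHERE id=1", -1, &stmt, 0);
    if( sqlite3_step(stmt)==SQLITE_ROW ){
      status = sqlite3_column_int(stmt, 0);    /* inspect the data just read */
    }
    sqlite3_finalize(stmt);

    if( status==0 ){
      /* only issue the UPDATE when the SELECT said so */
      sqlite3_exec(db, "UPDATE job SET status=1 WHERE id=1", 0, 0, 0);
    }

    return sqlite3_exec(db, "COMMIT", 0, 0, 0);
  }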

Cheers-

Techi
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction

2013-06-08 Thread Simon Slavin

On 8 Jun 2013, at 5:27pm, RSmith wrote:

> Thank you very much for taking the time - this did answer my question.

You're welcome.

> If I may one more question related to this:  Let's say I am using 
> transactions as validation stops on a long list of inputs - So
> that the main program does some checking along the way, I have two choices, I 
> can start with a BEGIN and do COMMIT at the very end,
> and possibly ROLLBACK everything once I get an error and start over. 
> Alternatively (and this is obviously what I want to do), I can
> do BEGIN, then check say 1000 records, then COMMIT, then BEGIN, check 1000, 
> COMMIT, etc. This way if I get an error, I need only
> rollback a little bit.
> 
> The question is, how high is the overhead on BEGIN-COMMITs?

I don't know the really detailed answer to this.  If you care down to the 
millisecond then the answer depends on which journaling mode you're using, and 
on technical details about your operating system and file system.  The problem 
is that the actual time varies tremendously depending on how cheap and nasty 
your hard disk is, whether your databus is choked and other stuff we can't 
guess.

Frequent advice on this list is to assume that you will want to batch your 
transactions somehow, and put your 1000 into a variable you can change.  Then 
try your app with several different values for the variable.  If I understand 
your description, then setting this variable high enough would be equivalent to 
your other scenario: just doing everything in one huge batch.  You don't make 
complicated changes each time, you just change the number in one line of code.
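A minimal sketch of that "batch size in a variable" idea (the per-record
helper is hypothetical):

  #include <sqlite3.h>

  void write_one_record(sqlite3 *db, int i);   /* hypothetical: does one INSERT */

  /* Commit every `batch` records; tune `batch` experimentally as suggested. */
  void load_records(sqlite3 *db, int nrec, int batch){
    int i;
    sqlite3_exec(db, "BEGIN", 0, 0, 0);
    for(i=0; i<nrec; i++){
      write_one_record(db, i);
      if( (i+1)%batch==0 ){
        sqlite3_exec(db, "COMMIT", 0, 0, 0);   /* end the current batch... */
        sqlite3_exec(db, "BEGIN", 0, 0, 0);    /* ...and start the next one */
      }
    }
    sqlite3_exec(db, "COMMIT", 0, 0, 0);       /* commit any partial final batch */
  }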

The opposite advice is this: version 1 of your code should ignore timing and 
just do whatever reflects the way your data relates to real life.  Just think 
about what your data really 'means' and use transactions and SAVEPOINTs that 
reflect that.  Then run your app.  A second or two either way isn't anything to 
care about.  Only if your app is /unacceptably/ slow do you then have to start 
worrying about writing contra-intuitive code.

And with those two pieces of advice, I hope I have covered all bases in an 
unbiassed manner.  I hope you can forgive me for the fact that they contradict 
one-another.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction

2013-06-08 Thread RSmith

Apologies, this went to wrong mailbox first...
--

Thank you very much for taking the time - this did answer my question.

If I may ask one more question related to this:  Let's say I am using transactions 
as validation stops on a long list of inputs - So
that the main program does some checking along the way, I have two choices, I 
can start with a BEGIN and do COMMIT at the very end,
and possibly ROLLBACK everything once I get an error and start over. 
Alternatively (and this is obviously what I want to do), I can
do BEGIN, then check say 1000 records, then COMMIT, then BEGIN, check 1000, 
COMMIT, etc. This way if I get an error, I need only
rollback a little bit.

The question is, how high is the overhead on BEGIN-COMMITs?
In other words... Should I try to maybe change the program to do all checking 
in one pass, then write to the DB in another pass?
(which is better if Transaction overhead is high, but worse if Transaction 
overhead is negligible).

Thanks again!

On 2013/06/08 17:16, Simon Slavin wrote:

 On 8 Jun 2013, at 3:35pm, RSmith wrote:
 These two pieces of code do not do the same thing.

 (A) talks about three sets of changes: one overall set, which includes two 
different subsets.  Consider you were updating your league tables after a day 
in which two tennis matches were played.  You need to make a change to details 
about match A, then a change to details about match B, and you also need to 
make changes to your league positions.  The change to details about  match A 
have nothing to do with the change to details about match B.  So you have 
released SAVEPOINT A before you start SAVEPOINT B.

 (B) talks about a nested set of changes.  You do not want the change to 
SAVEPOINT A to happen unless the change to SAVEPOINT B worked.  Here, suppose 
you are recording the result of a race.  You have to modify both the details of 
the race and the individual stats of the winner.  Here, if there is a problem 
with modifying the result of the race (SAVEPOINT B) then you need to make sure 
that you do not modify the stats of the individual runner (SAVEPOINT A)

 As you can see, the two options are suitable for different situations.  You 
should use whatever structure reflects the changes you are making.

 But your basic question was about how long these things took.  Assuming that 
you do reach RELEASE (and not ROLLBACK) on both, they will take near-enough the 
same time.  However, the BEGIN/COMMIT in (B) is not needed because all the 
changes made are in at least one SAVEPOINT.  Leaving it out may speed up your 
program slightly.  The outer SAVEPOINT has exactly the same effect as having a 
BEGIN/COMMIT.

 Simon.



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction

2013-06-08 Thread Simon Slavin

On 8 Jun 2013, at 3:35pm, RSmith wrote:

> A) so to have:
> BEGIN
>  SAVEPOINT A
>  ...
>  RELEASE A
>  SAVEPOINT B
>  ...
>  RELEASE B
>  ...
> COMMIT;
> 
> B) or to have:
> BEGIN
>  SAVEPOINT A
>   ...
>SAVEPOINT B
>  ...
>RELEASE B
>  ...
>  RELEASE A
> COMMIT;
> 
> I realise both ways have code efficiency issues, but just is there a real 
> execution speed difference?

These two pieces of code do not do the same thing.

(A) talks about three sets of changes: one overall set, which includes two 
different subsets.  Consider you were updating your league tables after a day 
in which two tennis matches were played.  You need to make a change to details 
about match A, then a change to details about match B, and you also need to 
make changes to your league positions.  The change to details about  match A 
have nothing to do with the change to details about match B.  So you have 
released SAVEPOINT A before you start SAVEPOINT B.

(B) talks about a nested set of changes.  You do not want the change to 
SAVEPOINT A to happen unless the change to SAVEPOINT B worked.  Here, suppose 
you are recording the result of a race.  You have to modify both the details of 
the race and the individual stats of the winner.  Here, if there is a problem 
with modifying the result of the race (SAVEPOINT B) then you need to make sure 
that you do not modify the stats of the individual runner (SAVEPOINT A)

As you can see, the two options are suitable for different situations.  You 
should use whatever structure reflects the changes you are making.

But your basic question was about how long these things took.  Assuming that 
you do reach RELEASE (and not ROLLBACK) on both, they will take near-enough the 
same time.  However, the BEGIN/COMMIT in (B) is not needed because all the 
changes made are in at least one SAVEPOINT.  Leaving it out may speed up your 
program slightly.  The outer SAVEPOINT has exactly the same effect as having a 
BEGIN/COMMIT.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction

2013-06-08 Thread RSmith

Ugh, of course, stupid mistake in the explanation -
Of course it must be SAVEPOINT, not BEGIN again.

A) so to have:
BEGIN
  SAVEPOINT A
  ...
  RELEASE A
  SAVEPOINT B
  ...
  RELEASE B
  ...
COMMIT;

B) or to have:
BEGIN
  SAVEPOINT A
   ...
SAVEPOINT B
  ...
RELEASE B
  ...
  RELEASE A
COMMIT;

I realise both ways have code efficiency issues, but just is there a real 
execution speed difference?


On 2013/06/08 16:20, Igor Tandetnik wrote:

On 6/8/2013 10:17 AM, RSmith wrote:

Could someone please shortly discuss performance considerations for
having nested Transactions vs. sequential transactions in SQLite.


There ain't no such thing as a nested transaction. The second BEGIN statement 
will simply fail.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction

2013-06-08 Thread Igor Tandetnik

On 6/8/2013 10:17 AM, RSmith wrote:

Could someone please shortly discuss performance considerations for
having nested Transactions vs. sequential transactions in SQLite.


There ain't no such thing as a nested transaction. The second BEGIN 
statement will simply fail.
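For illustration, what that failure looks like at the C API (a minimal sketch;
the quoted error text is what recent versions report):

  #include <stdio.h>
  #include <sqlite3.h>

  void show_nested_begin_failure(sqlite3 *db){   /* db assumed to be open */
    char *err = 0;
    sqlite3_exec(db, "BEGIN", 0, 0, 0);
    if( sqlite3_exec(db, "BEGIN", 0, 0, &err)!=SQLITE_OK ){
      /* typically: "cannot start a transaction within a transaction" */
      printf("second BEGIN failed: %s\n", err);
      sqlite3_free(err);
    }
    sqlite3_exec(db, "COMMIT", 0, 0, 0);         /* the first transaction is still open */
  }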

--
Igor Tandetnik

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Transaction

2013-06-08 Thread RSmith

Hi all,

I've read some of the standard help on sqlite.org but have not studied the code at all, so turning to the experts for some advice 
(there is no urgency, I would just really like some insight).


Could someone please briefly discuss performance considerations for having nested transactions vs. sequential transactions in 
SQLite. I realise that nested ones are safer, but I'm always looking to find performance increases. Now, to my mind, in the following 
two examples safety is not an issue:

A)
  Begin
    Begin
      ...
    Commit;
    Begin
      ...
    Commit;
    Begin
      ...
    Commit;
  Commit;

B)
  Begin
    Begin
      Begin
        Begin
          ...
        Commit;
      Commit;
    Commit;
  Commit;

- I think that B will be faster, or at least, that's what I seem to glean from 
the help texts - but I'm not very sure.
Is B faster or A faster? And if so, are there circumstances in which this 
might differ?
Are there any other considerations to keep in mind?

Thank you very much and have a great day!
RS

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction question

2013-04-26 Thread Igor Tandetnik

On 4/26/2013 7:18 PM, Igor Korot wrote:

If I'm inside a transaction and insert a record, will I see it right
after executing sqlite3_step(), or do I have to execute a "COMMIT" statement to
see it?


The connection that executed sqlite3_step() would see its own changes 
right away. Other connections would only see changes after they've been 
committed. This is the "I" in ACID.
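A minimal C sketch of that behaviour (table hypothetical): the connection that
did the INSERT already sees the new row inside its own transaction; a second
connection would not see it until after the COMMIT.

  #include <sqlite3.h>

  void insert_and_read_back(sqlite3 *db){        /* db assumed to be open */
    sqlite3_stmt *stmt;

    sqlite3_exec(db, "BEGIN", 0, 0, 0);
    sqlite3_exec(db, "INSERT INTO t(x) VALUES(1)", 0, 0, 0);

    /* On this same connection the uncommitted row is already visible: */
    sqlite3_prepare_v2(db, "SELECT count(*) FROM t", -1, &stmt, 0);
    sqlite3_step(stmt);                          /* the count includes the new row */
    sqlite3_finalize(stmt);

    sqlite3_exec(db, "COMMIT", 0, 0, 0);         /* only now do other connections see it */
  }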

--
Igor Tandetnik

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Transaction question

2013-04-26 Thread Igor Korot
Hi, ALL,
If I'm inside a transaction and insert a record, will I see it right
after executing sqlite3_step(), or do I have to execute a "COMMIT" statement to
see it?

Thank you.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] transaction commit is successful despite I/O error

2012-10-25 Thread Pavel Ivanov
I guess this can stand as one more argument for not using SQLite over
NFS. Another argument could be found here:
http://www.sqlite.org/lockingv3.html#how_to_corrupt.

Pavel


On Wed, Oct 24, 2012 at 2:01 PM, Efim Dyadkin wrote:
> I was probably not quite specific. So I would like to rephrase the problem 
> and give more details.
>
> I run a default configuration of Sqlite on Linux and I don't use WAL or 
> anything that changes transaction behavior. A database is located on a remote 
> file server and accessed via NFS. There is a single process only that 
> accesses the database. The problem is when auto-mounter is stopped (on a 
> machine where the process is running) in the middle of database transaction, 
> commit returns successfully though it leaves a hot journal on disk because it 
> is not seen for unlink (ENOENT)!!!
>
> Note, when auto-mounter is down, you still can operate on a database file and 
> its journal if they were opened before the stop.
>
> The problem can be reproduced very easily with an Sqlite command shell:
> 1) Run the shell
> 2) Begin exclusive transaction
> 3) Insert a few records
> 4) Stop auto-mounter
> 5) Optionally insert a few more records (stopped auto-mounter does not affect 
> these inserts because database and journal are opened)
> 6) Commit (no errors!!!)
> 7) Quit the shell
>
> If you then restart auto-mounter and check the database directory you will 
> find a hot journal! If you reopen the database the transaction will be played 
> back and recently commited changes will be gone!
>
> Thanks for your feedbacks,
> Efim
>
>
>
> 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] transaction commit is successful despite I/O error

2012-10-25 Thread Efim Dyadkin
I was probably not quite specific, so I would like to rephrase the problem and 
give more details.

I run a default configuration of SQLite on Linux and I don't use WAL or 
anything that changes transaction behaviour. The database is located on a remote 
file server and accessed via NFS. Only a single process accesses 
the database. The problem is that when the auto-mounter is stopped (on the machine where 
the process is running) in the middle of a database transaction, the commit returns 
successfully even though it leaves a hot journal on disk, because the journal is 
not visible to unlink (ENOENT)!!!

Note that when the auto-mounter is down, you can still operate on a database file and 
its journal if they were opened before the stop.

The problem can be reproduced very easily with an Sqlite command shell:
1) Run the shell
2) Begin exclusive transaction
3) Insert a few records
4) Stop auto-mounter
5) Optionally insert a few more records (stopped auto-mounter does not affect 
these inserts because database and journal are opened)
6) Commit (no errors!!!)
7) Quit the shell

If you then restart the auto-mounter and check the database directory you will find 
a hot journal! If you reopen the database the journal will be played back 
and the recently committed changes will be gone!

Thanks for your feedbacks,
Efim




___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction inside transaction

2012-10-22 Thread Clemens Ladisch
Igor Korot wrote:
> On Sun, Oct 21, 2012 at 11:01 PM, Darren Duncan wrote:
>> If you want a rollback on the inner to rollback the outer unconditionally,
>> you never needed a nested transaction in the first place, but rather just
>> the one outer transaction 1,
>
> 1. Query all tables to grab the data according to table1_master.id
> 2. Present the data on screen and let the user edit the data. Delete
> the data retrieved in step 1.
> 3. Grab all the data in the appropriate class.
> 4. Start transaction 1 to insert the data.
> 5. Start transaction 2 to delete the data from tableN_slave
> 6. If 5 fails, rollback transaction 2, then transaction 1 (i.e. don't
> go with tr.1)
> 7. If 5 succeeds continue with transaction 1.
>
> What would be the best approach from sqlite point of view?

As Darren said, just do nothing for the inner transaction:
1. Query all tables to grab the data according to table1_master.id
2. Present the data on screen and let the user edit the data. Delete
   the data retrieved in step 1.
3. Grab all the data in the appropriate class.
4. Start transaction 1 to insert the data.
5. delete the data from tableN_slave
6. If 5 fails, rollback transaction 1 (i.e. don't go with tr.1)
7. If 5 succeeds continue with transaction 1.
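A minimal C sketch of steps 4-7 as that single transaction (the statements,
extra column and values are hypothetical):

  #include <sqlite3.h>

  int save_edit(sqlite3 *db){
    if( sqlite3_exec(db, "BEGIN", 0, 0, 0)!=SQLITE_OK ) return SQLITE_ERROR;

    if( sqlite3_exec(db, "INSERT INTO table2_slave(table1_master_id, val) "
                         "VALUES(1, 'new')", 0, 0, 0)!=SQLITE_OK        /* step 4 */
     || sqlite3_exec(db, "DELETE FROM table2_slave WHERE slave_id=2",
                     0, 0, 0)!=SQLITE_OK ){                             /* step 5 */
      sqlite3_exec(db, "ROLLBACK", 0, 0, 0);    /* step 6: nothing is kept */
      return SQLITE_ERROR;
    }
    return sqlite3_exec(db, "COMMIT", 0, 0, 0); /* step 7 */
  }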


Regards,
Clemens
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction inside transaction

2012-10-22 Thread Igor Korot
Darren,

On Sun, Oct 21, 2012 at 11:01 PM, Darren Duncan wrote:
> Igor Korot wrote:
>>
>> Hi, ALL,
>> Is it possible to have transaction inside transaction? Will it be
>> handled correctly?
>>
>> What I mean is: create transaction 1 by issuing "BEGIN", create
>> transaction 2 by issuing "BEGIN".
>> Close transaction 2 by issuing either "COMMIT" or "ROLLBACK".
>> If "ROLLBACK" is issued, issue "ROLLBACK" on transaction 1.
>> If "COMMIT" is issued, continue transaction 1.
>>
>> Thank you.
>
>
> Didn't you mean to say, "regardless of how (inner) transaction 2 ends,
> continue transaction 1"?
>
> Being able to rollback part of a main transaction without rolling back the
> whole thing is the whole point of nested transactions.  (Likewise for
> "savepoints", which are different syntax but the same functionality.)
>
> If you want a rollback on the inner to rollback the outer unconditionally,
> you never needed a nested transaction in the first place, but rather just
> the one outer transaction 1, or you might have wanted an autonomous
> transaction, which is different, depending on what you want.

OK, consider the following scenario:

table1_master( id integer primary key,  )
table2_slave( slave_id integer primary key, table1_master_id foreign
key,  )
table3_slave( slave_id integer primary key, table1_master_id foreign
key,  )

With an insert everything is OK. I have one transaction starting from
bottom up to satisfy foreign key constraints.
But if I want to edit the data my algorithm is:

1. Query all tables to grab the data according to table1_master.id
2. Present the data on screen and let the user edit the data. Delete
the data retrieved in step 1.
3. Grab all the data in the appropriate class.
4. Start transaction 1 to insert the data.
5. Start transaction 2 to delete the data from tableN_slave
6. If 5 fails, rollback transaction 2, then transaction 1 (i.e. don't
go with tr.1)
7. If 5 succeeds continue with transaction 1.

What would be the best approach from sqlite point of view?

Thank you.

>
> -- Darren Duncan
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction inside transaction

2012-10-22 Thread Darren Duncan

Igor Korot wrote:

Hi, ALL,
Is it possible to have transaction inside transaction? Will it be
handled correctly?

What I mean is: create transaction 1 by issuing "BEGIN", create
transaction 2 by issuing "BEGIN".
Close transaction 2 by issuing either "COMMIT" or "ROLLBACK".
If "ROLLBACK" is issued, issue "ROLLBACK" on transaction 1.
If "COMMIT" is issued, continue transaction 1.

Thank you.


Didn't you mean to say, "regardless of how (inner) transaction 2 ends, continue 
transaction 1"?


Being able to rollback part of a main transaction without rolling back the whole 
thing is the whole point of nested transactions.  (Likewise for "savepoints", 
which are different syntax but the same functionality.)


If you want a rollback on the inner to rollback the outer unconditionally, you 
never needed a nested transaction in the first place, but rather just the one 
outer transaction 1, or you might have wanted an autonomous transaction, which 
is different, depending on what you want.


-- Darren Duncan

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction inside transaction

2012-10-21 Thread Igor Korot
Dan,

On Sun, Oct 21, 2012 at 9:40 PM, Dan Kennedy wrote:
> On 10/22/2012 11:34 AM, Igor Korot wrote:
>>
>> Hi, ALL,
>> Is it possible to have transaction inside transaction? Will it be
>> handled correctly?
>>
>> What I mean is: create transaction 1 by issuing "BEGIN", create
>> transaction 2 by issuing "BEGIN".
>> Close transaction 2 by issuing either "COMMIT" or "ROLLBACK".
>> If "ROLLBACK" is issued, issue "ROLLBACK" on transaction 1.
>> If "COMMIT" is issued, continue transaction 1.
>
>
> See here:
>
>   http://www.sqlite.org/lang_savepoint.html
>
> They don't work exactly as you describe above, but might be
> close enough.

Looks like this is what I need.
However I am not sure why you said "They don't work exactly as described..."

Thank you.

>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction inside transaction

2012-10-21 Thread Dan Kennedy

On 10/22/2012 11:34 AM, Igor Korot wrote:

Hi, ALL,
Is it possible to have transaction inside transaction? Will it be
handled correctly?

What I mean is: create transaction 1 by issuing "BEGIN", create
transaction 2 by issuing "BEGIN".
Close transaction 2 by issuing either "COMMIT" or "ROLLBACK".
If "ROLLBACK" is issued, issue "ROLLBACK" on transaction 1.
If "COMMIT" is issued, continue transaction 1.


See here:

  http://www.sqlite.org/lang_savepoint.html

They don't work exactly as you describe above, but might be
close enough.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Transaction inside transaction

2012-10-21 Thread Igor Korot
Hi, ALL,
Is it possible to have transaction inside transaction? Will it be
handled correctly?

What I mean is: create transaction 1 by issuing "BEGIN", create
transaction 2 by issuing "BEGIN".
Close transaction 2 by issuing either "COMMIT" or "ROLLBACK".
If "ROLLBACK" is issued, issue "ROLLBACK" on transaction 1.
If "COMMIT" is issued, continue transaction 1.

Thank you.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] transaction commit is successful despite I/O error

2012-10-20 Thread Ryan Johnson

On 19/10/2012 4:40 PM, Efim Dyadkin wrote:


Hi Richard,

You are right about the purpose of unlink but it is out of context. 
There are a transaction in progress and hot journal on disk. If 
journal can't be deleted by the end of transaction, the transaction 
can't be considered to be successfully finished. But current 
implementation of Sqlite considers this transaction as successfully 
committed!


The problem with file system going down during a transaction is a real 
problem that occurred to our customers. For them it was absolutely a 
disaster because they lost data they had successfully saved.


I guess unixDelete must propagate ENOENT error when it runs in the 
context of transaction. For applications it will be like "transaction 
failed due to IO error".


For a test I tried to remove "errno != ENOENT" unconditionally from 
unixDelete but immediately encountered a problem. When I tried to 
execute any command on an empty database e.g. "create table a (a1 
text)" I got I/O error. This is because Sqlite tried to remove 
non-existsing "--wal" file


I think you're on the right track: pagerOpenWalIfPresent() attempts to 
delete a file that may not exist (the comments say so!), so it should 
have been expecting the occasional ENOENT.


Ryan

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] transaction commit is successful despite I/O error

2012-10-20 Thread Dan Kennedy

On 10/20/2012 09:14 PM, Simon Slavin wrote:


On 19 Oct 2012, at 9:40pm, Efim Dyadkin
wrote:


You are right about the purpose of unlink but it is out of context.
There are a transaction in progress and hot journal on disk. If
journal can't be deleted by the end of transaction, the transaction
can't be considered to be successfully finished.


This is not correct.  SQLite does not close the journal file at the
end of every transaction unless you have only a single connection to
the database and the journal mode set to DELETE, and that is not
common these days because creating and deleting files is so slow.
The times you should see a journal file deleted is when all
connections to the database have been closed: you've done a _close()
for every _open().


In WAL mode, the WAL file is held open by a connection until it
is closed. And only deleted after the number of connections to the
database drops to zero (the last connection usually unlinks it as
part of the sqlite3_close() call).

If you're running with journal_mode=DELETE (the default), the journal
file is always closed and unlinked at the end of each write
transaction.

If using journal_mode=TRUNCATE or journal_mode=PERSIST on unix, then
the journal file is always closed at the end of a write transaction.
However, on win32 (or any other system where it is impossible to
delete a file while the current or some other process has it open),
the journal file file descriptor may be held open between transactions.



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] transaction commit is successful despite I/O error

2012-10-20 Thread Simon Slavin

On 19 Oct 2012, at 9:40pm, Efim Dyadkin wrote:

> You are right about the purpose of unlink but it is out of context. There are 
> a transaction in progress and hot journal on disk. If journal can't be 
> deleted by the end of transaction, the transaction can't be considered to be 
> successfully finished.

This is not correct.  SQLite does not close the journal file at the end of 
every transaction unless you have only a single connection to the database and 
the journal mode set to DELETE, and that is not common these days because 
creating and deleting files is so slow.  The times you should see a journal 
file deleted is when all connections to the database have been closed: you've 
done a _close() for every _open().



You can ignore things like journal files, which file data is stored in, and 
where in the file it's stored.  SQLite should be creating and deleting its own 
files as it sees fit, and if you create, edit or delete any of them yourself 
then you can consider your database to be potentially wrong or corrupt.  Don't 
mess with any SQLite files.

>  But current implementation of Sqlite considers this transaction as 
> successfully committed!

This is how to know that a transaction is finished: you execute the SQL command 
"COMMIT" and the function call returns with the code SQLITE_OK rather than an 
error.  If this has happened, the transaction is successfully committed.  
Whether you happen to notice files on disk being created, resized, deleted or 
deletable is useful for monitoring whether database connections are open, but 
not to individual transactions.

> The problem with file system going down during a transaction is a real 
> problem that occurred to our customers. For them it was absolutely a disaster 
> because they lost data they had successfully saved.

If the file system goes down /during/ a transaction, your application did not 
save data.  Your application's user interface should not be indicating to the 
user that data is saved during a transaction.  It should do that only once the 
"COMMIT" is reported successful.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] transaction commit is successful despite I/O error

2012-10-20 Thread Efim Dyadkin
Hi Richard,

You are right about the purpose of unlink, but it is out of context. There is a 
transaction in progress and a hot journal on disk. If the journal can't be deleted by 
the end of the transaction, the transaction can't be considered successfully 
finished. But the current implementation of SQLite considers this transaction 
successfully committed!

The problem with file system going down during a transaction is a real problem 
that occurred to our customers. For them it was absolutely a disaster because 
they lost data they had successfully saved.

I guess unixDelete must propagate the ENOENT error when it runs in the context of a 
transaction. For applications it will look like "transaction failed due to an I/O 
error".

For a test I tried to remove "errno != ENOENT" unconditionally from unixDelete 
but immediately encountered a problem. When I tried to execute any command on 
an empty database, e.g. "create table a (a1 text)", I got an I/O error. This is 
because SQLite tried to remove the non-existing "-wal" file, with this stack trace:

unixDelete,  FP=7fff44f133e0
sqlite3OsDelete, FP=7fff44f13420
pagerOpenWalIfPresent,   FP=7fff44f13460
sqlite3PagerSharedLock,  FP=7fff44f13520
lockBtree,   FP=7fff44f135d0
sqlite3BtreeBeginTrans,  FP=7fff44f13650
sqlite3InitOne,  FP=7fff44f13770
sqlite3Init, FP=7fff44f137b0
sqlite3ReadSchema,   FP=7fff44f137e0
sqlite3StartTable,   FP=7fff44f13980
yy_reduce,   FP=7fff44f140f0
sqlite3Parser,   FP=7fff44f14170
sqlite3RunParser,FP=7fff44f14220
sqlite3Prepare,  FP=7fff44f14300
sqlite3LockAndPrepare,   FP=7fff44f143a0
sqlite3_prepare, FP=7fff44f14410
sqlite3_exec,FP=7fff44f14520

Thanks,
Efim Dyadkin

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] transaction commit is successful despite I/O error

2012-10-18 Thread Ryan Johnson

On 18/10/2012 8:45 AM, Richard Hipp wrote:

On Wed, Oct 17, 2012 at 4:11 PM, Efim Dyadkin wrote:


Hi,

I am testing loss of data in Sqlite database correlated to auto-mounter
malfunction. I am running Sqlite on Linux and my database file is located
on network disk. For a test I stop the auto-mounter right before
transaction is committed. Surprisingly commit succeeds without any error
although hot journal remains on disk. When I get auto-mounter back and open
my database again the transaction is rolled back.

Apparently Sqlite cannot remove the journal due to unmounted path but it
ignores this error because Linux classifies it as ENOENT and unixDelete
function disregards it:

if( unlink(zPath)==(-1) && errno!=ENOENT ){
  return unixLogError(SQLITE_IOERR_DELETE,
                      "unlink", zPath);
}

Can somebody please explain why "errno!=ENOENT" is required in here?


The purpose of unlink() is to make it so that the file does not exist.
ENOENT indicates that the file does not exist, and so the purpose of the
unlink() call has been fulfilled.

Suppose SQLite did treat ENOENT as an error.  What could it do about it?
It cannot roll the transaction back because the rollback journal (and
indeed the entire database) has disappeared.  I guess we could return
SQLITE_CORRUPT.  Would that somehow be more useful to the application?
Is there some plausible scenario for which an active journal file gone 
AWOL does *not* indicate a serious problem? To me it indicates that Bad 
Things are going on that sqlite is ill-equipped to deal with, so the 
best it can do is avoid causing any collateral damage by attempting to 
continue normally. Especially if the filesystem went down: it's not like 
any future transaction would succeed anyway...


If a less heavy-handed approach is desirable, perhaps a failed unlink() 
call should trigger an fstat() or seek() on the offending file 
descriptor; that might distinguish whether the file itself is 
inaccessible (as in OP's case) or whether it's just unreachable at the 
expected path (e.g. due to interference from an external agent).


I would still favor a fail-fast approach that returns a scary error 
message, though, the same kind that would be returned if a write failed.


$0.02
Ryan

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] transaction commit is successful despite I/O error

2012-10-18 Thread Richard Hipp
On Wed, Oct 17, 2012 at 4:11 PM, Efim Dyadkin wrote:

> Hi,
>
> I am testing loss of data in Sqlite database correlated to auto-mounter
> malfunction. I am running Sqlite on Linux and my database file is located
> on network disk. For a test I stop the auto-mounter right before
> transaction is committed. Surprisingly commit succeeds without any error
> although hot journal remains on disk. When I get auto-mounter back and open
> my database again the transaction is rolled back.
>
> Apparently Sqlite cannot remove the journal due to unmounted path but it
> ignores this error because Linux classifies it as ENOENT and unixDelete
> function disregards it:
>
> if( unlink(zPath)==(-1) && errno!=ENOENT ){
>   return unixLogError(SQLITE_IOERR_DELETE,
>                       "unlink", zPath);
> }
>
> Can somebody please explain why "errno!=ENOENT" is required in here?
>

The purpose of unlink() is to make it so that the file does not exist.
ENOENT indicates that the file does not exist, and so the purpose of the
unlink() call has been fulfilled.

Suppose SQLite did treat ENOENT as an error.  What could it do about it?
It cannot roll the transaction back because the rollback journal (and
indeed the entire database) has disappeared.  I guess we could return
SQLITE_CORRUPT.  Would that somehow be more useful to the application?

-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] transaction commit is successful despite I/O error

2012-10-18 Thread Efim Dyadkin
Hi,

I am testing loss of data in an SQLite database related to auto-mounter 
malfunction. I am running SQLite on Linux and my database file is located on a 
network disk. For a test I stop the auto-mounter right before a transaction is 
committed. Surprisingly, the commit succeeds without any error although a hot journal 
remains on disk. When I get the auto-mounter back and open my database again, the 
transaction is rolled back.

Apparently SQLite cannot remove the journal due to the unmounted path, but it 
ignores this error because Linux classifies it as ENOENT and the unixDelete 
function disregards it:

if( unlink(zPath)==(-1) && errno!=ENOENT ){
  return unixLogError(SQLITE_IOERR_DELETE,
                      "unlink", zPath);
}

Can somebody please explain why "errno!=ENOENT" is required in here?

Thanks for any help,
Efim Dyadkin


 This e-mail, including any attached files, may contain confidential and 
privileged information for the sole use of the intended recipient. Any review, 
use, distribution, or disclosure by others is strictly prohibited. If you are 
not the intended recipient (or authorized to receive information for the 
intended recipient), please contact the sender by reply e-mail and delete all 
copies of this message.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction issues with WinRT build

2012-08-21 Thread Lukas
Simon Slavin  writes:

> Is there a chance that you simply have an UPDATE command with a bad syntax ?
> You say it works fine on one implementation of 3.7.13 but not the other ?
> I'm mystified by what you're seeing but I hope someone else can help.

Thanks Simon,

the update statements are fine. It was an issue with the WinRT version using the
wrong temp directory by default. A workaround is to change the temp directory
using a pragma. Hopefully this will get fixed in the next version.

Kind Regards, Lukas

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction issues with WinRT build

2012-08-21 Thread Lukas
Joe Mistachkin  writes:

> Setting the sqlite3_temp_directory to the value contained in the
> "Windows.Storage.ApplicationData.Current.TemporaryFolder.Path" property
> should clear the issue.  This can be done immediately after opening the
> connection using PRAGMA temp_store_directory command on the newly opened
> database connection.

Hi Joe, thank you, that did the trick!

Will this be fixed in the next version? I think for the WinRT build, it should
default to that directory, because it cannot access any other temp directory
anyway.

The documentation for the temp pragma says "Do not use this pragma" :)

Kind Regards
Lukas

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction issues with WinRT build

2012-08-20 Thread Simon Slavin

On 19 Aug 2012, at 6:31pm, lukas  wrote:

> Simon Slavin  writes:
> 
>> Using both versions can you tell us the results of
>> 
>> SELECT sqlite_version(),sqlite_source_id()
> 
> Results are exactly the same for both dlls:
> 
> 3.7.13
> 2012-06-11 02:05:22 f5b5a13f7394dc143aa136f1d4faba6839eaa6dc

That's interesting and useful for use to know.

> The inserts are also in a transaction block. Basically it looks like this:
> 
> begin transaction
> insert ...
> insert...
> update... <- fail
> 
> So the transaction is already started and the db and journal files are opened.
> But it also happens on the simple case:
> 
> begin transaction
> update... <- fail
> 
> It does not make a difference if I use WAL or not; I already tried disabling 
> that.

Is there a chance that you simply have an UPDATE command with a bad syntax ?  
You say it works fine on one implementation of 3.7.13 but not the other ?  I'm 
mystified by what you're seeing but I hope someone else can help.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction issues with WinRT build

2012-08-20 Thread lukas
Simon Slavin  writes:

> Using both versions can you tell us the results of
> 
> SELECT sqlite_version(),sqlite_source_id()

Results are exactly the same for both dlls:

3.7.13
2012-06-11 02:05:22 f5b5a13f7394dc143aa136f1d4faba6839eaa6dc


The inserts are also in a transaction block. Basically it looks like this:

begin transaction
insert ...
insert...
update... <- fail

So the transaction is already started and the db and journal files are opened.
But it also happens on the simple case:

begin transaction
update... <- fail


It does not make a difference if I use WAL or not; I already tried disabling that.

Kind Regards
Lukas

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction issues with WinRT build

2012-08-19 Thread Joe Mistachkin

Lukas Fellechner wrote:
> 
> I tried using the WinRT builds of sqlite. But I find there are problems 
> when using transactions. While my table creation and insert statements work 
> fine, all transacted "update" statements are failing with result 
> "CannotOpen", error message is "unable to open database file". This even 
> happens when the connection is already open and I have already inserted 
> some lines into the DB. So the file is definitely opened and write locked 
> for SQLite. 
> 

Setting the sqlite3_temp_directory to the value contained in the
"Windows.Storage.ApplicationData.Current.TemporaryFolder.Path" property
should clear the issue.  This can be done immediately after opening the
connection using PRAGMA temp_store_directory command on the newly opened
database connection.
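
For reference, a minimal C sketch of that workaround (the temp-directory string
is a placeholder; a real app would obtain
ApplicationData.Current.TemporaryFolder.Path from the WinRT API and convert it
to UTF-8 before building the pragma):

#include "sqlite3.h"

/* Open the database and immediately point SQLite's temporary files at a
** directory the sandboxed app is allowed to write to. */
int open_with_temp_dir(const char *zDb, const char *zTempDir, sqlite3 **ppDb){
  int rc = sqlite3_open(zDb, ppDb);
  if( rc!=SQLITE_OK ) return rc;
  char zSql[512];
  sqlite3_snprintf((int)sizeof(zSql), zSql,
                   "PRAGMA temp_store_directory = %Q;", zTempDir);
  return sqlite3_exec(*ppDb, zSql, 0, 0, 0);  /* run before any other statement */
}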

--
Joe Mistachkin

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction issues with WinRT build

2012-08-19 Thread Simon Slavin

On 19 Aug 2012, at 3:38pm, "Lukas Fellechner"  wrote:

> I tried using the WinRT builds of sqlite. But I find there are problems 
> when using transactions. While my table creation and insert statements work 
> fine, all transacted "update" statements are failing with result 
> "CannotOpen", error message is "unable to open database file". This even 
> happens when the connection is already open and I have already inserted 
> some lines into the DB. So the file is definitely opened and write locked 
> for SQLite. 
> 
> If I replace the sqlite3.dll with the "regular" build, then I do not have 
> any issues. And if I remove the "begin transaction", then also everything 
> works fine.

Using both versions can you tell us the results of

SELECT sqlite_version(),sqlite_source_id()

Do you define transactions for your INSERT commands too ?  I can think of some 
reasons to get the error for all transactions.  This might be because although 
your application can access the database file without errors, it has trouble 
with the journal file in the same folder and the 'cannotopen' error is about 
the journal file.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Transaction issues with WinRT build

2012-08-19 Thread Lukas Fellechner
Hello,

I tried using the WinRT builds of sqlite. But I find there are problems 
when using transactions. While my table creation and insert statements work 
fine, all transacted "update" statements are failing with result 
"CannotOpen", error message is "unable to open database file". This even 
happens when the connection is already open and I have already inserted 
some lines into the DB. So the file is definitely opened and write locked 
for SQLite. 

If I replace the sqlite3.dll with the "regular" build, then I do not have 
any issues. And if I remove the "begin transaction", then also everything 
works fine.

This happens with the standalone WinRT dll downloads, and also when using 
the Visual Studio Extension (VSIX WinRT package) as reference. I also tried 
to use sqliteadmin with the WinRT sqlite3.dll. It fails to open any 
database with something about "closed data set".

Has anyone tried to use the WinRT version on Windows 8 RC?

Kind Regards
Lukas
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] transaction when storing large and many files

2012-02-27 Thread Christoph P.U. Kukulies


I have a .NET C# application that is using System.Data.SQLite.
The application goes through a filesystem, extracts file names,
builds an md5sum for every file,
and builds up a database. The reason behind this is to reduce the number of
duplicates.

The files, as a tar file, sum up to 66 GB. It's about a million files. I
learnt that it is a good idea to do it all
in one transaction in SQLite, but I wonder if it is still a good idea
when inserting many files as BLOBs (I decided to put everything into the
database).

My first attempt ran overnight, and when I came to the machine the next
morning, the SQLite database file had grown
to (only) 3.4GB and an alert box had popped up saying something like
"bad library use" of System.Data.SQLite.

Would it possibly be better to commit and close the transaction after
every blob update?

Whether or not I'm inserting a file into the list of resources depends
on the existence of a unique
ref_id which I build from the name_md5sum_size of the file. So, when two
files have the same name, same size
and the same md5sum, I decide they are identical.

When I hit a UNIQUE violation (try ExecuteNonQuery()) I decide not to
UPDATE the record with the BLOB.
If it's a first time entry, I decide to UPDATE the BLOB.

Below is the code. Especially the transaction portion might be worth 
considering.

Thank you.

--
Christoph

using System;
using System.Data;
using System.Data.Common;

using System.Security.Cryptography;
using System.Text;
using System.Windows.Forms;

using System.IO;
using System.Data.SQLite;



namespace sqliteForm
{
 ///
 /// Summary description for Form1
 ///
 /// public class Form1 : System.Windows.Forms.Form
 public partial class Form1 : System.Windows.Forms.Form
 {
 internal System.Windows.Forms.Button btnSearch;
 internal System.Windows.Forms.TextBox txtFile;
 internal System.Windows.Forms.Label lblFile;
 internal System.Windows.Forms.Label lblDirectory;
 internal System.Windows.Forms.ListBox lstFilesFound;
 internal System.Windows.Forms.ComboBox cboDirectory;

 private int totalcount;
 private SQLiteTransaction dbTrans;
 private SQLiteCommand   resource_cmd;
 private SQLiteCommand res_data_cmd;

 private SQLiteCommand   cdcmd;
 private SQLiteCommand   delcmd;

 private SQLiteParameter size;
 private SQLiteParameter creation_date;
 private SQLiteParameter name;
 private SQLiteParameter data;

 private SQLiteParameter md5sum_res;
 private SQLiteParameter md5sum_cd;

 private SQLiteParameter size_cd;
 private SQLiteParameter tag;
 private SQLiteParameter suite;
 private SQLiteParameter ref_id;
 private SQLiteParameter ref_id_cd;



 private SQLiteParameter prob_ref;
 private SQLiteParameter mandant;
 private SQLiteParameter basename;



 public Form1()
 {
 //
 // Required for Windows Form Designer support
 //
 InitializeComponent();

 //
 // TODO: Add any constructor code after InitializeComponent call.
 //
 }

 ///
 /// Clean up any resources being used.
 ///
 protected override void Dispose( bool disposing )
 {
 if( disposing )
 {
 if (components != null)
 {
  components.Dispose();
 }
 }
 base.Dispose( disposing );
 }

 #region Windows Form Designer generated code
 ///
 /// Required method for Designer support: do not modify
 /// the contents of this method with the code editor.
 ///
 private void InitializeComponent()
 {
this.btnSearch = new System.Windows.Forms.Button();
this.txtFile = new System.Windows.Forms.TextBox();
this.lblFile = new System.Windows.Forms.Label();
this.lblDirectory = new System.Windows.Forms.Label();
this.lstFilesFound = new System.Windows.Forms.ListBox();
this.cboDirectory = new System.Windows.Forms.ComboBox();
this.statusStrip1 = new System.Windows.Forms.StatusStrip();
this.toolStripStatusLabel1 = new
System.Windows.Forms.ToolStripStatusLabel();
this.button1 = new System.Windows.Forms.Button();
this.button2 = new System.Windows.Forms.Button();
this.textBox1 = new System.Windows.Forms.TextBox();
this.label1 = new System.Windows.Forms.Label();
this.statusStrip1.SuspendLayout();
this.SuspendLayout();
//
// btnSearch
//
this.btnSearch.Location = new System.Drawing.Point(725, 266);
this.btnSearch.Name = "btnSearch";
this.btnSearch.Size = new System.Drawing.Size(75, 23);
this.btnSearch.TabIndex = 0;
this.btnSearch.Text = "Search";
this.btnSearch.Click += new
System.EventHandler(this.btnSearch_Click);
//
// txtFile
//
this.txtFile.Location = new System.Drawing.Point(380, 432);
this.txtFile.Name = "txtFile";
this.txtFile.Size = new System.Drawing.Size(120, 20);
this.txtFile.TabIndex = 4;
this.txtFile.Text = "*.*";
//
// lblFile
//

Re: [sqlite] Transaction and SQLITE_CANTOPEN

2011-07-14 Thread Jay A. Kreibich
On Wed, Jul 13, 2011 at 03:52:55PM +0200, Romain Lahore-Carraté scratched on 
the wall:
> Hi,
> 
> I'm experiencing a weird issue and I don't know how to solve it.
> First of all, I work on a multi platform middleware written in C language
> that uses SQLite 3.7.5.
> when I'm doing some tests on Win32, everything is OK, but now I have to use
> this middleware on Android (compiled with android NDK 5).
 
> I suppose this shouldn't be a problem at all, since it works on Win32, but
> some sqlite3_step calls return SQLITE_CANTOPEN.
> 
> I really don't know what to do now. The only thing I can do is to remove
> transactions. Without transactions I have the same behaviour on Win32 and
> Android.  Am I missing something?

  The issue is likely that SQLite can't open a required journal file,
  locking file, or some other temporary file associated with the
  transaction.  Make sure your application has write/create permissions
  to the directory with the database file.  If you cannot provide that,
  put the database in a subdirectory and provide the permissions on the
  subdirectory.
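
  A quick way to sanity-check that from application code is sketched below
  (a hypothetical helper, not part of the SQLite API): the rollback journal
  is created in the same directory as the database file, so that directory
  must be writable and searchable by the process.

#include <libgen.h>
#include <string.h>
#include <unistd.h>

/* Returns 1 if journal files can plausibly be created next to the db. */
static int db_dir_is_writable(const char *zDbPath){
  char zCopy[4096];
  strncpy(zCopy, zDbPath, sizeof(zCopy)-1);
  zCopy[sizeof(zCopy)-1] = '\0';
  const char *zDir = dirname(zCopy);      /* directory holding the database */
  return access(zDir, W_OK | X_OK)==0;
}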

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Intelligence is like underwear: it is important that you have it,
 but showing it to the wrong people has the tendency to make them
 feel uncomfortable." -- Angela Johnson
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Transaction and SQLITE_CANTOPEN

2011-07-14 Thread Romain Lahore-Carraté
Hi,

I'm experiencing a weird issue and I don't know how to solve it.
First of all, I work on a multi platform middleware written in C language
that uses SQLite 3.7.5.
when I'm doing some tests on Win32, everything is OK, but now I have to use
this middleware on Android (compiled with android NDK 5).

Here is the problem:
The database file is located on an SD card. I use SQL transactions to minimize
the number of read/write operations.
The problem is that I have to ADD/UPDATE/DELETE some rows during the
transaction.
I suppose this shouldn't be a problem at all, since it works on Win32, but
some sqlite3_step calls return SQLITE_CANTOPEN.

I really don't know what to do now. The only thing I can do is to remove
transactions. Without transactions I have the same behaviour on Win32 and
Android.
Am I missing something?

Best regards,
Romain
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction triggers?

2011-05-12 Thread Nico Williams
I believe I've got solutions to the various little problems I've run
into.  My experiments have helped me shed some light on what the
semantics of DB triggers should be, to the point where I think I've
reached stable conclusions about those semantics.  I'm also ready to
characterize performance and memory footprint impacts, and to describe
how DB triggers will work.  And I'm ready to solicit opinions and
advice regarding design and implementation as I want to contribute
this code to SQLite3, and I want to make sure to have code that is as
clean as possible -- any feedback will be greatly appreciated.

(Incidentally, I'm doing this work to enable other work in an existing
open source project.)

DB trigger semantics:

 - BEGIN triggers are AFTER BEGIN triggers, and they fire upon (and
before) the first INSERT/UPDATE/DELETE statement in a transaction, on
every INSERT/UPDATE/DELETE executed in auto-commit mode, or a BEGIN
IMMEDIATE, but they fire once and only once per-transaction.  BEGIN
triggers may throw errors, in which case the triggering statement will
fail with no way to override this (i.e. a triggering INSERT OR IGNORE
will not cause a BEGIN trigger exception to be ignored).

 - COMMIT triggers should (and if I can help it, will) come in two
varieties: BEFORE COMMIT and AFTER COMMIT.

   BEFORE COMMIT triggers can cause a commit to fail by RAISE()ing
exceptions, and therefore can run many times, but only once
per-commit, except that in auto-commit mode the triggering statement's
changes will be rolled back.

   AFTER COMMIT triggers cannot cause a commit to fail, and they
execute in their own transaction.  I will only code these if I can
easily make it so that the commit does its work without dropping the
lock so that the AFTER COMMIT trigger can run immediately.  Exceptions
thrown by AFTER COMMIT triggers will only result in their transaction
being rolled back with no effect on the prior transaction, and the
COMMIT statement will not return and error in such cases.

 - ROLLBACK triggers will only come in AFTER ROLLBACK flavor, and will
be very similar to AFTER COMMIT triggers, running after the triggering
event completes, and not being allowed to return errors.

No change to the CONNECT and DISCONNECT trigger semantics I described
earlier, except that I now know how to ensure that DISCONNECT triggers
fire no more than once per-connection.

I believe the above semantics are retrospectively obvious and
reliable, whereas my earlier proposed semantics were unclear and
clearly not finished.

How to use DB triggers:

 - CONNECT triggers are useful for: auditing purposes (of limited
value, of course, since there's no guarantee that a client will run
these triggers), and setup (e.g., load_extension()).  (A SQL function
roughly corresponding to the internal execExecSQL() function might be
useful for automatically setting up TEMP schema.)

 - AFTER BEGIN triggers could be useful for: logging, auditing, and
for demarcating the start of a write transaction.  For example, one
might ensure that there's a single row in a table named "TX", with a
unique (autoincremented) transaction ID to brand all rows of other
tables with, then one might use this to implement DB
synchronization/replication.

 - BEFORE COMMIT triggers should be used primarily for deferred
constraint checking, particularly of constraints that cannot easily be
expressed with existing, natively supported constraints.  If anyone is
interested I can give some elaborate examples of such constraints
based on extensive personal experience with an object-oriented DB back
in the late 90s (that DB is now open source too).

 - AFTER COMMIT triggers could be used for logging, auditing, and to
clean up after AFTER BEGIN triggers (though AFTER BEGIN triggers can
always clean up after the preceding AFTER BEGIN trigger firings, which
makes me think that AFTER COMMIT triggers add little value).

 - AFTER ROLLBACK triggers have very little utility in my view, mostly logging.

 - DISCONNECT triggers also have very little utility IMO, mostly
logging and auditing.

To make this all feasible I'm having to add a few opcodes: to check
whether a begin (or connect) trigger has run for the current
transaction, to mark the begin trigger as having run, an opcode to set
an address to jump to when a rollback exception is thrown and the
rollback completed, an opcode to mark the end of a DB trigger firing
(for housekeeping, otherwise COMMIT triggers cause the commit to fail
due to the OP_AutoCommit instruction thinking there are still active
statements writing in the same transaction), and maybe a couple of
others.

All DB trigger firings, excepting DISCONNECT, will be coded into
compiled statements other than SELECT.  The reason is that any one
prepared statement could be the one to trigger any CONNECT, BEGIN,
COMMIT (think auto-commit mode) and/or ROLLBACK triggers -- it all
depends on the context in which the statement in question is stepped.

PRAGMA statements may get CONNECT 

Re: [sqlite] Transaction triggers?

2011-05-10 Thread Nico Williams
FWIW, I'm making progress.  I've got BEGIN and COMMIT triggers firing,
but there's issues related to auto-commit which imply that I need to
be able to skip -at runtime- the trigger firing that I code with each
OP_Transaction and OP_AutoCommit operation, which I think means I need
a new op, but we'll see.

Also, I've decided to change the syntax so it's AFTER BEGIN and BEFORE COMMIT.

AFTER BEGIN triggers will only fire for RESERVED and IMMEDIATE
transactions, and on the first INSERT/UPDATE/DELETE in a RESERVED
transaction.

BEFORE COMMIT triggers can fire more than once: because the commit
might fail due to, e.g., deferred constraints, and even because the
triggers themselves might RAISE(...).  Speaking of which,
RAISE(IGNORE) in these triggers should be ignored and the BEGIN/COMMIT
proceeds as usual -- I've not tested that yet, but I suspect that one
of RAISE(ABORT/FAIL) or RAISE(IGNORE) in such triggers will not work
as expected without additional code.

I'll also code up BEFORE ROLLBACK trigger firing, but only for
synchronous rollbacks (i.e., when the app executes a ROLLBACK
statement).  I don't think ROLLBACK triggers could be of any use
unless there are functions and/or virtual tables with side-effects
(think logging).

Database connect/disconnect triggers will not fire by default; a new
pragma will be needed to enable them.  Connect triggers will fire when
that pragma is enabled (or if compiled as enabled, then when the DB
handle is opened).  Disconnect triggers will fire immediately upon
entry to sqlite3_close(), and if sqlite3_close fails (e.g., due to
SQLITE_BUSY), their firing may seem spurious, just as with BEFORE
COMMIT triggers.

I've no need for AFTER ROLLBACK/COMMIT triggers (besides, those seem daunting).

My patches so far are small, believe it or not.  (But I spent a lot of
time figuring out how to create the crutch views automatically, and a
lot of time figuring out how to fire the triggers.)  Things work, with
some quirks and some missing functionality, so it's not ready yet.

Nico
--
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction speed too slow?

2011-05-09 Thread Nick
Thanks for the suggestion and link. It seems that noatime has already been set 
for both partition types on my test setup. I turned atime back on and re-ran 
the tests. 

The results are within error margins and therefore arguably the same as before.

I'd be interested in the results if others would run the script below.

Thanks in advance
Nick


On 9 May 2011, at 13:31, Black, Michael (IS) wrote:

> You do realize the number they quote is a MAX... not necessarily what you'll 
> get.
> 
> 
> 
> With 16 transactions per second you're writing one transaction every 63ms.  My 
> understanding is that the hard drive in the netbook is 15ms access time so 
> you're seeing 4 accesses per transaction with the way you're doing this test 
> (write caching off, full sync).
> 
> 
> 
> When you write a record you have to update last access times for example.
> 
> 
> 
> 
> 
> Might be interesting to turn off last access updates and see if it changes 
> your results much.
> 
> 
> 
> http://www.cyberciti.biz/faq/linux-noatime-ext3-ext4-fstab-configuration/
> 
> 
> 
> 
> 
> 
> 
> Michael D. Black
> 
> Senior Scientist
> 
> NG Information Systems
> 
> Advanced Analytics Directorate
> 
> 
> 
> 
> From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
> behalf of Nick [maill...@css-uk.net]
> Sent: Sunday, May 08, 2011 3:52 PM
> To: sqlite-users@sqlite.org
> Subject: EXT :[sqlite] Transaction speed too slow?
> 
> 
> From http://www.sqlite.org/faq.html#q19 it says "A transaction normally 
> requires two complete rotations of the disk platter, which on a 7200RPM disk 
> drive limits you to about 60 transactions per second."
> 
> Using Linux/Ubuntu 10.04 on an otherwise idle Atom powered Nettop with a 
> 5400RPM disk drive I was expecting circa 45 transactions per second. However 
> using the basic python script below I seem to be getting a lot slower 
> results, namely:
> 
> ext3 partition: 16 transactions per second
> jfs partition: 9 transactions per second
> :memory: 15798 transaction per second
> 
> Does anyone have an explanation for this?
> 
> The script used is below. I'd be grateful if people could confirm whether it 
> is just my hardware or a common result
> 
> FULL RESULTS:
> 
> nick@Haribo:~$ sudo hdparm -W 0 /dev/sda
> 
> /dev/sda:
> setting drive write-caching to 0 (off)
> write-caching =  0 (off)
> nick@Haribo:~$ python write-transactions-1.py
> Run Number: 1, Location: :memory:
> 0:00:00.000108
> 0:00:00.58
> 0:00:00.57
> 0:00:00.57
> 0:00:00.56
> 0:00:00.56
> 0:00:00.57
> 0:00:00.57
> 0:00:00.57
> 0:00:00.56
> Time Avg: 0.62
> Trans/sec Avg: 16155
> Run Number: 2, Location: write-transactions-1.db
> 0:00:00.099678
> 0:00:00.121630
> 0:00:00.110672
> 0:00:00.099599
> 0:00:00.110782
> 0:00:00.099542
> 0:00:00.121776
> 0:00:00.099599
> 0:00:00.121794
> 0:00:00.099624
> Time Avg: 0.108470
> Trans/sec Avg: 9
> 
> ---
> 
> nick@Haribo:~$ sudo hdparm -W 1 /dev/sda
> 
> /dev/sda:
> setting drive write-caching to 1 (on)
> write-caching =  1 (on)
> nick@Haribo:~$ python write-transactions-1.py
> Run Number: 1, Location: :memory:
> 0:00:00.000113
> 0:00:00.57
> 0:00:00.56
> 0:00:00.56
> 0:00:00.56
> 0:00:00.57
> 0:00:00.57
> 0:00:00.56
> 0:00:00.56
> 0:00:00.56
> Time Avg: 0.62
> Trans/sec Avg: 16129
> Run Number: 2, Location: write-transactions-1.db
> 0:00:00.001438
> 0:00:00.000898
> 0:00:00.000989
> 0:00:00.000984
> 0:00:00.000982
> 0:00:00.001945
> 0:00:00.001059
> 0:00:00.001169
> 0:00:00.000914
> 0:00:00.001063
> Time Avg: 0.001144
> Trans/sec Avg: 874
> 
> 
> --
> 
> SCRIPT
> 
> # Test Benchmark for Transactions speed per second using  built in SQLite
> import sqlite3
> import datetime
> 
> contype = [":memory:", "write-transactions-1.db"]
> 
> for runidx, runtype in enumerate(contype):
> 
># Heading
>print "Run Number: %d, Location: %s" % (runidx + 1, runtype)
> 
>con = sqlite3.connect(runtype)
>con.isolation_level = None
>con.execute("PRAGMA synchronous = FULL")
>times = []
> 
># Create the table
>con.execute("drop table if exists person")
>con.execute("create table person(firstname, lastname)")
> 
># Measure 10 points
>for run in range(10):
> 
># BEGIN transaction
>con.execute

Re: [sqlite] Transaction triggers?

2011-05-09 Thread Nico Williams
I've researched firing of these triggers somewhat.  Upshot: firing
anything other than a transaction commit trigger is tricky.

The semantics of transaction triggers are interesting.  Presumably
such triggers should execute in the same transaction as the one they
are related to -- but this would mean that rollback triggers can't
have any side-effects in the database, which begs the question as to
what their utility might be.  Upshot: I'm going to ignore rollback
triggers for now.  And what happens if a rollback occurs after or
while running commit triggers?  IMO: forget about rollback triggers,
and if a commit trigger's work is undone in a rollback, that's fine.

The semantics of database connect triggers should be simple, but they
aren't: what if the trigger can't execute because of SQLITE_BUSY?
Tricky.  Loop?  On the other hand, if these triggers only execute when
the first write transaction on that db handle begins, then that issue
goes away.  That seems much better.

The semantics of database disconnect triggers is also tricky, mostly
because of how sqlite3_close() works (it can fail with SQLITE_BUSY
part-way through). Scenario 1: a disconnect trigger fires but the db
connection stays alive, in which case we may have to re-fire the
connect trigger again later.  Scenario 2: part of the connection may
be torn down when the disconnect triggers fire (e.g., virtual tables,
which may mean having to re-initialize their state, which means that
vtabs could see spurious connects and disconnects).

Transaction start triggers are tricky because the act of firing them
causes a write transaction to be started, so we can't fire them in a
"BEGIN DEFERRED", but then we need to defer firing the trigger, which
means we need to track whether it has fired, and this needs to be done
such that we can decide whether to code the transaction start trigger
while parsing.

Transaction rollback triggers are even trickier because they can be
completely async.  Coding one one of these such that it gets executed
when the transaction is rolled back and not otherwise is tricky (and
then one has to worry about the trigger itself doing a rollback).  I'd
be tempted to re-enter the VM with sqlite3_prepare/step/finalize in
order to fire this trigger, but I worry that that may break various
assumptions.  And then there's the question: should a rollback trigger
fire inside the transaction that's rolling back?  If so the only side
effects that such a trigger could have would be to invoke user-defined
functions with external side-effects.  If not...  Ah, maybe I could
code these trigger firings as alternatives to OP_Halt, terminated by
OP_Halt, and with their own OP_Transaction so they can execute in
their own transaction.

Upshot: I'm going to focus on transaction commit for now, then
transaction start, and I may stop there.

Nico
--
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction speed too slow?

2011-05-09 Thread Black, Michael (IS)
You do realize the number they quote is a MAX... not necessarily what you'll 
get.



With 16 transactions per second you're writing one transaction every 63ms.  My 
understanding is that the hard drive in the netbook is 15ms access time so 
you're seeing 4 accesses per transaction with the way you're doing this test 
(write caching off, full sync).



When you write a record you have to update last access times for example.





Might be interesting to turn off last access updates and see if it changes your 
results much.



http://www.cyberciti.biz/faq/linux-noatime-ext3-ext4-fstab-configuration/







Michael D. Black

Senior Scientist

NG Information Systems

Advanced Analytics Directorate




From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Nick [maill...@css-uk.net]
Sent: Sunday, May 08, 2011 3:52 PM
To: sqlite-users@sqlite.org
Subject: EXT :[sqlite] Transaction speed too slow?


>From http://www.sqlite.org/faq.html#q19 it says "A transaction normally 
>requires two complete rotations of the disk platter, which on a 7200RPM disk 
>drive limits you to about 60 transactions per second."

Using Linux/Ubuntu 10.04 on an otherwise idle Atom powered Nettop with a 
5400RPM disk drive I was expecting circa 45 transactions per second. However 
using the basic python script below I seem to be getting a lot slower results, 
namely:

ext3 partition: 16 transactions per second
jfs partition: 9 transactions per second
:memory: 15798 transaction per second

Does anyone have an explanation for this?

The script used is below. I'd be grateful if people could confirm whether it is 
just my hardware or a common result

FULL RESULTS:

nick@Haribo:~$ sudo hdparm -W 0 /dev/sda

/dev/sda:
 setting drive write-caching to 0 (off)
 write-caching =  0 (off)
nick@Haribo:~$ python write-transactions-1.py
Run Number: 1, Location: :memory:
0:00:00.000108
0:00:00.58
0:00:00.57
0:00:00.57
0:00:00.56
0:00:00.56
0:00:00.57
0:00:00.57
0:00:00.57
0:00:00.56
Time Avg: 0.62
Trans/sec Avg: 16155
Run Number: 2, Location: write-transactions-1.db
0:00:00.099678
0:00:00.121630
0:00:00.110672
0:00:00.099599
0:00:00.110782
0:00:00.099542
0:00:00.121776
0:00:00.099599
0:00:00.121794
0:00:00.099624
Time Avg: 0.108470
Trans/sec Avg: 9

---

nick@Haribo:~$ sudo hdparm -W 1 /dev/sda

/dev/sda:
 setting drive write-caching to 1 (on)
 write-caching =  1 (on)
nick@Haribo:~$ python write-transactions-1.py
Run Number: 1, Location: :memory:
0:00:00.000113
0:00:00.57
0:00:00.56
0:00:00.56
0:00:00.56
0:00:00.57
0:00:00.57
0:00:00.56
0:00:00.56
0:00:00.56
Time Avg: 0.62
Trans/sec Avg: 16129
Run Number: 2, Location: write-transactions-1.db
0:00:00.001438
0:00:00.000898
0:00:00.000989
0:00:00.000984
0:00:00.000982
0:00:00.001945
0:00:00.001059
0:00:00.001169
0:00:00.000914
0:00:00.001063
Time Avg: 0.001144
Trans/sec Avg: 874


--

SCRIPT

# Test Benchmark for Transactions speed per second using  built in SQLite
import sqlite3
import datetime

contype = [":memory:", "write-transactions-1.db"]

for runidx, runtype in enumerate(contype):

# Heading
print "Run Number: %d, Location: %s" % (runidx + 1, runtype)

con = sqlite3.connect(runtype)
con.isolation_level = None
con.execute("PRAGMA synchronous = FULL")
times = []

# Create the table
con.execute("drop table if exists person")
con.execute("create table person(firstname, lastname)")

# Measure 10 points
for run in range(10):

# BEGIN transaction
con.execute("begin")

# Fill the table
con.execute("insert into person(firstname, lastname) values (?, ?)", 
("alex","bob"))

# Start the clock
begin = datetime.datetime.now()

# COMMIT transaction
con.execute("commit")

# Stop the clock
end = datetime.datetime.now()

# Record the time
times.append(end - begin)

# Measure 10 points
for run in range(10):

# Display the time results
print times[run]

# Print Average
import operator
td = reduce(operator.add, times)
totalresultsec = (float(td.microseconds) + (td.seconds + td.days * 24 * 
3600) * 10**6) / 10**6
avgsec = totalresultsec / 10.0
print "Time Avg: %f" % avgsec
print "Trans/sec Avg: %0.0f" % (1.0/avgsec)
con.close()

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction triggers?

2011-05-08 Thread Nico Williams
The reason to delay connection trigger firing to the first non-pragma
statement would be to allow one to enable or disable db triggers.  DB
triggers should also be disabled by default, and should have a separate set
of pragmas to enable or disable them.

The main utility of connect triggers is to load extensions needed by, e.g.,
views and triggers, to create temp tables and triggers, and so on.  That's
too powerful to permit in untrusted DBs, which is why connection triggers
must be disabled by default.  The shell could warn of their presence too...

Regarding savepoint and rollback to a savepoint, I'll have to think about
that some more.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction triggers?

2011-05-08 Thread Nico Williams
As for attached DBs, each DB gets its own db triggers.  DB connect trigger
firing should be about the same (first non-pragma statement affecting the
attached db).
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction triggers?

2011-05-08 Thread Nico Williams
On May 8, 2011 4:14 PM, "Roger Binns"  wrote:

> C) What about SAVEPOINT?

Sounds useful...  I should add that too.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction triggers?

2011-05-08 Thread Nico Williams
Indeed, I have been thinking about when database connect fires.  My current
thought is: on the first non-pragma statement executed (not prepared), not
at db open time. I only care about commit, really, but if I can I'll do the
others too.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction triggers?

2011-05-08 Thread Roger Binns
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 05/08/2011 01:46 PM, Simon Slavin wrote:
> Just asking to warn you it's tricky ...
> 
> A) When do you consider that a transaction starts ?
> B) How do you deal with ATTACHed databases ?

C) What about SAVEPOINT?

Roger
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.11 (GNU/Linux)

iEYEARECAAYFAk3HB8YACgkQmOOfHg372QQtdQCg2lP7iP6DaznL8W6oCtMDOyKW
IkwAoOLhnFscg4RMCKpebduLUdN885vv
=T42x
-END PGP SIGNATURE-
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Transaction speed too slow?

2011-05-08 Thread Nick

>From http://www.sqlite.org/faq.html#q19 it says "A transaction normally 
>requires two complete rotations of the disk platter, which on a 7200RPM disk 
>drive limits you to about 60 transactions per second." 

Using Linux/Ubuntu 10.04 on an otherwise idle Atom powered Nettop with a 
5400RPM disk drive I was expecting circa 45 transactions per second. However 
using the basic python script below I seem to be getting a lot slower results, 
namely:

ext3 partition: 16 transactions per second
jfs partition: 9 transactions per second
:memory: 15798 transaction per second

Does anyone have an explanation for this?

The script used is below. I'd be grateful if people could confirm whether it is 
just my hardware or a common result

FULL RESULTS:

nick@Haribo:~$ sudo hdparm -W 0 /dev/sda

/dev/sda:
 setting drive write-caching to 0 (off)
 write-caching =  0 (off)
nick@Haribo:~$ python write-transactions-1.py 
Run Number: 1, Location: :memory:
0:00:00.000108
0:00:00.58
0:00:00.57
0:00:00.57
0:00:00.56
0:00:00.56
0:00:00.57
0:00:00.57
0:00:00.57
0:00:00.56
Time Avg: 0.62
Trans/sec Avg: 16155
Run Number: 2, Location: write-transactions-1.db
0:00:00.099678
0:00:00.121630
0:00:00.110672
0:00:00.099599
0:00:00.110782
0:00:00.099542
0:00:00.121776
0:00:00.099599
0:00:00.121794
0:00:00.099624
Time Avg: 0.108470
Trans/sec Avg: 9

---

nick@Haribo:~$ sudo hdparm -W 1 /dev/sda

/dev/sda:
 setting drive write-caching to 1 (on)
 write-caching =  1 (on)
nick@Haribo:~$ python write-transactions-1.py 
Run Number: 1, Location: :memory:
0:00:00.000113
0:00:00.57
0:00:00.56
0:00:00.56
0:00:00.56
0:00:00.57
0:00:00.57
0:00:00.56
0:00:00.56
0:00:00.56
Time Avg: 0.62
Trans/sec Avg: 16129
Run Number: 2, Location: write-transactions-1.db
0:00:00.001438
0:00:00.000898
0:00:00.000989
0:00:00.000984
0:00:00.000982
0:00:00.001945
0:00:00.001059
0:00:00.001169
0:00:00.000914
0:00:00.001063
Time Avg: 0.001144
Trans/sec Avg: 874


--

SCRIPT

# Test Benchmark for Transactions speed per second using  built in SQLite
import sqlite3
import datetime

contype = [":memory:", "write-transactions-1.db"]

for runidx, runtype in enumerate(contype):

# Heading
print "Run Number: %d, Location: %s" % (runidx + 1, runtype)

con = sqlite3.connect(runtype)
con.isolation_level = None
con.execute("PRAGMA synchronous = FULL")
times = []

# Create the table
con.execute("drop table if exists person")
con.execute("create table person(firstname, lastname)")

# Measure 10 points
for run in range(10):

# BEGIN transaction
con.execute("begin")

# Fill the table
con.execute("insert into person(firstname, lastname) values (?, ?)", 
("alex","bob"))

# Start the clock
begin = datetime.datetime.now()

# COMMIT transaction
con.execute("commit")

# Stop the clock
end = datetime.datetime.now()

# Record the time
times.append(end - begin)

# Measure 10 points
for run in range(10):

# Display the time results
print times[run]

# Print Average
import operator
td = reduce(operator.add, times)
totalresultsec = (float(td.microseconds) + (td.seconds + td.days * 24 * 
3600) * 10**6) / 10**6
avgsec = totalresultsec / 10.0
print "Time Avg: %f" % avgsec
print "Trans/sec Avg: %0.0f" % (1.0/avgsec)
con.close()

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction triggers?

2011-05-08 Thread Simon Slavin

On 8 May 2011, at 9:42pm, Nico Williams wrote:

> CREATE TRIGGER <name> ON <event> BEGIN ... END;
> 
> where <event> is one of DATABASE CONNECT, DATABASE DISCONNECT,
> TRANSACTION START, TRANSACTION COMMIT, and TRANSACTION ROLLBACK.

Just asking to warn you it's tricky ...

A) When do you consider that a transaction starts ?
B) How do you deal with ATTACHed databases ?

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction triggers?

2011-05-08 Thread Nico Williams
FWIW, I have parsing, automatic creation of the crutch views, and
creation of the DB triggers working.  Next up: firing of DB triggers.
The changes so far are fairly trivial, adding very few branches, which
means that writing tests for them should be relatively simple too.
That said, it's taken quite a few hours to figure it all out, and I'm
sure firing the triggers too will be hard to get right but simple
looking in the end.

Hopefully I can have something worth submitting to the SQLite3 dev
community soon, and hopefully there will be interest.  If not, I'll
see if I can get the open source project I'm doing this for to accept
keeping patches to SQLite3.

sqlite> select * from sqlite_master;
sqlite> create trigger main.test2 on database disconnect begin select raise(abort, 'Hey, this works!'); end;
sqlite> select * from sqlite_master;
view|sqlite_intview_disconnect|sqlite_intview_disconnect|0|CREATE VIEW sqlite_intview_disconnect AS SELECT 0 AS nothing
trigger|test2|sqlite_intview_disconnect|0|CREATE TRIGGER test2 on database disconnect begin select raise(abort, 'Hey, this works!'); end
sqlite> insert into sqlite_intview_disconnect values (1);
Error: Hey, this works!
sqlite>

The syntax I have a parser for is:

CREATE TRIGGER <name> ON <event> BEGIN ... END;

where <event> is one of DATABASE CONNECT, DATABASE DISCONNECT,
TRANSACTION START, TRANSACTION COMMIT, and TRANSACTION ROLLBACK.

Cheers,

Nico
--
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction triggers?

2011-05-06 Thread Nico Williams
It took just a few minutes to start on that approach.  I have the
necessary parse.y changes done and know what to do for the rest.  Time
to subscribe to sqlite-dev...
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction triggers?

2011-05-06 Thread Nico Williams
BTW, I might implement and contribute something like this.  I'm free
to contribute any changes to SQLite3 that I make.  I'd need some
guidance though.  And what I cannot contribute is the high degree of
testing that SQLite3 is known for.

The parsing aspect of database triggers is simple enough.  But there's
a lot of little details.

The simplest way to avoid most of those details would be to pretend
(but not document!) that database triggers are like INSTEAD OF INSERT
triggers on a view named sqlite_, so that the
existing code infrastructure for ROW triggers can be used without
modification.  Then at the various events act as though an insert has
been done into that view.  This approach would also simplify much of
the testing (since existing testing would already be reaching every
branch in the ROW trigger code).

Would such a contribution be welcomed?

Nico
--
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction triggers?

2011-05-06 Thread Nico Williams
On Fri, May 6, 2011 at 4:30 PM, Simon Slavin  wrote:
> On 6 May 2011, at 10:14pm, Nico Williams wrote:
>> Here's what I need:
>>
>> - at transaction time I need a way to record somewhere that the
>> transaction did start.  This would mostly be an insert into a table
>> with an INTEGER PRIMARY KEY AUTOINCREMENT column, and a time of
>> transaction start.  I can do without this by simply doing an insert
>> into that table if the relevant row didn't already exist.
>
> You can do an 'INSERT OR IGNORE'.

And INSERT INTO ... SELECT ... WHERE .  I know this -- it's what I mean above :)

>> - at transaction commit time I need to be able to RAISE() exceptions
>> if the concluding transaction does not meet certain requirements.
>> I.e., SELECT RAISE(ROLLBACK, 'Error: user did not x, y, or z') WHERE
>> .  I have no way to detect end of transaction
>> time, so I can't really do without this :(
>>
>> I'd also like to be able to do inserts/updates/deletes at transaction
>> commit time, as if the application's COMMIT has been a savepoint, but
>> I could live without this capability.
>
> You may be subverting the way SQL works.  The alternative kind of trigger to 
> ROW is not TRANSACTION, it's STATEMENT.  One can use several statements in a 
> TRANSACTION.

Well, D.R. Hipp at least at one point disagreed with you, since he
himself proposed something like this:

http://osdir.com/ml/db.sqlite.general/2003-04/msg00137.html

Moreover, other SQL RDBMes support transaction triggers.  See, for example:

http://en.wikipedia.org/wiki/Database_trigger

Finally, arguably triggers alone violate the SQL concept of a
declarative language.  I've shown here before how one can do quite a
bit of procedural programming using triggers in SQLite3.  So if it's
SQL purity you want, well, it's too late already :)  I grant you that
one might not want to further depart from the pure than one already
has, but that's a different argument than "you're leaving the pure
behind".

> I suspect your easiest way forward would be to implement that code inside 
> your application: instead of calling "BEGIN" and "COMMIT" make your own 
> routines for those two things.  Another way would be to take apart SQLite and 
> rewrite some of the code in ways that suit this one particular application.

One of the very nice things about SQLite3 is the availability of the
shell, and the fact that anyone can download SQLite3 and link it into
any application.

In the particular application I'm building I'd like to be able to
encode as much as possible of the business logic into the schema so
that I don't have to forbid direct access to the DB by any programs
other than those I write.  The reason for this is that I very much
foresee third parties wanting to write their own tools to manipulate
my application's database.  Moreover, I don't want to write all those
tools, as that's not what I'm getting paid to do -- enabling third
parties here is of great utility to me.  I believe that would be a
great feature for my application to have.  All I need is a way to run
some SELECT statements at COMMIT time, with those SELECTs possibly
RAISE()ing exceptions.  I could use more functionality too, but that's
the bare minimum I need.

I can detect transaction start, but not transaction end.  I need to
detect transaction end...

Nico
--
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction triggers?

2011-05-06 Thread Simon Slavin

On 6 May 2011, at 10:14pm, Nico Williams wrote:

> Here's what I need:
> 
> - at transaction time I need a way to record somewhere that the
> transaction did start.  This would mostly be an insert into a table
> with an INTEGER PRIMARY KEY AUTOINCREMENT column, and a time of
> transaction start.  I can do without this by simply doing an insert
> into that table if the relevant row didn't already exist.

You can do an 'INSERT OR IGNORE'.

> - at transaction commit time I need to be able to RAISE() exceptions
> if the concluding transaction does not meet certain requirements.
> I.e., SELECT RAISE(ROLLBACK, 'Error: user did not x, y, or z') WHERE
> .  I have no way to detect end of transaction
> time, so I can't really do without this :(
> 
> I'd also like to be able to do inserts/updates/deletes at transaction
> commit time, as if the application's COMMIT has been a savepoint, but
> I could live without this capability.

You may be subverting the way SQL works.  The alternative kind of trigger to 
ROW is not TRANSACTION, it's STATEMENT.  One can use several statements in a 
TRANSACTION.

I suspect your easiest way forward would be to implement that code inside your 
application: instead of calling "BEGIN" and "COMMIT" make your own routines for 
those two things.  Another way would be to take apart SQLite and rewrite some 
of the code in ways that suit this one particular application.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Transaction triggers?

2011-05-06 Thread Nico Williams
I really, really need transaction, not row, triggers.  There's been
discussion of those here in the past (IIRC D.R. Hipp had a proposal
once).

Here's what I need:

 - at transaction time I need a way to record somewhere that the
transaction did start.  This would mostly be an insert into a table
with an INTEGER PRIMARY KEY AUTOINCREMENT column, and a time of
transaction start.  I can do without this by simply doing an insert
into that table if the relevant row didn't already exist.

 - at transaction commit time I need to be able to RAISE() exceptions
if the concluding transaction does not meet certain requirements.
I.e., SELECT RAISE(ROLLBACK, 'Error: user did not x, y, or z') WHERE
.  I have no way to detect end of transaction
time, so I can't really do without this :(

I'd also like to be able to do inserts/updates/deletes at transaction
commit time, as if the application's COMMIT has been a savepoint, but
I could live without this capability.
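
For what it's worth, the existing C API gets part of the way there today:
sqlite3_commit_hook() runs just before a transaction commits, and a non-zero
return value converts the COMMIT into a ROLLBACK; sqlite3_rollback_hook()
fires when a transaction is rolled back. It is a per-connection hook rather
than something stored in the schema, and the callback must not modify the
database connection that invoked it, which is exactly why an in-schema COMMIT
trigger would be nicer. A minimal sketch (app_invariants_ok() is a
hypothetical application-side check):

#include "sqlite3.h"

extern int app_invariants_ok(void *pAppState);   /* hypothetical: 1 = ok */

/* Returning non-zero from the commit hook turns the COMMIT into a ROLLBACK. */
static int commit_cb(void *pAppState){
  return app_invariants_ok(pAppState) ? 0 : 1;
}

/* Called after a transaction is rolled back; useful for logging only. */
static void rollback_cb(void *pAppState){
  (void)pAppState;
}

void install_tx_hooks(sqlite3 *db, void *pAppState){
  sqlite3_commit_hook(db, commit_cb, pAppState);
  sqlite3_rollback_hook(db, rollback_cb, pAppState);
}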

Nico
--
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] "transaction" method of the Tcl bindings to SQLite

2011-03-24 Thread Dan Kennedy
On 03/24/2011 07:23 PM, Richard Hipp wrote:
> On Wed, Mar 23, 2011 at 7:05 AM, Jean-Marie CUAZ  wrote:
>
>> Hello,
>>
>> In a Tcl script, is it safe to call the Tcl "_return_" command inside
>> and before the end of a multi-statement "_transaction_" method (for
>> aborting the Tcl procedure in case of Sql/application error) ?
>>
>> In other words is the transaction handled at the SQLite level "closed"
>> safely (and the statements executed before the "return" invocation be
>> rolled back automatically) ?
>>
>
> Yes.

Clarification: The transaction will be closed correctly in all cases.
But the statements will only be rolled back if the script returns
TCL_ERROR (i.e. [return -code error]).

Any other error code (ok, break or continue) closes the transaction
but commits the statements.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] "transaction" method of the Tcl bindings to SQLite

2011-03-24 Thread Richard Hipp
On Wed, Mar 23, 2011 at 7:05 AM, Jean-Marie CUAZ  wrote:

> Hello,
>
> In a Tcl script, is it safe to call the Tcl "_return_" command inside
> and before the end of a multi-statement "_transaction_" method (for
> aborting the Tcl procedure in case of Sql/application error) ?
>
> In other words is the transaction handled at the SQLite level "closed"
> safely (and the statements executed before the "return" invocation be
> rolled back automatically) ?
>

Yes.


>
> Many thanks
>
> Jean-Marie
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] "transaction" method of the Tcl bindings to SQLite

2011-03-24 Thread Jean-Marie CUAZ
Hello,

In a Tcl script, is it safe to call the Tcl "_return_" command inside 
and before the end of a multi-statement "_transaction_" method (for 
aborting the Tcl procedure in case of Sql/application error) ?

In other words is the transaction handled at the SQLite level "closed" 
safely (and the statements executed before the "return" invocation be 
rolled back automatically) ?

Many thanks

Jean-Marie
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Transaction Size

2010-11-14 Thread David Peters
Hi there,

Could someone point me to the documentation that deals with optimizing
blocks of statements? I have a couple of databases of about 64 MB in size
and I want to understand how to optimize the processing of large numbers of
statements. For example, there are times when I need to execute almost 1
million statements (inserts and updates) in a single block. I am currently
executing blocks of 50,000 statements in a single transaction, but that value
may be too low.
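
As a point of reference, the usual batching pattern looks like the sketch
below (C, with error handling trimmed): one prepared statement reused for
every row, and a COMMIT/BEGIN pair every nBatch statements. The table and
column names are placeholders; the right batch size is something to measure,
since the dominant cost is the sync at each COMMIT rather than the statements
themselves.

#include "sqlite3.h"

int bulk_insert(sqlite3 *db, const char **azVals, int nVals, int nBatch){
  sqlite3_stmt *pStmt;
  int i, rc;
  rc = sqlite3_prepare_v2(db, "INSERT INTO t(val) VALUES(?1)", -1, &pStmt, 0);
  if( rc!=SQLITE_OK ) return rc;
  sqlite3_exec(db, "BEGIN", 0, 0, 0);
  for(i=0; i<nVals; i++){
    sqlite3_bind_text(pStmt, 1, azVals[i], -1, SQLITE_STATIC);
    sqlite3_step(pStmt);
    sqlite3_reset(pStmt);
    sqlite3_clear_bindings(pStmt);
    if( (i+1)%nBatch==0 ){
      sqlite3_exec(db, "COMMIT", 0, 0, 0);   /* one sync per batch */
      sqlite3_exec(db, "BEGIN", 0, 0, 0);
    }
  }
  sqlite3_exec(db, "COMMIT", 0, 0, 0);
  return sqlite3_finalize(pStmt);
}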

Thank you very much.

Regards,
David
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction isolation

2009-05-18 Thread Yang Zhang
Roger Binns wrote:
> Yang Zhang wrote:
>> Actually, this is only because Python 3 str is Python 2 unicode.  Python 
>> 2 (which I'm currently using, and which I believe most of the world is 
>> using) str is a physical string of bytes, not a logical/decoded 
>> character string.  Python 2.6 introduces bytes as a synonym for str, but 
>> I am using Python 2.5 at the moment.
> 
> This is all pedantically true, but it is still a really bad way to
> structure your program?  Did you read the Joel Unicode and character
> sets link?

I *have* in fact read that article a very, very long time ago, but that 
is beside the point.  I am aware of character encoding issues, thanks.

> 
> It was because Python 2 messed up on bytes versus strings versus unicode
> that they had to clean it up in Python 3.  It is also why the SQLite
> wrappers in Python 2 return blobs as the buffer type so that there is no
> accidental mingling of bytes and strings.  (Disclosure: I am the author
> of the APSW wrapper)  SQLite *only* supports Unicode strings.  Other
> databases do support non-Unicode strings, character set conversions and
> all that other complexity.

I require str because that is what cPickle.loads() requires; you cannot 
pass it a buffer.  I need to store pickled objects in the database 
because I am implementing an SQLite backend for the Python 2 shelve module.
-- 
Yang Zhang
http://www.mit.edu/~y_z/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction isolation

2009-05-18 Thread Roger Binns
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Yang Zhang wrote:
> Actually, this is only because Python 3 str is Python 2 unicode.  Python 
> 2 (which I'm currently using, and which I believe most of the world is 
> using) str is a physical string of bytes, not a logical/decoded 
> character string.  Python 2.6 introduces bytes as a synonym for str, but 
> I am using Python 2.5 at the moment.

This is all pedantically true, but it is still a really bad way to
structure your program?  Did you read the Joel Unicode and character
sets link?

It was because Python 2 messed up on bytes versus strings versus unicode
that they had to clean it up in Python 3.  It is also why the SQLite
wrappers in Python 2 return blobs as the buffer type so that there is no
accidental mingling of bytes and strings.  (Disclosure: I am the author
of the APSW wrapper)  SQLite *only* supports Unicode strings.  Other
databases do support non-Unicode strings, character set conversions and
all that other complexity.

It is your code and you can do whatever pleases you.  However the advice
still stands - keep your strings and bytes/blobs separate, and using the
buffer type in Python 2 (and bytes in Python 3) is an excellent way of
doing that.  The wrappers already do this because it is good practise.

Roger
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (GNU/Linux)

iEYEARECAAYFAkoR10IACgkQmOOfHg372QQyjwCfXTwHMBsdAznHfDZ8CeaQIGNH
T64Anj5qvy6MjjL/K08xi5CPY7pxueEi
=Zet9
-END PGP SIGNATURE-
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction isolation

2009-05-18 Thread Yang Zhang
Igor Tandetnik wrote:
> Yang Zhang  wrote:
>> Pavel Ivanov wrote:
>>> BTW, ACID that you mentioned has nothing to do with snapshot
>>> isolation that you want to achieve. AFAIK only Oracle supports this
>>> kind of statement isolation level.
>> Actually, Oracle, Postgresql, SQL Server, Firebird, and others support
>> snapshot isolation.  http://en.wikipedia.org/wiki/Snapshot_isolation
> 
> ... but not between two statements running within _the same 
> transaction_. Isolation level (snapshot or otherwise) describes how two 
> transactions are isolated from each other. In your example, you only 
> have one transaction, so any discussion of isolation levels is moot.

Right, I mean the whole reason why I originally wrote to this list was 
because I was under the (incorrect) impression that I was working with 
two separate transactions.
-- 
Yang Zhang
http://www.mit.edu/~y_z/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction isolation

2009-05-18 Thread Igor Tandetnik
Yang Zhang  wrote:
> Pavel Ivanov wrote:
>> BTW, ACID that you mentioned has nothing to do with snapshot
>> isolation that you want to achieve. AFAIK only Oracle supports this
>> kind of statement isolation level.
>
> Actually, Oracle, Postgresql, SQL Server, Firebird, and others support
> snapshot isolation.  http://en.wikipedia.org/wiki/Snapshot_isolation

... but not between two statements running within _the same 
transaction_. Isolation level (snapshot or otherwise) describes how two 
transactions are isolated from each other. In your example, you only 
have one transaction, so any discussion of isolation levels is moot.

Igor Tandetnik 



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction isolation

2009-05-18 Thread Yang Zhang
Pavel Ivanov wrote:
> BTW, ACID that you mentioned has nothing to do with snapshot isolation
> that you want to achieve. AFAIK only Oracle supports this kind of
> statement isolation level.

Actually, Oracle, Postgresql, SQL Server, Firebird, and others support 
snapshot isolation.  http://en.wikipedia.org/wiki/Snapshot_isolation

And I certainly hope I did not convey that ACID implies snapshot isolation.
-- 
Yang Zhang
http://www.mit.edu/~y_z/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction isolation

2009-05-18 Thread Yang Zhang
Roger Binns wrote:
> Yang Zhang wrote:
>> I copied and pasted this code straight from my actual application, which 
>> uses blobs instead of integers, which I need to convert into strings 
>> (since Python interfaces with blobs using the `buffer` type, not `str`).
> 
> And for very good reason.  Blobs are buckets of bytes and those are not
> strings.  In your example there was no need to do the conversion since
> you can supply buffers as values too.  (In Python 3 the bytes type is used.)

Actually, this is only because Python 3 str is Python 2 unicode.  Python 
2 (which I'm currently using, and which I believe most of the world is 
using) str is a physical string of bytes, not a logical/decoded 
character string.  Python 2.6 introduces bytes as a synonym for str, but 
I am using Python 2.5 at the moment.

 From http://mail.python.org/pipermail/python-list/2009-January/696449.html:

> In Python 2.x, str means "string of bytes". This has been renamed "bytes" 
> in Python 3.
> 
> In Python 2.x, unicode means "string of characters". This has been 
> renamed "str" in Python 3.
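
A quick interactive check of that mapping (the bytes alias only exists
from Python 2.6 on, so not on my 2.5):

>>> type('abc')     # str: a string of bytes in Python 2
<type 'str'>
>>> type(u'abc')    # unicode: a string of characters in Python 2
<type 'unicode'>
>>> bytes is str    # Python 2.6+
True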
-- 
Yang Zhang
http://www.mit.edu/~y_z/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction isolation

2009-05-18 Thread John Elrick
D. Richard Hipp wrote:
> On May 18, 2009, at 1:13 PM, John Elrick wrote:
>
>   
>> John Elrick wrote:
>> 
>>> SNIP
>>>
>>>   
>>>>> I say this because your example implies that the Python wrapper  
>>>>> starts
>>>>> the transaction automatically inside the execute, and I would not  
>>>>> be
>>>>> surprised if it did so BEFORE executing the SQL parameter.
>>>>>
>>>>>
>>>>>   
>>>> The cursor() method that I call on the conn for the SELECT should  
>>>> give
>>>> me a separate transaction.
>>>>
>>>> 
>>> Are you certain the wrapper is behaving that way?  As an experiment I
>>> altered my Ruby example to try to force it to go into an endless loop
>>> and failed (see below).  My experiments seem to confirm that Sqlite  
>>> is
>>> behaving as you expect, perhaps it is the wrapper which is not?
>>>
>>>   
>> Attempting this closer to the metal, it appears I was mistaken.  A
>> select from outside a transaction does indeed have visibility to rows
>> added inside the transaction.  I would not have expected this  
>> either, Yang.
>> 
>
>
> Double-check your findings, please John.
>
> The changes within an SQLite transaction are not visible to other  
> database connections until the transaction commits.  (However, they  
> are visible within the same database connection.)
>
> An exception to the previous paragraph is if you are using shared  
> cache mode and you do a PRAGMA read_uncommitted=ON;
>   

If I am understanding you correctly, we are saying the same things.  I 
have a single connection.  I am doing the following in order:

prepare select
start transaction
step select
replace...
loop until no more in select (in practice never terminates)
commit

The code is in Delphi (custom wrapper, heavily unit tested), so I 
started creating a version truly using bare metal calls, until I noticed 
your caveat "...changes within...not visible to other database 
CONNECTIONS...".

From Yang's original code description, he was attempting this within a
single connection, just as my test code does.

Am I misunderstanding, or is the above Working As Designed?  If I am
misunderstanding and the above should terminate, I will continue creating
a test version using all direct calls to verify the result, as that
would imply a bug in our Delphi wrapper.

Thanks,



John
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction isolation

2009-05-18 Thread Roger Binns

Yang Zhang wrote:
> I copied and pasted this code straight from my actual application, which 
> uses blobs instead of integers, which I need to convert into strings 
> (since Python interfaces with blobs using the `buffer` type, not `str`).

And for very good reason.  Blobs are buckets of bytes and those are not
strings.  In your example there was no need to do the conversion since
you can supply buffers as values too.  (In Python 3 the bytes type is used.)

Converting blobs to str is asking for future problems.  For example
SQLite does not verify that a string value is in fact legal UTF8.  Other
developers have played fast and loose, supplying a str when they really
meant a buffer/blob; the value then ended up in the database as a
string, was invalid UTF8 when read back, and caused an exception.

I also strongly recommend reading this article:

  http://www.joelonsoftware.com/articles/Unicode.html

Roger
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction isolation

2009-05-18 Thread Roger Binns

John Elrick wrote:
> Are you certain the wrapper is behaving that way?  

The pysqlite wrapper (available as a sqlite3 Python module) by default
parses your SQL and starts and ends transactions behind your back, in
order to make it look like SQLite complies with some (IMHO pointless)
part of the Python DBAPI spec.

Fortunately it can be turned off which is what the isolation level stuff
in the docs are all about.
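
For reference, a minimal sketch of turning that off with the standard
sqlite3 module: isolation_level=None puts the connection in autocommit,
so BEGIN and COMMIT only happen where you write them yourself.

import sqlite3

conn = sqlite3.connect(':memory:', isolation_level=None)  # no hidden BEGINs
conn.execute('CREATE TABLE t (x)')
conn.execute('BEGIN')                      # explicit transaction control
conn.execute('INSERT INTO t VALUES (1)')
conn.execute('COMMIT')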

Roger
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction isolation

2009-05-18 Thread D. Richard Hipp

On May 18, 2009, at 1:13 PM, John Elrick wrote:

> John Elrick wrote:
>> SNIP
>>
>>>> I say this because your example implies that the Python wrapper  
>>>> starts
>>>> the transaction automatically inside the execute, and I would not  
>>>> be
>>>> surprised if it did so BEFORE executing the SQL parameter.
>>>>
>>>>
>>> The cursor() method that I call on the conn for the SELECT should  
>>> give
>>> me a separate transaction.
>>>
>>
>> Are you certain the wrapper is behaving that way?  As an experiment I
>> altered my Ruby example to try to force it to go into an endless loop
>> and failed (see below).  My experiments seem to confirm that Sqlite  
>> is
>> behaving as you expect, perhaps it is the wrapper which is not?
>>
>
> Attempting this closer to the metal, it appears I was mistaken.  A
> select from outside a transaction does indeed have visibility to rows
> added inside the transaction.  I would not have expected this  
> either, Yang.


Double-check your findings, please John.

The changes within an SQLite transaction are not visible to other  
database connections until the transaction commits.  (However, they  
are visible within the same database connection.)

An exception to the previous paragraph is if you are using shared  
cache mode and you do a PRAGMA read_uncommitted=ON;
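
A minimal sketch of that rule with the standard Python wrapper (the
file name is just an example and assumed not to exist yet;
isolation_level=None keeps the wrapper from adding transactions of its
own):

import sqlite3

a = sqlite3.connect('/tmp/visibility_demo.db', isolation_level=None)
a.execute('CREATE TABLE t (x)')
b = sqlite3.connect('/tmp/visibility_demo.db', isolation_level=None)

a.execute('BEGIN')
a.execute('INSERT INTO t VALUES (1)')
print a.execute('SELECT count(*) FROM t').fetchone()[0]  # 1: same connection
print b.execute('SELECT count(*) FROM t').fetchone()[0]  # 0: other connection
a.execute('COMMIT')
print b.execute('SELECT count(*) FROM t').fetchone()[0]  # 1: after the commit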

D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction isolation

2009-05-18 Thread John Elrick
John Elrick wrote:
> SNIP
>   
>>> I say this because your example implies that the Python wrapper starts 
>>> the transaction automatically inside the execute, and I would not be 
>>> surprised if it did so BEFORE executing the SQL parameter.
>>> 
>>>   
>> The cursor() method that I call on the conn for the SELECT should give 
>> me a separate transaction.
>> 
>
> Are you certain the wrapper is behaving that way?  As an experiment I 
> altered my Ruby example to try to force it to go into an endless loop 
> and failed (see below).  My experiments seem to confirm that Sqlite is 
> behaving as you expect, perhaps it is the wrapper which is not?
>   

Attempting this closer to the metal, it appears I was mistaken.  A 
select from outside a transaction does indeed have visibility to rows 
added inside the transaction.  I would not have expected this either, Yang.


John
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction isolation

2009-05-18 Thread John Elrick
SNIP
>> I say this because your example implies that the Python wrapper starts 
>> the transaction automatically inside the execute, and I would not be 
>> surprised if it did so BEFORE executing the SQL parameter.
>> 
>
> The cursor() method that I call on the conn for the SELECT should give 
> me a separate transaction.

Are you certain the wrapper is behaving that way?  As an experiment I 
altered my Ruby example to try to force it to go into an endless loop 
and failed (see below).  My experiments seem to confirm that Sqlite is 
behaving as you expect, perhaps it is the wrapper which is not?


John

-
require 'sqlite3'
require 'erb'

db = SQLite3::Database.new(':memory:')

db.execute_batch(ERB.new(<<'eof').result(binding))
begin transaction;
create table shelf (
  key integer not null,
  value integer not null);

insert into shelf values (1, 1);
insert into shelf values (2, 2);
insert into shelf values (3, 3);

commit;
eof


SAFETY = 10
count = 0
db.transaction {
  db.execute('select key from shelf order by rowid').each do |i|
db.execute('replace into shelf (key, value) values(?,?)', i, i)
count += 1
abort if count == SAFETY
  end
}

puts db.execute('select * from shelf')
puts 'done'


1
1
2
2
3
3
1
1
2
2
3
3
done

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction isolation

2009-05-18 Thread Pavel Ivanov
SQLite doesn't support several simultaneous transactions on the same
connection to the database. So in fact your select and insert statements
execute in the same transaction. And even more: your inserts are not
committed until your select is completely finished. So for your task
you should use different connections for select and inserts. But that
will not work anyway, because SQLite doesn't support row-level locking;
it locks the entire database. Thus your select statement on one
connection will block execution of inserts on another connection. So
you should stick with one of the solutions mentioned or switch to
another database engine that will fit you better.

BTW, ACID that you mentioned has nothing to do with snapshot isolation
that you want to achieve. AFAIK only Oracle supports this kind of
statement isolation level.
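
A minimal sketch of the whole-database locking (hypothetical file name,
assumed fresh): the reader's open SELECT holds a shared lock, so the
write on the other connection cannot complete and fails after the
default 5 second busy timeout.

import sqlite3

r = sqlite3.connect('/tmp/lock_demo.db')
w = sqlite3.connect('/tmp/lock_demo.db')
r.execute('CREATE TABLE t (x)')
r.execute('INSERT INTO t VALUES (1)')
r.execute('INSERT INTO t VALUES (2)')
r.commit()

reader = r.execute('SELECT x FROM t')   # stepped lazily: read lock stays held
w.execute('INSERT INTO t VALUES (3)')   # a reserved lock, fine so far
try:
    w.commit()                          # needs an exclusive lock on the file
except sqlite3.OperationalError, e:
    print e                             # "database is locked"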


Pavel

On Mon, May 18, 2009 at 12:41 PM, Yang Zhang  wrote:
> John Elrick wrote:
>> Yang Zhang wrote:
>>> Roger Binns wrote:
>>>
 Yang Zhang wrote:

> for i in (str(row[0]) for row in conn.cursor().execute('SELECT key FROM
> shelf ORDER BY ROWID')):
>
 You are converting the key which is an integer into a string for no
 apparent reason.

>>> I copied and pasted this code straight from my actual application, which
>>> uses blobs instead of integers, which I need to convert into strings
>>> (since Python interfaces with blobs using the `buffer` type, not `str`).
>>>
>>>
 If you also ask for the ROWID you will see that what is happening is a
 new rowid is generated for the replaced row so that if you are iterating
 over the table while modifying it then you effectively have an infinite
 length table.

>>> This is unusual for a RDBMS that claims to provide ACID properties - in
>>> particular, this is not even as strong an isolation level as snapshot
>>> isolation, as a reader transaction is able to see a concurrent writer
>>> transaction's effects.  In fact, this is weaker than the weakest
>>> isolation level in (say) Postgresql, which is READ COMMITTED (in which
>>> any statement is guaranteed to not see the effects of a transaction that
>>> is committed after the query has started execution).
>>
>> As I am not an expert in the Python wrapper, I could be incorrect;
>> however, your code as written appears to be equivalent to the following:
>>
>> begin transaction
>> for select(
>>   insert stuff
>> end
>> commit
>>
>> rather than your intended:
>>
>> s = select(...
>> begin transaction
>> for s...
>>   insert stuff
>> end
>> commit
>>
>> I say this because your example implies that the Python wrapper starts
>> the transaction automatically inside the execute, and I would not be
>> surprised if it did so BEFORE executing the SQL parameter.
>
> The cursor() method that I call on the conn for the SELECT should give
> me a separate transaction.
> --
> Yang Zhang
> http://www.mit.edu/~y_z/
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction isolation

2009-05-18 Thread Yang Zhang
Yang Zhang wrote:
> John Elrick wrote:
>> Yang Zhang wrote:
>>> Roger Binns wrote:
>>>  
 Yang Zhang wrote:

> for i in (str(row[0]) for row in conn.cursor().execute('SELECT key 
> FROM shelf ORDER BY ROWID')):
>   
 You are converting the key which is an integer into a string for no
 apparent reason.
 
>>> I copied and pasted this code straight from my actual application, 
>>> which uses blobs instead of integers, which I need to convert into 
>>> strings (since Python interfaces with blobs using the `buffer` type, 
>>> not `str`).
>>>
>>>  
 If you also ask for the ROWID you will see that what is happening is a
 new rowid is generated for the replaced row so that if you are 
 iterating
 over the table while modifying it then you effectively have an infinite
 length table.
 
>>> This is unusual for a RDBMS that claims to provide ACID properties - 
>>> in particular, this is not even as strong an isolation level as 
>>> snapshot isolation, as a reader transaction is able to see a 
>>> concurrent writer transaction's effects.  In fact, this is weaker 
>>> than the weakest isolation level in (say) Postgresql, which is READ 
>>> COMMITTED (in which any statement is guaranteed to not see the 
>>> effects of a transaction that is committed after the query has 
>>> started execution).
>>
>> As I am not an expert in the Python wrapper, I could be incorrect; 
>> however, your code as written appears to be equivalent to the following:
>>
>> begin transaction
>> for select(
>>   insert stuff
>> end
>> commit
>>
>> rather than your intended:
>>
>> s = select(...
>> begin transaction
>> for s...
>>   insert stuff
>> end
>> commit
>>
>> I say this because your example implies that the Python wrapper starts 
>> the transaction automatically inside the execute, and I would not be 
>> surprised if it did so BEFORE executing the SQL parameter.
> 
> The cursor() method that I call on the conn for the SELECT should give 
> me a separate transaction.

I also tried using separate connections, but that just ends up blocking 
and failing with a timeout on the lock acquisition because it appears 
that SQLite only has full-table locking, and not MVCC/snapshot 
isolation.  Do I need to manually extract out all the data first into 
another store, and then iterate over that to operate on the original database?
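
One shape that could take, sketched against the simplified demo program
rather than the real shelve code: snapshot just the keys into a scratch
table on the same connection, then iterate the snapshot while rewriting
the real table.

conn.execute('CREATE TEMP TABLE keys_snapshot AS SELECT key FROM shelf')
for (k,) in conn.execute('SELECT key FROM keys_snapshot'):
    conn.execute('REPLACE INTO shelf (key, value) VALUES (?, ?)', (k, k))
conn.commit()
conn.execute('DROP TABLE keys_snapshot')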
-- 
Yang Zhang
http://www.mit.edu/~y_z/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction isolation

2009-05-18 Thread Yang Zhang
John Elrick wrote:
> Yang Zhang wrote:
>> Roger Binns wrote:
>>   
>>> Yang Zhang wrote:
>>> 
 for i in (str(row[0]) for row in conn.cursor().execute('SELECT key FROM 
 shelf ORDER BY ROWID')):
   
>>> You are converting the key which is an integer into a string for no
>>> apparent reason.
>>> 
>> I copied and pasted this code straight from my actual application, which 
>> uses blobs instead of integers, which I need to convert into strings 
>> (since Python interfaces with blobs using the `buffer` type, not `str`).
>>
>>   
>>> If you also ask for the ROWID you will see that what is happening is a
>>> new rowid is generated for the replaced row so that if you are iterating
>>> over the table while modifying it then you effectively have an infinite
>>> length table.
>>> 
>> This is unusual for a RDBMS that claims to provide ACID properties - in 
>> particular, this is not even as strong an isolation level as snapshot 
>> isolation, as a reader transaction is able to see a concurrent writer 
>> transaction's effects.  In fact, this is weaker than the weakest 
>> isolation level in (say) Postgresql, which is READ COMMITTED (in which 
>> any statement is guaranteed to not see the effects of a transaction that 
>> is committed after the query has started execution).
> 
> As I am not an expert in the Python wrapper, I could be incorrect; 
> however, your code as written appears to be equivalent to the following:
> 
> begin transaction
> for select(
>   insert stuff
> end
> commit
> 
> rather than your intended:
> 
> s = select(...
> begin transaction
> for s...
>   insert stuff
> end
> commit
> 
> I say this because your example implies that the Python wrapper starts 
> the transaction automatically inside the execute, and I would not be 
> surprised if it did so BEFORE executing the SQL parameter.

The cursor() method that I call on the conn for the SELECT should give 
me a separate transaction.
-- 
Yang Zhang
http://www.mit.edu/~y_z/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction isolation

2009-05-18 Thread John Elrick
Yang Zhang wrote:
> Roger Binns wrote:
>   
>> Yang Zhang wrote:
>> 
>>> for i in (str(row[0]) for row in conn.cursor().execute('SELECT key FROM 
>>> shelf ORDER BY ROWID')):
>>>   
>> You are converting the key which is an integer into a string for no
>> apparent reason.
>> 
>
> I copied and pasted this code straight from my actual application, which 
> uses blobs instead of integers, which I need to convert into strings 
> (since Python interfaces with blobs using the `buffer` type, not `str`).
>
>   
>> If you also ask for the ROWID you will see that what is happening is a
>> new rowid is generated for the replaced row so that if you are iterating
>> over the table while modifying it then you effectively have an infinite
>> length table.
>> 
>
> This is unusual for a RDBMS that claims to provide ACID properties - in 
> particular, this is not even as strong an isolation level as snapshot 
> isolation, as a reader transaction is able to see a concurrent writer 
> transaction's effects.  In fact, this is weaker than the weakest 
> isolation level in (say) Postgresql, which is READ COMMITTED (in which 
> any statement is guaranteed to not see the effects of a transaction that 
> is committed after the query has started execution).

As I am not an expert in the Python wrapper, I could be incorrect; 
however, your code as written appears to be equivalent to the following:

begin transaction
for select(
  insert stuff
end
commit

rather than your intended:

s = select(...
begin transaction
for s...
  insert stuff
end
commit

I say this because your example implies that the Python wrapper starts 
the transaction automatically inside the execute, and I would not be 
surprised if it did so BEFORE executing the SQL parameter.

In other words, you seem to be doing a select from INSIDE the 
transaction, not OUTSIDE.  If I understand correctly, that should indeed 
make the altered table visible to the select.  I believe your intention 
can be expressed with this example in Ruby (corrected to have a primary 
key as Igor noted):


require 'sqlite3'
require 'erb'

db = SQLite3::Database.new(':memory:')

db.execute_batch(ERB.new(<<'eof').result(binding))
begin transaction;
create table shelf (
  key integer primary key autoincrement,
  value integer not null);

insert into shelf values (1, 1);
insert into shelf values (2, 2);
insert into shelf values (3, 3);

commit;
eof

sel = db.execute('select key from shelf order by rowid')

db.transaction {
  sel.each do |i|
db.execute('replace into shelf (key, value) values(?,?)', i, i)
  end
}

puts 'done'


By executing the select outside of the context of the transaction, I 
obtain the desired transaction isolation.

FWIW,


John Elrick
Fenestra Technologies
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction isolation

2009-05-18 Thread Yang Zhang
Roger Binns wrote:
> Yang Zhang wrote:
>> for i in (str(row[0]) for row in conn.cursor().execute('SELECT key FROM 
>> shelf ORDER BY ROWID')):
> 
> You are converting the key which is an integer into a string for no
> apparent reason.

I copied and pasted this code straight from my actual application, which 
uses blobs instead of integers, which I need to convert into strings 
(since Python interfaces with blobs using the `buffer` type, not `str`).

> 
> If you also ask for the ROWID you will see that what is happening is a
> new rowid is generated for the replaced row so that if you are iterating
> over the table while modifying it then you effectively have an infinite
> length table.

This is unusual for a RDBMS that claims to provide ACID properties - in 
particular, this is not even as strong an isolation level as snapshot 
isolation, as a reader transaction is able to see a concurrent writer 
transaction's effects.  In fact, this is weaker than the weakest 
isolation level in (say) Postgresql, which is READ COMMITTED (in which 
any statement is guaranteed to not see the effects of a transaction that 
is committed after the query has started execution).

> 
>> Any way to solve this problem?  
> 
> You currently have the SELECT results being read back one at a time
> (lazily) on each iteration of the for loop.  The simplest solution is to
> read them all in first.  Add .fetchall() after the execute.

Unfortunately in the original application the table is large (many GBs). 
  Any way to solve this problem without first reading everything into 
memory, and without manually creating a second copy of the table?  Is 
there no way to request a stronger isolation level, such as snapshot 
isolation?
-- 
Yang Zhang
http://www.mit.edu/~y_z/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction isolation

2009-05-18 Thread Igor Tandetnik
"Roger Binns"  wrote in
message news:4a1127e1.1040...@rogerbinns.com
> If you also ask for the ROWID you will see that what is happening is a
> new rowid is generated for the replaced row so that if you are
> iterating over the table while modifying it then you effectively have
> an infinite length table.

Note further that, since there are no UNIQUE or PRIMARY KEY constraints 
on any columns, REPLACE INTO is equivalent to INSERT INTO. In other 
words, no rows are being deleted, just new ones added.
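
A minimal sketch of that point, using the schema from the original
post:

import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE shelf '
             '(key INTEGER NOT NULL, value INTEGER NOT NULL)')
conn.execute('INSERT INTO shelf VALUES (1, 1)')
conn.execute('REPLACE INTO shelf VALUES (1, 1)')  # nothing can conflict
print conn.execute('SELECT rowid, key, value FROM shelf').fetchall()
# [(1, 1, 1), (2, 1, 1)] -- REPLACE behaved exactly like INSERT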

Igor Tandetnik 



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction isolation

2009-05-18 Thread Roger Binns

Yang Zhang wrote:
> for i in (str(row[0]) for row in conn.cursor().execute('SELECT key FROM 
> shelf ORDER BY ROWID')):

You are converting the key which is an integer into a string for no
apparent reason.

If you also ask for the ROWID you will see that what is happening is a
new rowid is generated for the replaced row so that if you are iterating
over the table while modifying it then you effectively have an infinite
length table.

> Any way to solve this problem?  

You currently have the SELECT results being read back one at a time
(lazily) on each iteration of the for loop.  The simplest solution is to
read them all in first.  Add .fetchall() after the execute.
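
Applied to the loop from the original post, that is roughly (a sketch
of the suggested change only):

for i in (str(row[0]) for row in conn.cursor().execute(
        'SELECT key FROM shelf ORDER BY ROWID').fetchall()):
    conn.execute('REPLACE INTO shelf (key, value) VALUES (?,?)', (i, i))
    conn.commit()
    print i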

Roger
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Transaction isolation

2009-05-18 Thread Yang Zhang
Hi, for some reason the following program will loop forever:

#!/usr/bin/env python

import sqlite3

conn = sqlite3.connect(':memory:')
conn.text_factory = bytes
conn.execute('CREATE TABLE shelf (key INTEGER NOT NULL, value INTEGER NOT NULL)')
for i in xrange(3):
   conn.execute('INSERT INTO shelf (key, value) VALUES (?,?)', (i, i))
conn.commit()

for i in (str(row[0]) for row in conn.cursor().execute('SELECT key FROM shelf ORDER BY ROWID')):
   conn.execute('REPLACE INTO shelf (key, value) VALUES (?,?)', (i, i))
   conn.commit()
   print i

Anybody understand why?  I thought the REPLACE and SELECT transactions 
should be (snapshot) isolated from each other, so why does the SELECT 
keep getting updated rows from the REPLACE?  Any way to solve this 
problem?  So far all I've found are commands that can change the 
connection-level isolation/locking, but not the cursor-level 
(transaction-level) isolation.

Thanks in advance for any answers!
-- 
Yang Zhang
http://www.mit.edu/~y_z/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

