Re: [sqlite] Feature request: extend the IN operator

2011-08-07 Thread Edzard Pasma
On 7 Aug 2011, at 13:17, Alexey Pechnikov wrote:

> Is it possible to support a construction like
>
> where x IN "1 2 3"
>
> There are a lot of situations where it is very useful to store a list of
> identifiers in a table field.
> I know about the intarray virtual table, but it needs
> constructor/destructor calls (and
> produces segfaults with the authorizer function).
>
> Is anybody else interested in this functionality too?
>
> --  
> Best regards, Alexey Pechnikov.
> http://pechnikov.tel/

Hello,

I would be interested in this functionality too, or, more generally, in
an inverse function for group_concat. Earlier, group_split was proposed.
For SQLite I imagine this becoming a virtual table. I created one,
through APSW, with the following schema:

 group_split (line, words)

(When I use .schema group_split in the APSW shell, it only displays
the module name.)

The dialog below shows its usage. I'm not completely happy with it
yet. The predefined column names (line and words) seem artificial. It
lacks an option to specify a separator character. I imagine a third
column, but that would make the definition of the filter much more
difficult. A built-in solution would be welcome.

Best regards,  Edzard Pasma.

 SQLite version 3.7.7.1 (APSW 3.7.7.1-r1)
 Enter ".help" for instructions
 Enter SQL statements terminated with a ";"
 sqlite> .schema group_split
 CREATE VIRTUAL TABLE group_split USING mymod();
 sqlite> create table t1 (grp, elem);
 sqlite> insert into t1 values ('g1', 'aap');
 sqlite> insert into t1 values ('g1', 'noot');
 sqlite> insert into t1 values ('g1', 'mies');
 sqlite> insert into t1 values ('g2', 'wim');
 sqlite> insert into t1 values ('g2', 'zus');
 sqlite> insert into t1 values ('g2', 'jet');
 sqlite> select grp, group_concat (elem) from t1 group by grp;
 g1|aap,noot,mies
 g2|wim,zus,jet
 sqlite> select grp, word
 from (select grp, group_concat (elem) elems from t1 group by grp)
 join group_split on line = elems;
 g1|aap
 g1|noot
 g1|mies
 g2|wim
 g2|zus
 g2|jet
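
Lacking a built-in group_split, the inverse of group_concat can at least be approximated in application code. A minimal sketch using Python's stdlib sqlite3 module (table and column names follow the dialog above; the split back into rows happens in Python, not in SQL):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    create table t1 (grp, elem);
    insert into t1 values ('g1', 'aap'), ('g1', 'noot'), ('g1', 'mies'),
                          ('g2', 'wim'), ('g2', 'zus'), ('g2', 'jet');
""")

# Concatenate per group in SQL, then split back into (grp, word) pairs.
rows = conn.execute(
    "select grp, group_concat(elem) from t1 group by grp order by grp").fetchall()
pairs = [(grp, word) for grp, elems in rows for word in elems.split(',')]
print(pairs)
```

This only sidesteps the virtual table; it does not help when the split rows are needed inside a larger SQL statement, which is exactly what group_split would provide.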

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Big difference in performance between Python and gcc

2011-06-02 Thread Edzard Pasma
On 2 Jun 2011, at 23:17, Simon Slavin wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
>
> On 2 Jun 2011, at 7:24pm, Roger Binns wrote:
>
>> (Incidentally I am the author of a "competing" Python SQLite  
>> binding and
>> hence know exactly which SQLite API calls result from bits of  
>> Python hence
>> being very pedantic about getting these tests the same.)
>
> How does your own Python binding perform ?  Is it fast like the  
> other Python binding, or slow like the GCC-compiled C code the OP  
> reported ?  Do you understand the strange result the OP reported ?
>
> Simon.

If I may answer the first question: APSW is in fact 31.7 % faster than
the default Python wrapper, using the version below. It is, however,
important to drop the table before each new test run (I changed
'CREATE IF NOT EXISTS' to 'DROP IF EXISTS' in both tests).

import apsw
import timeit

conn = apsw.Connection('test1.sqlite')
c = conn.cursor()
c.execute('''DROP TABLE IF EXISTS values_log;
             CREATE TABLE values_log (acquisition INTEGER, chunk INTEGER, acq_data BLOB);
             CREATE INDEX IF NOT EXISTS values_step ON values_log (acquisition, chunk);''')

def f():
    data = "01234567" * 1024
    with conn:
        for i in range(0, 1):
            conn.cursor().execute("INSERT INTO values_log VALUES (?,?,?)", (1, i, data))
            if not i % 1:
                print i

ret = timeit.timeit(f, 'gc.enable()', number=1)
print ret
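
For comparison, the same measurement can be set up with the stdlib sqlite3 module. A sketch under assumed parameters (the row count of 1000 here is arbitrary, not taken from the original test, and an in-memory database is used instead of test1.sqlite):

```python
import sqlite3
import timeit

conn = sqlite3.connect(':memory:')
conn.executescript('''DROP TABLE IF EXISTS values_log;
                      CREATE TABLE values_log (acquisition INTEGER, chunk INTEGER, acq_data BLOB);
                      CREATE INDEX IF NOT EXISTS values_step ON values_log (acquisition, chunk);''')

def f():
    data = b"01234567" * 1024        # an 8 KB blob per row
    with conn:                       # one transaction around all inserts
        for i in range(1000):
            conn.execute("INSERT INTO values_log VALUES (?,?,?)", (1, i, data))

print(timeit.timeit(f, number=1))    # elapsed seconds for 1000 inserts
```

Wrapping all inserts in a single transaction, as both versions do, is what keeps the per-row cost low; without it every INSERT pays for its own commit.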




Re: [sqlite] completion of sql words

2011-04-09 Thread Edzard Pasma

On 9 Apr 2011, at 21:29, Roger Binns wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 04/08/2011 02:40 PM, Edzard Pasma wrote:
>> Just found that the APSW shell does tabcomplete (and even for
>> tablenames). It is described here:
>>
>>   http://apidoc.apsw.googlecode.com/hg/shell.html
>
> (I'm the APSW author). It also does output colouring.  Makes it a  
> lot easier
> to distinguish numbers from strings of digits etc.  I'd also submit  
> that the
> .dump output is a thing of beauty :-)
>
> It also completes database names, column names, builtin SQLite  
> function
> names, collations, etc(*).  However the completion is not context  
> sensitive
> (ie using a grammar) so it is really just completing a sequence of
> independent words.  This is just fine most of the time and in many  
> cases
> almost any word is allowable at almost any point anyway.  I did  
> look into
> trying to hack the grammar (which really is a bunch of grammar rules
> interspersed with C code) or using the railroad diagram input in  
> order to
> make a SQLite SQL parser.  In the case of completion that is even more
> difficult since it would have to tolerate an incomplete string.   
> Maybe one
> day...
>
> Pragmas are also completed and they are context dependent so if you do
> "pragma journal_mode =" it will show the journal modes available.
>
> Unfortunately I do have hard coded tables of many things since it  
> isn't
> possible to find them by asking SQLite at runtime.  Some such as  
> the list of
> registered functions and how many arguments they take can only be  
> provided
> by code changes to SQLite itself.  It would be nice if there were  
> virtual
> tables allowing dynamic introspection of SQLite.
>
> (*) In a fit of OCD it even pays attention to what case you are  
> using and
> does completions in the same case even if you mix cases in the same  
> word!
>
> Roger
>
I just made an alias in the shell (Unix-) for this tool:

alias apsw='python -c "import apsw;apsw.main()"'

That's worth it. So far I had only used the shell programmatically
from Python, another great feature, particularly for its data-loading
capabilities.

I also reported an issue that has nothing to do with tab completion:
the tool does not quit, as the SQLite shell does, when queries are
passed as parameters. The SQLite behaviour is probably more
convenient, and in any case I understand that all SQLite examples
should work correspondingly.

Thanks a lot.


Re: [sqlite] completion of sql words

2011-04-09 Thread Edzard Pasma
Here is the unbroken link:

http://apidoc.apsw.googlecode.com/hg/shell.html



Re: [sqlite] completion of sql words

2011-04-08 Thread Edzard Pasma
On 11 Feb 2011, at 23:30, Simon Slavin wrote:

>
> On 11 Feb 2011, at 7:19pm, prad wrote:
>
>> does the sqlite3 interface have completion of sql key words?
>> in postgresql you can type SEL and press tab to complete.
>> is there such a thing for sqlite3?
>
> sqlite3 doesn't have an interface.  It is only a programming API.
>
> Although you can download a command-line tool for sqlite3 from the  
> sqlite3 web site, it's provided just for convenience and many  
> people don't use it (or even know about it).
>
> Simon.

Just found that the APSW shell does tabcomplete (and even for  
tablenames). It is described here: http://apidoc.apsw.googlecode.com/ 
hg/shell.html. Wow.


Re: [sqlite] Permanently disabling AutoCommit

2011-03-24 Thread Edzard Pasma
On 24 Mar 2011, at 13:28, Igor Tandetnik wrote:

> Jaco Breitenbach <jjbreitenb...@gmail.com> wrote:
>> Is there an easy way to permanently disable the default AutoCommit  
>> using the
>> C API?  The nature of my application is such that I don't know  
>> exactly where
>> a new transaction begins.  Records are inserted into the database,  
>> and at
>> the end an explicit COMMIT is requested.
>
> Can't you just execute an explicit BEGIN at the start, just as you  
> execute COMMIT at the end?
> -- 
> Igor Tandetnik
Hi,

There are cases where you cannot expect explicit BEGIN statements, as
in the Python sqlite wrapper. The common DB interface specification
requires an automatic BEGIN here. The current wrapper injects one
before the first DML statement that it encounters in each transaction.
It uses strcmp to determine the statement type, which works
satisfactorily though it is not foolproof.

The most straightforward solution is to call sqlite3_get_autocommit
before each execute step and issue a BEGIN first if it returns true.
The disadvantage is that queries then also start a transaction. But it
is straightforward, and you do not need to know about statement types
and their effect on transactions.
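
In Python the same state is exposed as Connection.in_transaction (which mirrors sqlite3_get_autocommit). A sketch of the suggested approach; the wrapper function and its rule for skipping transaction-control statements are illustrative, not the actual wrapper's code:

```python
import sqlite3

# isolation_level=None gives manual transaction control (no implicit BEGIN).
conn = sqlite3.connect(':memory:', isolation_level=None)

def execute(sql, params=()):
    # If the connection is in autocommit mode, start a transaction first,
    # regardless of the statement type (the simple rule suggested above).
    first = sql.split()[0].upper()
    if not conn.in_transaction and first not in ('BEGIN', 'COMMIT', 'ROLLBACK'):
        conn.execute('BEGIN')
    return conn.execute(sql, params)

execute("CREATE TABLE t (x)")
execute("INSERT INTO t VALUES (?)", (1,))
assert conn.in_transaction          # a transaction was opened implicitly
conn.execute("COMMIT")
assert not conn.in_transaction
```

As noted above, the price of this rule is that a plain SELECT also opens a transaction, which then holds a read lock until the explicit COMMIT.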

Edzard Pasma. 
  


Re: [sqlite] BUG (3.7.5): identifier quotation characters get part of column names for certain statements

2011-02-11 Thread Edzard Pasma
On 11 Feb 2011, at 10:41, Nißl Reinhard wrote:

> Hi Jay,
>
> I understand that it is not possible to define what the correct  
> output is.
>
> But what I am asking for is a simple convention which doesn't  
> behave more badly in general (i. e. it's still database behavior)  
> but much better in my case and most likely for many cases of other  
> users too.
>
> The convention is to use the dequoted column identifier as column  
> name in case the column expression translates to a single identifier.
>
> You have to dequote the identifier anyway to find the column in the  
> subselect to which this expression refers to.
>
> So all I'm asking for is to change the default database behavior to  
> yield more obvious or consistent default column names. In case the  
> database behavior doesn't fit, one has to use the AS clause anyway.
>
> For the below mentioned join, sqlite3 currently behaves like that:
>
>   select [x].[a], [y].[a] from x join x y on x.a = y.a;
>
>   a|a
>   1|1
>
> Hence, it simply uses the column names. And the next statement does  
> that too:
>
>   select [x].[a] from x;
>
>   a
>   1
>
> So in my opinion the default behavior of the database should be to  
> yield the same column name even for this statement:
>
>   select [x].[a] from (select a from x) x;
>
> But it currently returns:
>
>   [x].[a]
>   1
>
> I'd like to create a patch which changes the behavior in that way,  
> but I'm not that used to the sqlite3 internals. From a quick glance  
> at the source, I think it has something to do with TK_COLUMN and  
> TK_VARIABLE. It would be nice if you could give me a pointer where  
> to place the change in the source code.
>
> Bye.
> --
> Reinhard Nißl

Hi,

The engine already determines unquoted column names when creating
views. This also applies to inline views. For instance:

sqlite> .mode line
sqlite> select * from (select [x].[a] from (select a from x) x);
 a = 123

This also cuts off the unnecessary table alias.

Could column descriptions be changed to be like this by default?
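
The default names discussed here are what application code sees in result metadata. A small sketch using Python's stdlib sqlite3 (the naming rules have varied between SQLite versions, so the printed name is shown but not asserted):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("create table x (a)")
conn.execute("insert into x values (123)")

# The default column name chosen by the engine appears in cursor.description.
cur = conn.execute("select [x].[a] from (select a from x) x")
print(cur.description[0][0])   # '[x].[a]' on 3.7.5 per the report; may differ today
print(cur.fetchone()[0])
```

Code that keys result dictionaries on these names is exactly what breaks when the default differs between a plain table and a subselect, which is the motivation for the requested change.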

Edzard Pasma




Re: [sqlite] specific behavior of busy_timeout with different transaction modes

2010-11-10 Thread Edzard Pasma

On 9 Nov 2010, at 18:22, Peter Pawlowski wrote:

> While debugging an issue with using SQLite with a Rails  
> application, we
> discovered that the behavior of SQLite when setting the
> 'sqlite3_busy_timeout' option was unexpected.
>
> After reading and grokking the SQLite documentation about locking (or
> doing our best to, at least!) it seems that this is the expected
> behavior, but we'd like to confirm.
>
> Basically, we were surprised that in a specific case, an INSERT
> statement will fail immediately and not busy wait, even though a busy
> handler has been set.
>
> For example, given two sqlite3 shell processes, A and B:
>
>A1. begin immediate;
> # => locks the db
>B1. .timeout 5000
>B1. select * from foo;
> # => succeeds
>B2. insert into foo values ("bar");
> # => hangs for 5 seconds, then fails with SQLITE_BUSY
>B3. begin deferred transaction;
>B4. insert into foo values ("bar");
> # => hangs for 5 seconds, then fails with SQLITE_BUSY
>B5. rollback;
>B6. begin deferred transaction;
>B7. select * from foo;
> # => succeeds
>B8. insert into foo values ("bar");
> # => fails immediately with SQLITE_BUSY
>B9. rollback;
>
> The surprising behavior is that step B8 fails immediately and does not
> retry for 5 seconds, which is what we were expecting since the
> busy_timeout is set in this case.
>
> Can someone confirm that this is the intended behavior?
>
> If so, we'd suggest some more documentation about how SQLite behaves
> when a busy handler is set. Of course we'd be happy to help by
> contributing content for the documentation.
>
> If you're interested: The specific consequence of this behavior is  
> that
> our Rails application with multiple processes fails with SQLITE_BUSY
> whenever concurrent write requests are made to the database. The Rails
> code allows setting busy_timeout, however it doesn't end up having any
> real effect because all SQLite interactions made by ActiveRecord (the
> Rails database abstraction) end up being complex transactions of type
> 'deferred' (the type is set in the sqlite3-ruby gem). This is probably
> one reason why the Rails folks seem to have the opinion that SQLite is
> not suitable for production systems.
>
> Once I confirm that this is the expected behavior of SQLite, we will
> attempt to address this issue with the sqlite3-ruby or Rails folks,
> since this would clearly need to be addressed at the application  
> level.
>
> thanks for your help!
>
> Peter

The immediate busy error is indeed intended behaviour, or rather a
consequence of the technical design.

It took me a while to find where it is documented. It is not in
"Locking and Concurrency in SQLite Version 3" but in the "C/C++ API
Reference". And there it is not in the description of
sqlite3_busy_timeout but under sqlite3_busy_handler, where it is
explained clearly enough.
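
The B8 scenario above can be reproduced in-process with two connections. A sketch using Python's stdlib sqlite3 (timeout=0 disables waiting, but per the documentation the busy handler would not be invoked for this promotion deadlock anyway; paths and names are illustrative):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'demo.db')
a = sqlite3.connect(path, timeout=0, isolation_level=None)
b = sqlite3.connect(path, timeout=0, isolation_level=None)
a.execute("create table foo (x)")

a.execute("begin immediate")                      # A: takes the RESERVED (write) lock
b.execute("begin deferred")
b.execute("select count(*) from foo").fetchone()  # B: read succeeds, holds SHARED
try:
    b.execute("insert into foo values ('bar')")   # B: promotion to a write lock
    failed = False
except sqlite3.OperationalError as e:
    failed = True
    print(e)                                      # database is locked
```

Waiting cannot help here: A can only commit after B releases its shared lock, and B can only write after A releases the reserved lock, so SQLite reports BUSY at once.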

What has surprised me is that this deadlock is unchanged when
accessing a database in WAL mode. That too is intended behaviour, as
was explained in an earlier discussion. But I don't know whether it
will remain like this in future versions.

Best Regards, Edzard.



Re: [sqlite] WAL still detects deadlock

2010-07-31 Thread Edzard Pasma

On 31 Jul 2010, at 18:30, Dan Kennedy wrote:
>
> On Jul 31, 2010, at 10:47 PM, Edzard Pasma wrote:
>
>> On 31 Jul 2010, at 14:16, Dan Kennedy wrote:
>>
>>>
>>> On Jul 31, 2010, at 12:02 AM, Edzard Pasma wrote:
>>>
>>>> Hello,
>>>>
>>>> The following scenario raises a BUSY error immediately
>>>>
>>>> process A. keeps a write lock
>>>> process B keeps a read-lock and tries to promote this to a write-
>>>> lock
>>>>
>>>> This is the traditional SQLite deadlock situation, detected by the
>>>> engine as soon as possible.
>>>>
>>>> In my test this still occurs unchanged after switching to WAL.
>>>> Should
>>>> process B not be allowed to carry on now? Using WAL, A. can still
>>>> commit while B. keeps a read-lock.
>>>
>>> A can commit while B has the read-lock. But there is no
>>> point in B using the busy-handler to wait until A has
>>> committed, because after A has committed B will be reading
>>> from an out-of-date snapshot. And you cannot update the
>>> database if you are reading from an out-of-date snapshot.
>>>
>> This is a pity. From the operational point of view, there is no read-
>> consistency required among the original query and the following
>> updates or further queries.
>
> So why do them within the same transaction?

That is not on purpose. It may occur if one uses more than one cursor  
with a single connection, even in auto-commit mode. I thought this  
deadlock was fairly common with SQLite.



Re: [sqlite] WAL still detects deadlock

2010-07-31 Thread Edzard Pasma
On 31 Jul 2010, at 14:16, Dan Kennedy wrote:

>
> On Jul 31, 2010, at 12:02 AM, Edzard Pasma wrote:
>
>> Hello,
>>
>> The following scenario raises a BUSY error immediately
>>
>> process A. keeps a write lock
>> process B keeps a read-lock and tries to promote this to a write-lock
>>
>> This is the traditional SQLite deadlock situation, detected by the
>> engine as soon as possible.
>>
>> In my test this still occurs unchanged after switching to WAL. Should
>> process B not be allowed to carry on now? Using WAL, A. can still
>> commit while B. keeps a read-lock.
>
> A can commit while B has the read-lock. But there is no
> point in B using the busy-handler to wait until A has
> committed, because after A has committed B will be reading
> from an out-of-date snapshot. And you cannot update the
> database if you are reading from an out-of-date snapshot.
>
This is a pity. From the operational point of view, no read
consistency is required between the original query and the following
updates or further queries.


[sqlite] WAL still detects deadlock

2010-07-31 Thread Edzard Pasma
Hello,

The following scenario raises a BUSY error immediately

process A. keeps a write lock
process B keeps a read-lock and tries to promote this to a write-lock

This is the traditional SQLite deadlock situation, detected by the  
engine as soon as possible.

In my test this still occurs unchanged after switching to WAL. Should  
process B not be allowed to carry on now? Using WAL, A. can still  
commit while B. keeps a read-lock.
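
The scenario can be sketched in Python's stdlib sqlite3 to verify that the BUSY error still occurs in WAL mode (timeout=0 so there is no waiting; paths and names are illustrative):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'wal.db')
a = sqlite3.connect(path, timeout=0, isolation_level=None)
a.execute("pragma journal_mode=wal")
a.execute("create table t (x)")
b = sqlite3.connect(path, timeout=0, isolation_level=None)

a.execute("begin immediate")                    # A: holds the WAL write lock
b.execute("begin deferred")
b.execute("select count(*) from t").fetchone()  # B: reads a snapshot, holds a read lock
try:
    b.execute("insert into t values (1)")       # B: cannot take the write lock
    busy = False
except sqlite3.OperationalError:
    busy = True
```

In WAL mode A could indeed commit while B keeps reading, but B's attempt to write from its (soon out-of-date) snapshot is still refused.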

Thanks for this version, -- Edzard 
  


Re: [sqlite] Index not used in simple alias-like views

2010-05-24 Thread Edzard Pasma

On 24 May 2010, at 17:59, Dan Kennedy wrote:

>
> On May 24, 2010, at 6:14 PM, Edzard Pasma wrote:
>
>> Hello, I found a blind spot of the query optimizer. This appears when
>> a table is accessed as a view. I think the problem can be phrased as
>> "the optimizer failing to push an outer join predicate into a
>> view"... This simply means that the following example does not use
>> the existing index:
>>
>> create table t (pk integer primary key);
>> create table t2 (fk);
>> create view v as select * from t;
>> select * from t2 left outer join v on pk = fk;
>
> Not using the index because of point 3 on this list I would say:
>
>http://www.sqlite.org/optoverview.html#flattening
>
Sorry for considering this a shortcoming. By the way, this was
observed with version 3.6.23.1. The recently proposed optimizer
changes appear to relieve this situation, as it is a case for an
automatic index. Cheers.


[sqlite] Index not used in simple alias-like views

2010-05-24 Thread Edzard Pasma
Hello, I found a blind spot in the query optimizer. It appears when
a table is accessed through a view. I think the problem can be phrased
as "the optimizer failing to push an outer join predicate into a
view"... This simply means that the following example does not use
the existing index:

create table t (pk integer primary key);
create table t2 (fk);
create view v as select * from t;
select * from t2 left outer join v on pk = fk;

I hope I may call this a shortcoming of the optimizer and that it
will be changed in some future version. Best regards, Edzard.
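
The chosen plan can be inspected with EXPLAIN QUERY PLAN. A sketch in Python's stdlib sqlite3 (note that SQLite versions newer than the one reported here may flatten this view or build an automatic index, so the plan is printed rather than asserted):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    create table t (pk integer primary key);
    create table t2 (fk);
    create view v as select * from t;
""")

# Show how the engine plans the outer join against the view.
for row in conn.execute(
        "explain query plan select * from t2 left outer join v on pk = fk"):
    print(row)
```

If the predicate is pushed into the view, the plan shows a search using the integer primary key; otherwise it shows a full scan of v for each row of t2.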



Re: [sqlite] Please help test the latest query planner changes

2010-04-18 Thread Edzard Pasma

On 16 Apr 2010, at 19:51, D. Richard Hipp wrote:

>
> On Apr 16, 2010, at 11:52 AM, Max Vlasov wrote:
>>
>> SELECT StihiAuthors.Id As AuthId, StihiAuthCandidates.Date as Date,
>> StihiAuthCandidates.Num as Num FROM StihiAuthors
>>  INNER JOIN StihiAuthCandidates ON
>> StihiAuthors.Id=StihiAuthCandidates.AuthorId
>>  LEFT JOIN StihiPoems ON Date=StihiPoems.PoemDate AND
>> Num=StihiPoems.PoemNum
>>  WHERE StihiAuthors.IsFav=1 AND StihiPoems.rowid Is Null
>>
>> sqlite3-amalgamation-3_6_23_1.dll
>>  reported 747 milliseconds returning 22,642 rows
>>
>> sqlite3-20100415132938.dll
>>  reported 563 milliseconds
>
> Thanks for the report!
>
> It is reassuring to know that the recent changes actually did some  
> good!
>
>
> D. Richard Hipp
> d...@hwaci.com

There are, however, cases where the change makes things slower. For
instance, when a program only fetches the first row of a query.
Possibly this is also the case in the above example if you add a
'LIMIT 1' clause. In my test it was three times slower, comparing the
two SQLite versions. But this likely also depends on the test data
and indexes. The good news is that I observed a more spectacular
improvement when fetching all rows: four or five times faster!

Another query that appears slower is a mega-union like this:

SELECT ...
UNION ALL
SELECT ...
UNION ALL
SELECT ...
UNION ALL
SELECT ...

where each select includes joins and where only one of the branches
of the union yields a row. It looks like the join in each branch
causes some extra overhead in the new version. This is at execution
time, using pre-compiled SQL statements. The difference is relatively
big, a factor of 10 or so.

Though the differences are relatively big, they are absolutely in the
order of milliseconds, so this is likely not an issue. Still, it may
be in particular applications, when queries are repeated very often.
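
The first-row case can be measured separately from the full fetch. A rough sketch with stdlib sqlite3 and timeit; the tables, sizes, and query here are arbitrary stand-ins for the reported workload, so only the shape of the measurement carries over:

```python
import sqlite3
import timeit

conn = sqlite3.connect(':memory:')
conn.execute("create table a (id integer primary key, v)")
conn.execute("create table b (a_id, w)")
conn.executemany("insert into a values (?, ?)", ((i, i % 7) for i in range(5000)))
conn.executemany("insert into b values (?, ?)", ((i, i) for i in range(5000)))

q = "select * from a join b on b.a_id = a.id where a.v = 3"

# Time fetching just the first row versus draining the whole result set.
first_row = timeit.timeit(lambda: conn.execute(q).fetchone(), number=50)
all_rows = timeit.timeit(lambda: conn.execute(q).fetchall(), number=50)
print(first_row, all_rows)
```

A planner that optimizes for total throughput can pick a plan with a higher startup cost, which is exactly why a program that stops after the first row may see a regression while a full fetch sees an improvement.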



Re: [sqlite] Reduce database file size

2010-04-12 Thread Edzard Pasma
If the subject is transferring data from a web server to a central
database, I would like to point to a feature of an open source package
that I published myself. It is only a side feature, resulting from the
general design, but it was added with just this in mind. This feature,
'Connecting over the internet', is described in
http://packages.python.org/sqmediumlite/#5.4 (this is only for Python
users).

On 12 Apr 2010, at 6:57, Roger Binns wrote:

> Nikolaus Rath wrote:
>> I see. I gues I got confused by the help that refers to the  
>> parameter as
>> 'cmd':
>
> The help shows that a list of TABLE is taken in the same syntax as
> SQLite's shell uses.  Also note that the help is formatted for
> documenting interactive usage rather than API usage.
>
>> That works, thanks! But why is it *that* slow?
>
> Because it is written in Python and SQLite's is in C.  I've made no
> effort to optimise the Python - please send me a sample database and
> I'll see what can be done.
>
> Additionally the Python supports more functionality which does have an
> effect on speed.  For example you can choose what encoding is used.
>
> Roger



Re: [sqlite] Passing Value from one query to another

2010-02-06 Thread Edzard Pasma

On 6 Feb 2010, at 18:03, Eric Bohlman wrote:

> BareFeet wrote:
>> In general, I think it's much better (performance and logic) to do
>> all you can in SQL, without passing values out of SQL results, into
>> your non-SQL code, then re-injecting back into another SQL query etc.
>
> With SQLite, that's not really going to make a difference. Since  
> it's an
> embedded library rather than an out-of-process server,
> "passing/injecting" doesn't require marshalling or network  
> overhead, so
> there's no performance hit. In terms of logic, since SQLite doesn't  
> have
> stored procedures the idea of centralizing database interactions into
> the DB itself rather than distributing the logic between applications
> isn't applicable here.


If you are in favour of centralizing processing in the database, an
SQLite temporary table could serve as a vehicle for passing results
from one query to another. I hope this needs little clarification:
the first query inserts the parameter into the temp table; the other
reads it back using a sub-query.
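
A minimal sketch of the temp-table approach with stdlib sqlite3 (all table and parameter names here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    create table data (x);
    insert into data values (5), (15), (25);
    create temp table params (name text primary key, value);
""")

# First query: store the computed parameter in the temp table.
conn.execute("insert into params values ('threshold', (select avg(x) from data))")

# Second query: read it back via a sub-query.
rows = conn.execute(
    "select x from data "
    "where x > (select value from params where name = 'threshold')").fetchall()
print(rows)   # the rows above the average: [(25,)]
```

Temp tables are private to the connection, so concurrent connections each get their own parameter storage without interfering.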



Re: [sqlite] Reasons for SQLITE_CANTOPEN

2010-01-11 Thread Edzard Pasma

On 11 Jan 2010, at 1:15, Nikolaus Rath wrote:
> Edzard Pasma <pasm...@concepts.nl> writes:
>> On 10 Jan 2010, at 19:25, Nikolaus Rath wrote:
>>
>>> Filip Navara <filip.nav...@gmail.com> writes:
>>>>> I am accessing the same database from several threads, each  
>>>>> using a
>>>>> separate connection. Shared cache is not enabled.
>>>>>
>>>>> When my program has been running for a while, I suddenly get an
>>>>> SQLITE_CANTOPEN error when I'm trying to open a database
>>>>> connection with
>>>>> a new thread. The database file, however, is definitively present
>>>>> and
>>>>> accessible.
>>>> Hi!
>>>>
>>>> Does "pragma journal_mode=truncate;" make any difference?
>>>> Is this on Windows?
>>>> Do you have TortoiseSVN installed on the same system?
>>>
>>> No to all questions, I'm afraid. Seems that my problem is a
>>> different one.
>>
>> Does your application attach at least 20 further databases within
>> each of the 15 connections?
>> Does it open at least 250 files any other way?
>>
>> If any yes, then you have too many open files!
>
> No, there is only one database for each connection. The idea with the
> open files may still be a good one though, I will look into that. But
> why should the limit be 250? On this system I have an ulimit of 1024
> open fds, and I guess that on other systems it would at least still be
> some power of 2.

The number of 250 was just a rough indication, taking into account
already open database connections and other I/O channels. I reasoned
that if you have that many open files and get this particular error,
it is almost certain to be caused by the open-files limit. On my
system that limit is 256; I had not expected it to vary so much, up
to 8k on some enterprise Linux distributions.

I hope strace (see Roger Binns' post) will help you further. lsof may
also help.

regards, Edzard Pasma
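
On Unix the effective limit can be checked from within the process itself. A small sketch using Python's resource module (the /proc listing is Linux-specific; elsewhere lsof -p <pid> gives the same information):

```python
import os
import resource

# The soft limit is what actually triggers "too many open files".
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open-files limit: soft=%d hard=%d" % (soft, hard))

# Count file descriptors currently open by this process (Linux only).
try:
    nfds = len(os.listdir('/proc/self/fd'))
    print("descriptors in use:", nfds)
except OSError:
    pass   # /proc is not available on this platform
```

Comparing the descriptors in use against the soft limit at the moment the SQLITE_CANTOPEN appears confirms or rules out this cause quickly.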



Re: [sqlite] Reasons for SQLITE_CANTOPEN

2010-01-10 Thread Edzard Pasma

On 10 Jan 2010, at 19:25, Nikolaus Rath wrote:

> Filip Navara <filip.nav...@gmail.com> writes:
>>> I am accessing the same database from several threads, each using a
>>> separate connection. Shared cache is not enabled.
>>>
>>> When my program has been running for a while, I suddenly get an
>>> SQLITE_CANTOPEN error when I'm trying to open a database  
>>> connection with
>>> a new thread. The database file, however, is definitively present  
>>> and
>>> accessible.
>> Hi!
>>
>> Does "pragma journal_mode=truncate;" make any difference?
>> Is this on Windows?
>> Do you have TortoiseSVN installed on the same system?
>
> No to all questions, I'm afraid. Seems that my problem is a  
> different one.
>
>
>-Nikolaus

Does your application attach at least 20 further databases within  
each of the 15 connections?
Does it open at least 250 files any other way?

If any yes, then you have too many open files!

Regards, Edzard Pasma




Re: [sqlite] Rollback over pending queries

2009-10-22 Thread Edzard Pasma
--- edz...@volcanomail.com wrote:

> I wonder if an automatic rollback, as described in
> http://www.sqlite.org/c3ref/busy_handler.html, is more powerful than a 
> rollback programmed in SQL. Particularly if it is able to roll back 
> pending queries from other cursors in the same connection. The 
> programmed rollback fails here with BusyError: cannot rollback 
> transaction - SQL statements in progress. 
> 
> I tried to find out experimentally but failed to reproduce any automatic 
> rollback. It seems another solution is in place. I may be not well 
> informed, is anything written about this already?
> 
> From the timeline it appears there is ongoing development on the 
> subject. Is there any chance that a rollback in SQL will no longer fail 
> with pending queries?

Hello again,

The mechanism of an automatic rollback, as described in
http://www.sqlite.org/c3ref/busy_handler.html, appears to have disappeared
somewhere between versions 3.5.9 and 3.6.5. The following steps show the
different behaviour.

- Create a table containing about 1 MB of data
(I hope anybody can imagine this for themselves)

- Let one process keep a shared lock

$ sqlite3 test.db "select * from t" | more
aa
...
(leave this command unfinished)

- Start a large transaction, first in the old version

$ sqlite-amalgamation-3.5.9/sqlite3 test.db
SQLite version 3.5.9
Enter ".help" for instructions
sqlite> begin;
sqlite> insert into t select * from t;
sqlite> insert into t select * from t;
SQL error: disk I/O error
sqlite> commit;
SQL error: cannot commit - no transaction is active

- Same for the current version

$ sqlite3 test.db 
SQLite version 3.6.19
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> begin;
sqlite> insert into t select * from t;
sqlite> insert into t select * from t;
sqlite> commit;
SQL error: database is locked

The current behaviour looks better: the application may still commit once the
database is unlocked. However, it also has a disadvantage, which appears from
the process status. At the current point this shows:

$ ps -lp 14440
  UID   PID  PPID CPU PRI NI    VSZ    RSS COMMAND
  501 14440  4281   0  31  0  29936   7088 sqlite3

If the insert step is repeated once again, this becomes:

$ ps -lp 14440
  UID   PID  PPID CPU PRI NI    VSZ    RSS COMMAND
  501 14440  4281   0  31  0  38176  14056 sqlite3

The memory usage increases by about 7000 pages, which is in line with the
amount of data added (4 MB). This must become a problem when dealing with
gigabytes.
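
The commit-retry behaviour shown in the transcript can be reproduced in-process. A sketch with stdlib sqlite3 (timeout=0, default rollback-journal mode; names and paths are illustrative):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'retry.db')
writer = sqlite3.connect(path, timeout=0, isolation_level=None)
reader = sqlite3.connect(path, timeout=0, isolation_level=None)
writer.execute("create table t (x)")

reader.execute("begin")
reader.execute("select count(*) from t").fetchone()  # reader holds a SHARED lock

writer.execute("begin")
writer.execute("insert into t values (1)")
try:
    writer.execute("commit")                         # needs EXCLUSIVE: fails
    first_commit_ok = True
except sqlite3.OperationalError:
    first_commit_ok = False

reader.execute("rollback")                           # release the shared lock
writer.execute("commit")                             # now the commit succeeds
```

The failed COMMIT leaves the transaction active, which is what allows the retry, but it also means the pending pages stay cached in memory, matching the growth visible in the ps output above.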

I could not find anything written about the change, except a quote from Dan
Kennedy on the mailing list:
http://www.mail-archive.com/sqlite-users@sqlite.org/msg42059.html. But then
the busy_handler description must be out of date.

This strayed a bit from the subject; I would still like the more powerful
rollback in SQL.

Thanks, Edzard


[sqlite] Rollback over pending queries

2009-10-19 Thread Edzard Pasma
Hello,

I wonder if an automatic rollback, as described in
//www.sqlite.org/c3ref/busy_handler.html, is more powerful than a rollback 
programmed in SQL. Particularly if it is able to rollback pending queries from 
other cursors in the same connection. The programmed rollback fails here with 
BusyError: cannot rollback transaction - SQL statements in progress. 

I tried to find out experimentally but failed to reproduce any automatic 
rollback. It seems another solution is in place. I may be not well informed, is 
anything written about this already?

From the timeline it appears there is ongoing development on the subject. Is 
there any chance that a rollback in SQL will no longer fail with pending 
queries?

Thanks for any info, Edzard
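As it turned out later, SQLite 3.7.11 changed this: a ROLLBACK no longer fails because of pending statements; the pending statements instead return an error on their next step. The sketch below (invented table, current Python sqlite3 module, which itself resets open statements before rolling back) shows a rollback succeeding while a cursor still has rows to fetch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(10)])
conn.commit()

conn.execute("INSERT INTO t VALUES (99)")   # opens a transaction
cur = conn.execute("SELECT x FROM t")
cur.fetchone()                              # the query is still pending

conn.rollback()                             # succeeds despite the pending query
count = conn.execute("SELECT count(*) FROM t").fetchone()[0]
print(count)                                # the INSERT was undone
```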



Re: [sqlite] readers and writer

2009-10-14 Thread Edzard Pasma

--- prii...@stanford.edu wrote:
> 
> Simon Slavin wrote:
>> On 14 Oct 2009, at 1:21am, priimak wrote:
>>
>>   
>>> I am heaving small problem with sqlite. I have a webapp which connects
>>> to the database using sqlite-jdbc and performs SELECTs to response to
>>> different GET requests, while this happens if I try to write to a
>>> database ( UPDATE or INSERT ) from command line, that (i.e. update
>>> process) would occasionally fail with error message "SQL error near  
>>> line
>>> 1: database is locked". Notice that I have only one writer, but many
>>> readers. Reading documentation (http://www.sqlite.org/ 
>>> lockingv3.html) I
>>> was under impression that process which intends to update database  
>>> will
>>> place it in the pending state allowing all currently running reads (
>>> SELECTs ) to proceed, while blocking new SELECTs, the lock database
>>> apply changes and then unlock it allowing all pending and new  
>>> SELECTs to
>>> proceed. Am I right about it and if so why do I "SQL error near line  
>>> 1:
>>> database is locked" when trying to write to a database?
>>> 
>>
>> I hope this will do until an expert comes along.  I think you got it  
>> right, you just don't know something.
>>
>> The SELECT activity requires a lock to the database.  For instance,  
>> consider a TABLE contact with columns name, address, phone .  An index  
>> is declared on just the name column.  You execute
>>
>> SELECT phone FROM contacts WHERE name = 'Jackie'
>>
>> This requires a two-stage process: first use the index to find the ids  
>> of the rows which have the right name.  Then look up those rows in the  
>> table and find out which phone numbers they have.  Obviously, this  
>> requires locking: you wouldn't want someone to make changes to the  
>> table between those two steps.  However, it requires locking only  
>> against writing: other reads going on at the same time are harmless,  
>> but a change between the two steps can invalidate the data.
>>
>> So if a SELECT is in progress, other SELECT commands can be allowed to  
>> proceed without problems. But no INSERT or UPDATE can be allowed until  
>> the SELECT is finished.  Hence you will sometimes get a lock on the  
>> write.
>>
>> How you deal with this, I don't know.  Random wait-and-try-again ?
> Yes, I understood that, but the impression I got is that SELECT will
> place shared lock on the database. While INSERT or UPDATE will first
> place PENDING lock indicating that it wants to write. While it is in a
> PENDING lock state, all operations that placed a SHARED lock ( such as
> SELECTs ) will be allowed to complete and new SHARED locks either denied
> or blocked ( this part of documentation is not clear as to which one of
> these two actions are taken  ). Then when all SHARED locks are
> removed due to completion of SELECTs, database moves from PENDING
> into EXCLUSIVE lock, which is cleared when update/write completed and then
> new/pending SHARED locks are allowed to proceed. This should mean
> that with many processes reading and only one writing there is no need to
> use sqlite3_busy_timeout() function, which is to be used when we have
> many processes trying to write to the database and/or reader if new
> SHARED locks are denied while database is in a PENDING and/or
> EXCLUSIVE lock state ( again, this point is not clear in documentation ).
> Do I understand it correctly?
> 
> --
> Dmitri Priimak

Hello Dmitri, I understood it the same way. However, recently I observed that a 
PENDING lock does not perform its useful function (preventing writer starvation) 
in case readers and writers are threads of a single process! Could that be the 
case? Best regards, Edzard Pasma. 


Re: [sqlite] double calculation in SELECT

2009-10-05 Thread Edzard Pasma

--- paiva...@gmail.com wrote:

From: Pavel Ivanov <paiva...@gmail.com>
To: edz...@volcanomail.com, General Discussion of SQLite Database 
<sqlite-users@sqlite.org>
Subject: Re: [sqlite] double calculation in SELECT
Date: Mon, 5 Oct 2009 08:39:53 -0400

> No, I did not assume unique pairs of id, power. Actually there is a small 
> syntax error in the original query, which is however tolerated by SQLite. The 
> original query essentially (with respect to this point) looks like:

Yes, you assumed that, maybe unconsciously. ;-)
Look, you've proposed to change from

SELECT sum(...) FROM table2
WHERE table2.table1_id = table1.id

to

SELECT table1.id, sum(...) FROM table1, table2
WHERE table2.table1_id = table1.id
GROUP BY table1.id

So, if table1 has 2 rows with the same id then corresponding table2
rows will appear in the sum() twice and result of aggregation will be
different from the original query...

Pavel

On Mon, Oct 5, 2009 at 8:33 AM, Edzard Pasma <edz...@volcanomail.com> wrote:
>
>
> --- paiva...@gmail.com wrote:
>
> From: Pavel Ivanov <paiva...@gmail.com>
> To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
> Subject: Re: [sqlite] double calculation in SELECT
> Date: Mon, 5 Oct 2009 07:13:28 -0400
>
> First of all, do you really store several rows with the same id in the
> table1? If no then you don't need 'group by' clause, if yes then I
> hope you understand that 'power' returned in your query is undefined,
> i.e. it can be returned from any row with this id - you can't control
> which one of them.
> And about the query: without some information about data stored in
> your tables and its structure we cannot give you a useful help. For
> example solution suggested by Edzard assumes that table1 has unique
> pairs id-power, otherwise it will return different results from your
> original query...
>
> Pavel
>
> --
>
> Hello Pavel,
>
> No, I did not assume unique pairs of id, power. Actually there is a small 
> syntax error in the original query, which is however tolerated by SQLite. The 
> original query essentially (with respect to this point) looks like:
>
> SELECT id, power, ..
> FROM table1
> GROUP BY id
>
> The column power is neither in the GROUP BY clause nor in an aggregate 
> function. I assume SQLite returns just a single row per id here, also if 
> there are more rows with the same id and different power.
>
> Other DBMS's do not accept this syntax and require to explicitly group by all 
> columns that occur outside of aggregate functions. SQLite is more tolerant. 
> Possibly that makes the unique sort faster.
>
> Edzard
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>

Sorry, you were right. 

But I'm happy you noted the point as it is a peculiar part of the SQL syntax. I 
would have liked to leave out the second column (power) from the group by, 
assuming (indeed:-) that id is unique. But then I would hesitate to select the 
plain column, even though SQLite allows that. Formally one is forced to use 
some aggregate function e.g:

SELECT t1.id, MAX (t1.power)
FROM t1, ..
GROUP BY t1.id

But this is again ugly, as there is only one row to take the MAX of. IMHO and 
AFAIK the SQL standard misses a proper/properly named aggregate function for 
this case, I mean an ANY or FIRST function.

Thanks, Edzard
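The two variants are easy to compare on a toy table (not the original poster's schema): SQLite accepts the bare column next to GROUP BY, while the aggregate form is what the standard requires.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (id, power)")
conn.executemany("INSERT INTO t1 VALUES (?, ?)", [(1, 10), (1, 20), (2, 5)])

# SQLite tolerates a bare column alongside GROUP BY; the value comes
# from an arbitrary row of each group, so this is only safe when the
# grouped key determines the column.
bare = conn.execute("SELECT id, power FROM t1 GROUP BY id").fetchall()

# The portable form wraps the column in an aggregate.
agg = conn.execute("SELECT id, MAX(power) FROM t1 GROUP BY id").fetchall()
print(bare, agg)
```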



Re: [sqlite] double calculation in SELECT

2009-10-05 Thread Edzard Pasma


--- paiva...@gmail.com wrote:

From: Pavel Ivanov 
To: General Discussion of SQLite Database 
Subject: Re: [sqlite] double calculation in SELECT
Date: Mon, 5 Oct 2009 07:13:28 -0400

First of all, do you really store several rows with the same id in the
table1? If no then you don't need 'group by' clause, if yes then I
hope you understand that 'power' returned in your query is undefined,
i.e. it can be returned from any row with this id - you can't control
which one of them.
And about the query: without some information about data stored in
your tables and its structure we cannot give you a useful help. For
example solution suggested by Edzard assumes that table1 has unique
pairs id-power, otherwise it will return different results from your
original query...

Pavel

--

Hello Pavel,

No, I did not assume unique pairs of id, power. Actually there is a small 
syntax error in the original query, which is however tolerated by SQLite. The 
original query essentially (with respect to this point) looks like:

SELECT id, power, ..
FROM table1
GROUP BY id

The column power is neither in the GROUP BY clause nor in an aggregate 
function. I assume SQLite returns just a single row per id here, also if there 
are more rows with the same id and different power.

Other DBMS's do not accept this syntax and require to explicitly group by all 
columns that occur outside of aggregate functions. SQLite is more tolerant. 
Possibly that makes the unique sort faster.

Edzard


Re: [sqlite] double calculation in SELECT

2009-10-04 Thread Edzard Pasma


--- proxi...@land.ru wrote:

From: Michael 
To: sqlite-users@sqlite.org
Subject: [sqlite] double calculation in SELECT
Date: Sun, 04 Oct 2009 15:31:47 +0700

Hello,

I have following SELECT statement in my program (Delphi+sqlite3):

_
select
id,power,
(select sum 
((strftime("%s",table2.endtime)-strftime("%s",table2.begintime))*table2.power) 
from table2 where table1.id=table1_id),
(
select min(point) from points where points.table1_id=table1.id and 
point >=
(select sum 
((strftime("%s",table2.endtime)-strftime("%s",table2.begintime))*table2.power) 
from table2 where table1.id=table2.table1_id)
),

(
(
select min(point) from points where points.table1_id=table1.id
and point >=
(select sum 
((strftime("%s",table2.endtime)-strftime("%s",table2.begintime))*table2.power) 
from table2 where table1.id=table2.table1_id )
)
-
(select sum 
((strftime("%s",table2.endtime)-strftime("%s",table2.begintime))*table2.power) 
from table2 where table1.id=table1_id)
) / power
from table1
where standing=1
group by table1.id
order by table1.id


This statement contains text that is repeated 4 times:

(select sum 
((strftime("%s",table2.endtime)-strftime("%s",table2.begintime))*table2.power) 
from table2 where table1.id=table2.table1_id)

How to reduce this statement and avoid double calculations?


Thanks,
Michael.
___


Hello, the SQL below uses an 'inline view' type of sub-query. Edzard.

SELECT
  v.id, 
  v.power, 
  v.sum1, 
  v.sum1 / v.power,
  ...
FROM 
  (
  SELECT 
table1.id,
table1.power,
SUM((strftime('%s',table2.endtime)-strftime('%s',table2.begintime))*table2.power) AS sum1
  FROM
table1,
table2
  WHERE
table2.table1_id = table1.id
  GROUP BY
table1.id,
table1.power
  ) v
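A runnable miniature of the same idea, with a simplified schema and invented data: the sum is computed once in the inline view and then reused in the outer select.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (id INTEGER, power REAL);
CREATE TABLE table2 (table1_id INTEGER, begintime TEXT, endtime TEXT, power REAL);
INSERT INTO table1 VALUES (1, 2.0);
-- one hour at power 3.0, then half an hour at power 4.0
INSERT INTO table2 VALUES (1, '2009-10-04 10:00:00', '2009-10-04 11:00:00', 3.0);
INSERT INTO table2 VALUES (1, '2009-10-04 11:00:00', '2009-10-04 11:30:00', 4.0);
""")

row = conn.execute("""
SELECT v.id, v.power, v.sum1, v.sum1 / v.power
FROM (
  SELECT table1.id, table1.power,
         SUM((strftime('%s', table2.endtime) -
              strftime('%s', table2.begintime)) * table2.power) AS sum1
  FROM table1, table2
  WHERE table2.table1_id = table1.id
  GROUP BY table1.id, table1.power
) v
""").fetchone()
print(row)   # 3600*3.0 + 1800*4.0 = 18000.0, divided by power 2.0 = 9000.0
```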


Re: [sqlite] Busy Handler in Shared Cache Mode?

2009-08-11 Thread Edzard Pasma
More detailed testing revealed that it is not a reader but the writer who 
observes a long wait time. The writer appears to get unlocked after all 
reader-threads have completed. This must be a case of writer starvation. I had 
not realized that shared cache mode is, by default, prone to this. The good 
news is that Unlock Notification should handle it 
(http://www.sqlite.org/unlock_notify.html). 

--- edz...@volcanomail.com wrote:

From: "Edzard Pasma" <edz...@volcanomail.com>
To: "General Discussion of SQLite Database" <sqlite-users@sqlite.org>
Cc: <sqlite-users@sqlite.org>
Subject: Re: [sqlite] Busy Handler in Shared Cache Mode?
Date: Mon, 10 Aug 2009 04:25:32 -0700

Hi Nikolaus,

Interesting to read your findings. I assume you finally disallowed 
read_uncommitted. Trying to explain why the wait times can get longer, I come 
to two reasons.

- the time slots where a read-lock is obtained become smaller. 
This can be significant if there are a lot of transactions with little time in 
between. With the busy handling by polling, a reader may miss slots. This will 
surely be improved by Unlock Notification.

- the time slots where a read-lock can not be obtained become longer.
This can cause problems if there are long-running transactions. The Unlock 
Notification feature is not going to help here.

I would still like to share an observation from my own test, which reproduces 
the first case. I start four reading threads and one writer. Each thread 
performs a fixed number of cycles. Apart from the wait times, the overall 
elapsed time is also measured. Indeed the maximum wait-time gets bigger if 
shared cache mode is enabled. Interestingly, this does not apply to the elapsed 
time, which is still reduced. Thus, an increase of the maximum wait-time does 
not necessarily mean worse performance. By the way, this was measured on a 
single-processor system.

Hope this is useful to know.

Best regards, Edzard.

On Fri, Aug 7, 2009 at 1:45 PM, Nikolaus Rath<nikol...@rath.org> wrote:
> Hello,
>
> I have a program that continuously runs several threads (about 4 to 10)
> which concurrently read and write to an SQLite database. From
> http://www.sqlite.org/sharedcache.html it seems to me that I should be
> using shared cache mode.
>
> Until now, I have set the busy timeout to 500 msec and never had to deal
> with SQLITE_BUSY errors. However, there seems to be no corresponding
> function for the SQLITE_LOCKED errors that are generated in shared cache
> mode. So I changed the code to manually sleep for a random amount (0 to 100
> msec) and then try to execute the statement again if it encounters
> SQLITE_LOCKED. But now the threads are often waiting for more than 1
> second before they finally get their query executed.
>
> I suspect this is because the random sleep is wasting a lot of time,
> while without shared cache (and with enabled busy timeout) a thread
> blocks and gets revived as soon as the lock on the database is
> released.
>
>
> How can I avoid this problem? Is there a way to set a busy timeout for
> SQLITE_LOCKED as well? Or a more clever method instead of sleeping for
> random amounts? Or should I just avoid using shared cache mode?


Re: [sqlite] Busy Handler in Shared Cache Mode?

2009-08-10 Thread Edzard Pasma
Hi Nikolaus,

Interesting to read your findings. I assume you finally disallowed 
read_uncommitted. Trying to explain why the wait times can get longer, I come 
to two reasons.

- the time slots where a read-lock is obtained become smaller. 
This can be significant if there are a lot of transactions with little time in 
between. With the busy handling by polling, a reader may miss slots. This will 
surely be improved by Unlock Notification.

- the time slots where a read-lock can not be obtained become longer.
This can cause problems if there are long-running transactions. The Unlock 
Notification feature is not going to help here.

I would still like to share an observation from my own test, which reproduces 
the first case. I start four reading threads and one writer. Each thread 
performs a fixed number of cycles. Apart from the wait times, the overall 
elapsed time is also measured. Indeed the maximum wait-time gets bigger if 
shared cache mode is enabled. Interestingly, this does not apply to the elapsed 
time, which is still reduced. Thus, an increase of the maximum wait-time does 
not necessarily mean worse performance. By the way, this was measured on a 
single-processor system.

Hope this is useful to know.

Best regards, Edzard.

On Fri, Aug 7, 2009 at 1:45 PM, Nikolaus Rath wrote:
> Hello,
>
> I have a program that continuously runs several threads (about 4 to 10)
> which concurrently read and write to an SQLite database. From
> http://www.sqlite.org/sharedcache.html it seems to me that I should be
> using shared cache mode.
>
> Until now, I have set the busy timeout to 500 msec and never had to deal
> with SQLITE_BUSY errors. However, there seems to be no corresponding
> function for the SQLITE_LOCKED errors that are generated in shared cache
> mode. So I changed the code to manually sleep for a random amount (0 to 100
> msec) and then try to execute the statement again if it encounters
> SQLITE_LOCKED. But now the threads are often waiting for more than 1
> second before they finally get their query executed.
>
> I suspect this is because the random sleep is wasting a lot of time,
> while without shared cache (and with enabled busy timeout) a thread
> blocks and gets revived as soon as the lock on the database is
> released.
>
>
> How can I avoid this problem? Is there a way to set a busy timeout for
> SQLITE_LOCKED as well? Or a more clever method instead of sleeping for
> random amounts? Or should I just avoid using shared cache mode?


Re: [sqlite] Reasons for SQLITE_BUSY

2009-07-31 Thread Edzard Pasma
--- nikol...@rath.org wrote:
> "Edzard Pasma" <edz...@volcanomail.com> writes:
>> --- nikol...@rath.org wrote:
>>> "Igor Tandetnik" <itandet...@mvps.org> writes:
>>>> Nikolaus Rath <nikol...@rath.org> wrote:
>>>>> I am accessing the same database from different threads. Each
>>>>> thread
>>>>> has its own connection. I have set the busy timeout for each
>>>>> connection to 5000 milliseconds.
>>>>>
>>>>> However, in some testcases I still get SQLITE_BUSY errors from
>>>>> sqlite3_step. Moreover, the whole testcases run in much less than
>>>>> 5
>>>>> seconds, so apparently sqlite does not even try to wait for the
>>>>> lock
>>>>> to disappear.
>>>>
>>>> You are getting a deadlock. The scenario goes like this: thread A
>>>> runs a
>>>> transaction that starts as a reader (with a SELECT statement) but
>>>> later
>>>> becomes a writer (by executing INSERT, UPDATE or DELETE). Thread B
>>>> also
>>>> runs a transaction like this, or a simple writer transaction. Then
>>>> the
>>>> following sequence of events occurs:
>>>>
>>>> 1. Thread A starts as reader and takes a shared lock
>>>> 2. Thread B starts as writer, takes a pending lock and waits for
>>>>readers
>>>> to clear.
>>>> 3. Thread A tries to become a writer and promote its lock to
>>>>reserved,
>>>> but can't because there's already a writer on the database.
>>>>
>>>> The two threads deadlock. No amount of waiting by either thread
>>>> would
>>>> get them out of the impasse: the only way out is for one of the
>>>> threads
>>>> to roll back its transaction and start from scratch. When SQLite
>>>> detects
>>>> this situation, it returns SQLITE_BUSY immediately, without calling
>>>> the
>>>> busy handler (because, again, waiting won't help any).
>>>>
>>>> To avoid the possibility of a deadlock, start your
>>>> reader-turning-writer
>>>> transactions with BEGIN IMMEDIATE (this essentially makes the 
>>>> transaction a writer right away).
>>>
>>> Ah, I see. I expected that a deadlock would actually result in both
>>> threads hanging forever, rather than SQLite detecting it and
>>> abandoning
>>> immediately. The later is of course even better once you know about
>>> it.
>>> Thanks for the explanations! I should be able to fix my problem
>>> now..
>>
>> Hi,
>>
>> Just in case it appears difficult to fix, I like to suggest to try
>> using shared cache mode. The shared cache locking model does not have
>> this particular deadlock situation. I'm assuming that the database is
>> accessed from within a single process only.

> Thanks for the idea. But after reading:
> //www.sqlite.org/sharedcache.html it seems to me that to avoid the
>  deadlock, I would not only need to enable shared cache mode but
>  read-uncommitted. Is that right?
>
> I'm hesitating a bit to do that, because I'm not sure what the
> "[read-uncommited] can lead to inconsistent query results" phrase on the
> above page may imply.

Hi again, 

I don't believe it is a deadlock that you run into when using shared cache 
mode. More likely a table gets locked for reading, immediately after the first 
write. The regular locking model is more tolerant here, because reading is 
still possible until a writer starts spilling changes to disk. The 
read_uncommitted pragma is, in my opinion, a brilliant solution. It is, 
however, not the only option. As deadlock likely no longer occurs, you can now 
simply wait for any lock. Only, this must be handled in the application, as the 
timeout setting is not observed in shared cache mode. See ticket 
http://www.sqlite.org/cvstrac/tktview?tn=2010. 

The inconsistency when reading uncommitted data applies to the number of rows, 
not to the consistency of the data within a row. That is my perception, which 
may be too optimistic.

Best regards, Edzard
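Both effects are visible from Python within one process, using a shared-cache in-memory database (the URI name below is invented): without read_uncommitted the reader hits the table-level lock; with it, the reader skips read locks and sees the uncommitted row.

```python
import sqlite3

# One shared in-memory database for both connections (same process).
uri = "file:sharedmem?mode=memory&cache=shared"
writer = sqlite3.connect(uri, uri=True, isolation_level=None)
reader = sqlite3.connect(uri, uri=True, isolation_level=None, timeout=0)

writer.execute("CREATE TABLE t (x)")
writer.execute("BEGIN")
writer.execute("INSERT INTO t VALUES (1)")     # table t is now write-locked

# Without read_uncommitted, the reader hits the table-level lock
# ("database table is locked").
try:
    reader.execute("SELECT count(*) FROM t").fetchone()
    locked = False
except sqlite3.OperationalError:
    locked = True

# With read_uncommitted, the reader takes no read locks and sees the
# uncommitted row.
reader.execute("PRAGMA read_uncommitted = 1")
dirty = reader.execute("SELECT count(*) FROM t").fetchone()[0]
writer.execute("ROLLBACK")
print(locked, dirty)
```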


Re: [sqlite] Reasons for SQLITE_BUSY

2009-07-30 Thread Edzard Pasma
--- nikol...@rath.org wrote:
> "Igor Tandetnik"  writes:
>> Nikolaus Rath  wrote:
>>> I am accessing the same database from different threads. Each thread
>>> has its own connection. I have set the busy timeout for each
>>> connection to 5000 milliseconds.
>>>
>>> However, in some testcases I still get SQLITE_BUSY errors from
>>> sqlite3_step. Moreover, the whole testcases run in much less than 5
>>> seconds, so apparently sqlite does not even try to wait for the lock
>>> to disappear.
>>
>> You are getting a deadlock. The scenario goes like this: thread A runs a 
>> transaction that starts as a reader (with a SELECT statement) but later 
>> becomes a writer (by executing INSERT, UPDATE or DELETE). Thread B also 
>> runs a transaction like this, or a simple writer transaction. Then the 
>> following sequence of events occurs:
>>
>> 1. Thread A starts as reader and takes a shared lock
>> 2. Thread B starts as writer, takes a pending lock and waits for readers 
>> to clear.
>> 3. Thread A tries to become a writer and promote its lock to reserved, 
>> but can't because there's already a writer on the database.
>>
>> The two threads deadlock. No amount of waiting by either thread would 
>> get them out of the impasse: the only way out is for one of the threads 
>> to roll back its transaction and start from scratch. When SQLite detects 
>> this situation, it returns SQLITE_BUSY immediately, without calling the 
>> busy handler (because, again, waiting won't help any).
>>
>> To avoid the possibility of a deadlock, start your reader-turning-writer 
>> transactions with BEGIN IMMEDIATE (this essentially makes the 
>> transaction a writer right away).
>
> Ah, I see. I expected that a deadlock would actually result in both
> threads hanging forever, rather than SQLite detecting it and abandoning
> immediately. The later is of course even better once you know about it.
> Thanks for the explanations! I should be able to fix my problem now..

Hi,

Just in case it appears difficult to fix, I like to suggest to try using shared 
cache mode. The shared cache locking model does not have this particular 
deadlock situation. I'm assuming that the database is accessed from within a 
single process only. 

Regards, Edzard


Re: [sqlite] Multiple Writers and Database is Locked Problems

2009-07-20 Thread Edzard Pasma
The "database is locked" error may occur on a select as well, in case a writer 
has spilled changes to disk. This is mentioned in 
http://sqlite.org/lockingv3.html. It would be SQLITE_BUSY and not 
SQLITE_LOCKED, which is the subject of the document you refer to. Is that 
possible? Then you'd need busy handling on the queries as well (unless you 
choose to move everything within an exclusive transaction).

In my understanding you can get away without any programmatic busy handling 
if you just set a timeout on the connection (sqlite3_busy_timeout). The only 
concern is deadlocks, which can occur if two processes are holding a shared 
lock (due to an unfinished read) and both want to start a transaction. That is 
why you had better finish all read-cursors (sqlite3_reset) before starting 
updates. 

Edzard

> Hi.
> 
> Thanks for the suggestions. At the moment, im very new to using SQLite, 
> so I
> haven't done anything advanced with its usage in my program. I pretty 
> much
> read through the 5 minute intro using C/C++, looked at a few of the API
> docs, and then proceeded to use it from that information. So I've just
> modified the 5-minute intro to my needs, and used some of the other 
> API's,
> but nothing too advanced.
> 
> However, when searching for the problem I was receiving, I saw a
> trac/commit/wiki and page regarding the places that would cause locking
> issues. As far as I understand, I should be able to do a select on a 
> table
> that is currently being updated without causing any locking issues 
> right? As
> it currently stands, I did modify my program to deal with the 
> SQLITE_BUSY
> returns, and I see now that im getting database is locked problems when
> doing a select now. Without debugging the issue too deeply, im guessing 
> that
> the "Database is locked" problem is now being caused by a select while
> another program is updating the table. However, this shouldn't be an 
> issue
> anymore according to the page :
> http://www.sqlite.org/cvstrac/wiki?p=DatabaseIsLocked
> 
> If im wrong, or misunderstood or misread something, then please let me 
> know.
> Also if you have further suggestions as to getting around this problem, 
> I am
> happy to try anything.
> 
> Thanks for any further help.
> Regards
> /Cole
> 
> 
> 
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Edzard Pasma
> Sent: 19 July 2009 11:50 AM
> To: kennethinbox-sql...@yahoo.com; General Discussion of SQLite Database
> Cc: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Multiple Writers and Database is Locked Problems
> 
> I'd also make sure that all read-cursors are finished before starting a
> transaction. Or, take the exclusive lock already before the queries
> (possibly that is what was meant)
> 
> --- kennethinbox-sql...@yahoo.com wrote:
> 
>>From: Ken <kennethinbox-sql...@yahoo.com>
>>To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
>>Subject: Re: [sqlite] Multiple Writers and Database is Locked Problems
>>Date: Sat, 18 Jul 2009 12:29:33 -0700 (PDT)
>>
>>
>>I like to start each of my transactions with a "Begin Immediate" 
>>that way the database file is locked at that point. And its
>>relatively simple to test for the DB locked at that stage and 
>>handle waiting or returning an error.
>>
>>HTH
>>
>>--- On Fri, 7/17/09, Cole <c...@opteqint.net> wrote:
>>
>>> From: Cole <c...@opteqint.net>
>>> Subject: [sqlite] Multiple Writers and Database is Locked Problems
>>> To: sqlite-users@sqlite.org
>>> Date: Friday, July 17, 2009, 6:38 AM
>>> Hi.
>>> 
>>> Im hoping someone might be able to help me with the
>>> problems im having, or
>>> suggest a better method of doing what im trying to achieve.
>>> Currently im
>>> using sqlite3 3.6.10. I don't mind updating or downgrading
>>> it if needed.
>>> 
>>> I have a program that I run multiple instances of. When
>>> they start, they
>>> parse the config file, and open a connection to each
>>> database that is
>>> listed. Each database only has a single table in it. They
>>> then receive
>>> requests, do a select on the database, parse the returned
>>> data, modify the
>>> data, then update the data to the database. However, I am
>>> running into the
>>> "database is locked" error when trying to update the
>>> database. I fully
>>> understand that 2 or more instances might be trying to
>>> update the same
>>> table, but is this the onl

Re: [sqlite] Multiple Writers and Database is Locked Problems

2009-07-19 Thread Edzard Pasma
I'd also make sure that all read-cursors are finished before starting a 
transaction. Or, take the exclusive lock already before the queries (possibly 
that is what was meant)

--- kennethinbox-sql...@yahoo.com wrote:

>From: Ken 
>To: General Discussion of SQLite Database 
>Subject: Re: [sqlite] Multiple Writers and Database is Locked Problems
>Date: Sat, 18 Jul 2009 12:29:33 -0700 (PDT)
>
>
>I like to start each of my transactions with a "Begin Immediate" 
>that way the database file is locked at that point. And its
>relatively simple to test for the DB locked at that stage and 
>handle waiting or returning an error.
>
>HTH
>
>--- On Fri, 7/17/09, Cole  wrote:
>
>> From: Cole 
>> Subject: [sqlite] Multiple Writers and Database is Locked Problems
>> To: sqlite-users@sqlite.org
>> Date: Friday, July 17, 2009, 6:38 AM
>> Hi.
>> 
>> Im hoping someone might be able to help me with the
>> problems im having, or
>> suggest a better method of doing what im trying to achieve.
>> Currently im
>> using sqlite3 3.6.10. I don't mind updating or downgrading
>> it if needed.
>> 
>> I have a program that I run multiple instances of. When
>> they start, they
>> parse the config file, and open a connection to each
>> database that is
>> listed. Each database only has a single table in it. They
>> then receive
>> requests, do a select on the database, parse the returned
>> data, modify the
>> data, then update the data to the database. However, I am
>> running into the
>> "database is locked" error when trying to update the
>> database. I fully
>> understand that 2 or more instances might be trying to
>> update the same
>> table, but is this the only reason this might happen now?
>> Or are there also
>> other scenarios where this might happen?
>> 
>> Ive searched the mailing list and I see that you mention
>> using random seed
>> and time to handle the SQLITE_BUSY return value, and then
>> trying to perform
>> the update again. I was perhaps wondering if there are any
>> other suggestions
>> for dealing with this scenario where there might be
>> multiple writers to the
>> same database at the same time?
>> 
>> Regards
>> /Cole


[sqlite] Group_concat includes null if first in the group

2009-06-19 Thread Edzard Pasma
Hello, I found an issue in group_concat that appears mainly cosmetic, but may 
be good to be aware of for those making more creative use of this function. 
See the ticket http://www.sqlite.org/cvstrac/tktview?tn=3923. Edzard.
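The issue was a leading separator when the first value in a group was NULL. In current SQLite versions, group_concat, like other aggregates, simply skips NULL inputs, which this sketch checks:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (grp, elem)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("g1", None), ("g1", "aap"), ("g1", "noot")])

# The NULL in the first row contributes neither a value nor a separator.
result = conn.execute("SELECT group_concat(elem) FROM t").fetchone()[0]
print(result)   # aap,noot
```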


Re: [sqlite] how can we solve IF EXIST in SQLite

2009-06-02 Thread Edzard Pasma
If prod_batch_code is not a unique key (which is surprising as you may be 
updating more than one row), we can still write a pseudo INSERT OR REPLACE in 
the form of both an update and an insert statement. The update can go 
unchanged. The insert should not use values () but a query that only yields a 
row if the update failed:

INSERT INTO stock_table (..)
SELECT new.purchase_id+2, new.prod_batch_code, new.purchase_qty, 
new.purchase_date
WHERE NOT EXISTS (
SELECT NULL
FROM stock_table
WHERE prod_batch_code = new.prod_batch_code);
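The update-plus-conditional-insert pattern above can be exercised end to end. A minimal sketch using Python's sqlite3, with a simplified schema (column names assumed for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock (prod_batch_code INTEGER, qty INTEGER)")
conn.execute("INSERT INTO stock VALUES (1000, 100)")

def upsert(code, qty):
    # First try the UPDATE; it is a no-op when the row does not exist.
    conn.execute("UPDATE stock SET qty = qty + ? WHERE prod_batch_code = ?",
                 (qty, code))
    # The INSERT's SELECT only yields a row when no matching row exists.
    conn.execute(
        "INSERT INTO stock (prod_batch_code, qty) "
        "SELECT ?, ? WHERE NOT EXISTS "
        "(SELECT NULL FROM stock WHERE prod_batch_code = ?)",
        (code, qty, code))

upsert(1000, 15)   # existing batch: quantity becomes 115
upsert(1003, 200)  # new batch: a row is inserted
print(conn.execute(
    "SELECT qty FROM stock WHERE prod_batch_code = 1000").fetchone()[0])  # 115
```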

--- paiva...@gmail.com wrote:

From: Pavel Ivanov <paiva...@gmail.com>
To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
Subject: Re: [sqlite] how can we solve IF EXIST in SQLite
Date: Tue, 2 Jun 2009 09:01:08 -0400

If you have unique index on stock_tab.prod_batch_code then you can
re-write your trigger as this:

INSERT OR REPLACE INTO stock_tab
  (stock_id, prod_batch_code, stock_qty, stock_date)
SELECT new.purchase_id+2, new.prod_batch_code,
  new.purchase_qty + ifnull(b.stock_qty, 0), new.purchase_date
   FROM (SELECT new.prod_batch_code) a
 LEFT JOIN stock_tab b ON b.prod_batch_code = a.prod_batch_code


Pavel
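Pavel's INSERT OR REPLACE variant can be tried out like this. A sketch via Python's sqlite3, with a simplified schema; it assumes the unique index on prod_batch_code that Pavel's reply requires, and the literal values stand in for the trigger's `new.*` columns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE stock_tab (
    stock_id INTEGER, prod_batch_code INTEGER UNIQUE,
    stock_qty INTEGER, stock_date TEXT)""")
conn.execute("INSERT INTO stock_tab VALUES (1, 1000, 100, '2009-05-26')")

# Simulate the trigger body for a purchase of batch 1000, qty 15.
conn.execute("""
    INSERT OR REPLACE INTO stock_tab
      (stock_id, prod_batch_code, stock_qty, stock_date)
    SELECT 2, a.prod_batch_code,
           15 + ifnull(b.stock_qty, 0), '2009-05-30'
      FROM (SELECT 1000 AS prod_batch_code) a
      LEFT JOIN stock_tab b ON b.prod_batch_code = a.prod_batch_code
""")
print(conn.execute(
    "SELECT stock_id, stock_qty FROM stock_tab "
    "WHERE prod_batch_code = 1000").fetchone())  # (2, 115)
```

The unique constraint makes OR REPLACE replace the old row rather than add a second one for the same batch.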

On Tue, Jun 2, 2009 at 8:20 AM, robinsmathew <robinsmat...@hotmail.com> wrote:
>
> Guys, I'll clarify the problem.
> this is the purchase table here purchase id is PK
>
> purchase_id  prod_batch_code  vendor_type_code  purchase_qty  purchase_date
> -----------  ---------------  ----------------  ------------  -------------------
> 1            1000             1                 100           2009-05-26 18:19:27
> 2            1001             1                 100           2009-05-26 18:19:31
> 3            1002             1                 100           2009-05-26 18:19:35
> 4            1003             1                 100           2009-05-26 18:19:49
>
> this is the stock table here stock_id is PK and prod_batch_code is FK
>
> stock_id    prod_batch_code  stock_qty   stock_date
> --  ---  --  ---
> 20001       1001             105         2009-05-26 18:19:27
> 20002       1002             100ps       2009-05-26 18:19:31
> 20003       1003             100ps       2009-05-26 18:19:35
> 20004       1003             100ps       2009-05-26 18:19:43
> 20005       1002             100ps       2009-05-26 18:19:44
> 20006       1001             100ps       2009-05-26 18:19:49
> 20007       1000             85          2009-05-26 18:19:50
> 20008       1000             85          2009-05-26 18:19:51
>
> i wrote a trigger
> CREATE TRIGGER insert_stock_from_product
>        AFTER INSERT ON purchase_tab
>        BEGIN
>        INSERT INTO stock_tab(stock_id, prod_batch_code, stock_qty, stock_date)
>        values (new.purchase_id+2, new.prod_batch_code, new.purchase_qty,
> new.purchase_date );
>        END;
>
> Instead of inserting the same products repeatedly into the stock table, I just
> want the quantity as well as the dates to be updated. And when I insert a
> new product_batch_code into the purchase table, it should be inserted into the
> stock table also...
>
> Edzard Pasma wrote:
>>
>> Sorry, this was written down without testing. I see now that
>> prod_batch_code must be the primary key, instead of stock_id, for the
>> REPLACE to work as expected. Then some other expression must be used to
>> fill stock_id, e.g. IFNULL (s.stock_id, 29). I also see that this
>> message crosses Kees Nuyt's idea which may be more comfortable if you like
>> to keep the SQL simple.
>> Edzard
>>
>> --- edz...@volcanomail.com wrote:
>>
>> From: "Edzard Pasma" <edz...@volcanomail.com>
>> To: "General Discussion of SQLite Database" <sqlite-users@sqlite.org>
>> Cc: <sqlite-users@sqlite.org>
>> Subject: Re: [sqlite] how can we solve IF EXIST in SQLite
>> Date: Tue, 2 Jun 2009 04:19:33 -0700
>>
>> Hello, you are cleverer than you think. Your initial idea to use INSERT OR
>> REPLACE might look like:
>>
>> INSERT OR REPLACE INTO stock_tab(stock_id, prod_batch_code, stock_qty,
>> stock_date)
>> SELECT
>>     s.stock_id,
>>     p.prod_batch_code,
>>     IFNULL (s.stock_qty, 0) + p.purchase_qty,
>>     DATETIME('NOW')
>> FROM purchase_tab p
>> LEFT OUTER JOIN stock_tab s
>> ON s.prod_batch_code = p.prod_batch_code
>> WHERE p.prod_batch_code=1000
>> /
>> (assuming stock_id PRIMARY KEY)
>>
>> Best regards, Edzard
>>
>> --- engelsch...@codeswift.com wrote:
>>
>> From: Martin Engelschalk <engelsch...@codeswift.com>
>> To: Gen

Re: [sqlite] how can we solve IF EXIST in SQLite

2009-06-02 Thread Edzard Pasma
Sorry, this was written down without testing. I see now that prod_batch_code 
must be the primary key, instead of stock_id, for the REPLACE to work as 
expected. Then some other expression must be used to fill stock_id, e.g. 
IFNULL (s.stock_id, 29). I also see that this message crosses Kees Nuyt's 
idea which may be more comfortable if you like to keep the SQL simple.
Edzard

--- edz...@volcanomail.com wrote:

From: "Edzard Pasma" <edz...@volcanomail.com>
To: "General Discussion of SQLite Database" <sqlite-users@sqlite.org>
Cc: <sqlite-users@sqlite.org>
Subject: Re: [sqlite] how can we solve IF EXIST in SQLite
Date: Tue, 2 Jun 2009 04:19:33 -0700

Hello, you are cleverer than you think. Your initial idea to use INSERT OR 
REPLACE might look like:

INSERT OR REPLACE INTO stock_tab(stock_id, prod_batch_code, stock_qty, 
stock_date)
SELECT 
s.stock_id, 
p.prod_batch_code, 
IFNULL (s.stock_qty, 0) + p.purchase_qty,
DATETIME('NOW')
FROM purchase_tab p
LEFT OUTER JOIN stock_tab s
ON s.prod_batch_code = p.prod_batch_code
WHERE p.prod_batch_code=1000
/
(assuming stock_id PRIMARY KEY)

Best regards, Edzard

--- engelsch...@codeswift.com wrote:

From: Martin Engelschalk <engelsch...@codeswift.com>
To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
Subject: Re: [sqlite] how can we solve IF EXIST in SQLite
Date: Tue, 02 Jun 2009 12:46:58 +0200

Hi,

as far as I know, you cannot do what you want to do in pure SQL. 
However, perhaps someone cleverer can contradict me.

You could first execute the update statement, check if there was a row 
which was updated using sqlite3_changes() (see 
http://www.sqlite.org/capi3ref.html#sqlite3_changes), and then issue the 
insert if there was none.

Martin
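Martin's update-then-check approach maps onto Python's sqlite3 via the connection's change counter (Connection.total_changes wraps sqlite3_total_changes; a sketch with assumed table and column names):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock (code INTEGER UNIQUE, qty INTEGER)")

def insert_or_update(code, qty):
    before = conn.total_changes
    conn.execute("UPDATE stock SET qty = qty + ? WHERE code = ?", (qty, code))
    if conn.total_changes == before:  # the UPDATE touched no row
        conn.execute("INSERT INTO stock VALUES (?, ?)", (code, qty))

insert_or_update(1000, 100)  # no row yet: inserts
insert_or_update(1000, 15)   # row exists: updates, qty becomes 115
print(conn.execute("SELECT qty FROM stock WHERE code = 1000").fetchone()[0])  # 115
```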

robinsmathew wrote:
> Thanks for the reply. Leaving aside what happens internally, what I just
> want to do is insert a new row into a table.
> The table will be like this:
>
> stock_id PK  product_id FK  quantity  stock_date
> 1            1000           10        28-05-2009
> 10001        1001           5         27-05-2009
>
> When I insert a new row with values NULL, 1000, 15, 30-05-2009,
> I don't want it as a new record; I just want to update the first row, because
> it also has the same product id. I just want to set the quantity = 10+15 and
> the date to the new date, 30-05-2009.
> And if I insert a row with a different product_id, it should be inserted
> as is.
>
> Martin Engelschalk wrote:
>   
>> Hi,
>>
>> what language is this? it certainly is not SQL or a "query".
>> I suspect that you can not use "insert or replace" (see 
>> http://www.sqlite.org/lang_insert.html), because you look first for a 
>> record with prod_batch_code=1000, and if you do not find it you insert 
>> one with prod_batch_code = 1003.
>> So, it seems to me that you have to implement the logic in your
>> application.
>>
>> Martin
>>
>> robinsmathew wrote:
>> 
>>> Hi, I am new to SQLite. Can anybody please tell me how this query can be
>>> solved in SQLite?
>>>
>>> IF EXISTS (SELECT prod_batch_code FROM stock_tab WHERE
>>> prod_batch_code=1000) 
>>> UPDATE stock_tab 
>>> SET stock_qty=stock_qty+(SELECT purchase_qty FROM purchase_tab
>>> WHERE
>>> prod_batch_code=1000 ) 
>>> WHERE prod_batch_code=1000
>>> ELSE 
>>> INSERT INTO stock_tab(stock_id, prod_batch_code, stock_qty,
>>> stock_date) values (20009, 1003, 200,  
>>> DATETIME('NOW') );
>>>   
>>>   
>>
>>
>> 
>
>   






Re: [sqlite] how can we solve IF EXIST in SQLite

2009-06-02 Thread Edzard Pasma
Hello, you are cleverer than you think. Your initial idea to use INSERT OR 
REPLACE might look like:

INSERT OR REPLACE INTO stock_tab(stock_id, prod_batch_code, stock_qty, 
stock_date)
SELECT 
s.stock_id, 
p.prod_batch_code, 
IFNULL (s.stock_qty, 0) + p.purchase_qty,
DATETIME('NOW')
FROM purchase_tab p
LEFT OUTER JOIN stock_tab s
ON s.prod_batch_code = p.prod_batch_code
WHERE p.prod_batch_code=1000
/
(assuming stock_id PRIMARY KEY)

Best regards, Edzard

--- engelsch...@codeswift.com wrote:

From: Martin Engelschalk 
To: General Discussion of SQLite Database 
Subject: Re: [sqlite] how can we solve IF EXIST in SQLite
Date: Tue, 02 Jun 2009 12:46:58 +0200

Hi,

as far as I know, you cannot do what you want to do in pure SQL. 
However, perhaps someone cleverer can contradict me.

You could first execute the update statement, check if there was a row 
which was updated using sqlite3_changes() (see 
http://www.sqlite.org/capi3ref.html#sqlite3_changes), and then issue the 
insert if there was none.

Martin

robinsmathew wrote:
> Thanks for the reply. Leaving aside what happens internally, what I just
> want to do is insert a new row into a table.
> The table will be like this:
>
> stock_id PK  product_id FK  quantity  stock_date
> 1            1000           10        28-05-2009
> 10001        1001           5         27-05-2009
>
> When I insert a new row with values NULL, 1000, 15, 30-05-2009,
> I don't want it as a new record; I just want to update the first row, because
> it also has the same product id. I just want to set the quantity = 10+15 and
> the date to the new date, 30-05-2009.
> And if I insert a row with a different product_id, it should be inserted
> as is.
>
> Martin Engelschalk wrote:
>   
>> Hi,
>>
>> what language is this? it certainly is not SQL or a "query".
>> I suspect that you can not use "insert or replace" (see 
>> http://www.sqlite.org/lang_insert.html), because you look first for a 
>> record with prod_batch_code=1000, and if you do not find it you insert 
>> one with prod_batch_code = 1003.
>> So, it seems to me that you have to implement the logic in your
>> application.
>>
>> Martin
>>
>> robinsmathew wrote:
>> 
>>> Hi, I am new to SQLite. Can anybody please tell me how this query can be
>>> solved in SQLite?
>>>
>>> IF EXISTS (SELECT prod_batch_code FROM stock_tab WHERE
>>> prod_batch_code=1000) 
>>> UPDATE stock_tab 
>>> SET stock_qty=stock_qty+(SELECT purchase_qty FROM purchase_tab
>>> WHERE
>>> prod_batch_code=1000 ) 
>>> WHERE prod_batch_code=1000
>>> ELSE 
>>> INSERT INTO stock_tab(stock_id, prod_batch_code, stock_qty,
>>> stock_date) values (20009, 1003, 200,  
>>> DATETIME('NOW') );
>>>   
>>>   
>>
>>
>> 
>
>   




[sqlite] Shared Cache unlock notification to multiple threads

2009-03-23 Thread Edzard Pasma
Hello,

The new sqlite3_unlock_notify API, described in 
http://www.sqlite.org/draft/c3ref/unlock_notify.html and expected in SQLite 
3.6.12, may have a restriction for use when multiple threads share the same 
connection. The document states:

""" There may be at most one unlock-notify callback registered by a blocked 
connection. If sqlite3_unlock_notify() is called when the blocked connection 
already has a registered unlock-notify callback, then the new callback replaces 
the old. """

I expect the following goes wrong then:

Connection A: UPDATE t1 SET ..

Connection B, thread 1: SELECT * FROM t1  --> database table locked

Connection B, thread 2: SELECT * FROM t1  --> database table locked

If both threads register to be notified, only one will get called when the lock 
is cleared. The other is forgotten?

I would naively propose to keep every unlock-notify callback registered, even if 
for the same connection. But I would not wish to complicate things. It is a 
somewhat crazy case and may as well be dealt with in the application that 
wishes to support it.

Thanks for this most interesting development,

Edzard Pasma


Re: [sqlite] having the Top N for each group

2009-02-15 Thread Edzard Pasma
Hello,

Your current solution is theoretically not optimal, as it evaluates a sub-query 
for each row in table T, whereas a construction with LIMIT does this only once 
for each group. If you wish I may look at the 'infinite' query, just mail it. 
Otherwise we at least have proved SQLite's incredible speed in doing UPDATE :)

Edzard Pasma  

--- sylvain.point...@gmail.com wrote:

From: Sylvain Pointeau <sylvain.point...@gmail.com>
To: edz...@volcanomail.com, General Discussion of SQLite Database 
<sqlite-users@sqlite.org>
Subject: Re: [sqlite] having the Top N for each group
Date: Sun, 15 Feb 2009 21:44:58 +0100

Hello,
On my large dataset, it took an extremely long time.
I finished with :

update T
set ranknum = (select count(*) from T a where ... a.value >= T.value  )

and it works fast enough, in a few minutes.

if you have better solution, I would be glad to change.

Cheers,
Sylvain

On Sun, Feb 15, 2009 at 10:06 AM, Edzard Pasma <edz...@volcanomail.com> wrote:

> Hello again,
>
> The following solution is more elegant than my earlier group_concat idea,
> and is just as fast. I had not expected that as it seems what you started
> with.
>
> select period.period, sales.product
> from period
> join sales on sales.rowid in (
>select rowid
>from sales
>where sales.period = period.period
>    order by sales.qty desc
>limit 3);
>
> -- Edzard Pasma
>
>
> --- sylvain.point...@gmail.com wrote:
>
> From: Sylvain Pointeau <sylvain.point...@gmail.com>
> To: sqlite-users@sqlite.org
> Subject: [sqlite] having the Top N for each group
> Date: Sat, 14 Feb 2009 09:21:15 +0100
>
> Hello all,
> I am wondering if we have a method faster than the INNER JOIN, which
> can be very slow with a large number of rows, which is my case.
> I was thinking of a UDF that increments a number if the concatenation of the
> key columns (or group columns) is the same, meaning:
> select col1, col2, udf_topN(col1||col2) from TTT order by value group by
> col1,col2
>
> will result into
>
> 1,1,1
> 1,1,2
> 1,1,3
> 2,1,1
> 2,1,2
> 2,1,3
> 4,3,1
> 4,3,2
> etc
>
>
> However, I don't really see how to keep, initialize, and destroy a variable
> in a UDF for the duration of one query execution.
>
> do you have some idea?
> is a TopN function planned for the future version of sqlite?
>
> Many thanks,
> Sylvain
>
>
>




Re: [sqlite] having the Top N for each group

2009-02-15 Thread Edzard Pasma
Hello again,

The following solution is more elegant than my earlier group_concat idea, and 
is just as fast. I had not expected that as it seems what you started with.

select period.period, sales.product
from period 
join sales on sales.rowid in (
select rowid
from sales
where sales.period = period.period
order by sales.qty desc
limit 3);

-- Edzard Pasma
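The correlated-LIMIT query can be exercised on a toy dataset; a sketch via Python's sqlite3, where the period/sales schema and the sample values are assumed for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE period (period TEXT);
    CREATE TABLE sales (period TEXT, product TEXT, qty INTEGER);
    INSERT INTO period VALUES ('2009-01'), ('2009-02');
    INSERT INTO sales VALUES
        ('2009-01', 'a', 5), ('2009-01', 'b', 9), ('2009-01', 'c', 7),
        ('2009-01', 'd', 1), ('2009-02', 'e', 3), ('2009-02', 'f', 8);
""")
# For each period, keep only the rowids of its three best-selling rows.
rows = conn.execute("""
    SELECT period.period, sales.product
      FROM period
      JOIN sales ON sales.rowid IN (
           SELECT rowid FROM sales
            WHERE sales.period = period.period
            ORDER BY sales.qty DESC
            LIMIT 3)
""").fetchall()
print(sorted(rows))
```

Product 'd' (the fourth-best seller of 2009-01) is excluded; 2009-02 has only two rows, so both survive the LIMIT 3.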


--- sylvain.point...@gmail.com wrote:

From: Sylvain Pointeau <sylvain.point...@gmail.com>
To: sqlite-users@sqlite.org
Subject: [sqlite] having the Top N for each group
Date: Sat, 14 Feb 2009 09:21:15 +0100

Hello all,
I am wondering if we have a method faster than the INNER JOIN, which
can be very slow with a large number of rows, which is my case.
I was thinking of a UDF that increments a number if the concatenation of the
key columns (or group columns) is the same, meaning:
select col1, col2, udf_topN(col1||col2) from TTT order by value group by
col1,col2

will result into

1,1,1
1,1,2
1,1,3
2,1,1
2,1,2
2,1,3
4,3,1
4,3,2
etc


However, I don't really see how to keep, initialize, and destroy a variable
in a UDF for the duration of one query execution.

do you have some idea?
is a TopN function planned for the future version of sqlite?

Many thanks,
Sylvain




Re: [sqlite] having the Top N for each group

2009-02-14 Thread Edzard Pasma
Hello, 

Maybe this is an idea:
GROUP_CONCAT is a built-in aggregate function that efficiently returns a list 
(as text) of the items in each group. If you add ORDER BY (before the GROUP BY) it 
also arranges the ordering. But it does not let you restrict the number of 
elements in each group to only the top N. I thought maybe SUBSTR is a further 
solution. The use of LIMIT would be more elegant but I don't see how.

> is a TopN function planned for a future version of sqlite?

Edzard Pasma

--- sylvain.point...@gmail.com wrote:

From: Sylvain Pointeau <sylvain.point...@gmail.com>
To: sqlite-users@sqlite.org
Subject: [sqlite] having the Top N for each group
Date: Sat, 14 Feb 2009 09:21:15 +0100

Hello all,
I am wondering if we have a method faster than the INNER JOIN, which
can be very slow with a large number of rows, which is my case.
I was thinking of a UDF that increments a number if the concatenation of the
key columns (or group columns) is the same, meaning:
select col1, col2, udf_topN(col1||col2) from TTT order by value group by
col1,col2

will result into

1,1,1
1,1,2
1,1,3
2,1,1
2,1,2
2,1,3
4,3,1
4,3,2
etc


However, I don't really see how to keep, initialize, and destroy a variable
in a UDF for the duration of one query execution.

do you have some idea?
is a TopN function planned for the future version of sqlite?

Many thanks,
Sylvain




Re: [sqlite] selecting the top 3 in a group

2009-01-08 Thread Edzard Pasma
--- robert.ci...@gmail.com wrote:

> I am still curious to know if there is a purely SQL way to do the same.

This can be achieved using group_concat:

select div,
rtrim (substr (s,  1, 10)) nr1,
rtrim (substr (s, 11, 10)) nr2,
rtrim (substr (s, 21, 10)) nr3
from (
select div, group_concat (substr (team || '          ', 1, 10), '') AS s
from (
select div, team
from teams
order by div, wins+0 desc)
group by div);

Don't believe this is ANSI SQL though.
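With the substr offsets stepped by 10 for the second and third slot (nr2 and nr3), the technique runs as follows. A sketch via Python's sqlite3; the teams schema and sample data are assumed, and note that the ordering group_concat picks up from the inner subquery is a practical SQLite behaviour, not a formal guarantee:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE teams (div TEXT, team TEXT, wins INTEGER);
    INSERT INTO teams VALUES
        ('east', 'ajax', 12), ('east', 'feyenoord', 10), ('east', 'az', 8),
        ('east', 'psv', 6), ('west', 'utrecht', 9), ('west', 'twente', 7);
""")
rows = conn.execute("""
    SELECT div,
           rtrim(substr(s,  1, 10)) AS nr1,
           rtrim(substr(s, 11, 10)) AS nr2,
           rtrim(substr(s, 21, 10)) AS nr3
      FROM (SELECT div,
                   group_concat(substr(team || '          ', 1, 10), '') AS s
              FROM (SELECT div, team FROM teams ORDER BY div, wins+0 DESC)
             GROUP BY div)
""").fetchall()
print(rows)  # one row per division; a division with fewer than 3 teams gets ''
```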


Re: [sqlite] reporting number of changes

2008-12-11 Thread Edzard Pasma
--- [EMAIL PROTECTED] wrote:
> Your approach only works in simple cases.  The number of changes is a 
> connection/sqlite3* wide number - ie any SQLite statements associated with it 
> can cause changes.  This would certainly be the case when multi-threading is 
> used.  

Yes, agreed

> Even in single threading, if you have two statements running at the same time 
> (e.g. you are reading rows from one to feed to the other or something 
> similar) then the completion order will affect that changes counter.

This case seems alright, consider the following scheme where a query overlaps 
two update statements.

                          sqlite3         old             real
CURSOR  OPERATION         total_changes   total_changes   changes
1       execute query
1       fetch row 1
2       execute update    1               0               1
1       fetch row 2
2       execute update    2               1               1
1       end of iteration  2               2               0

The real changes are reflected correctly. 
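The delta-of-total_changes trick can be seen directly in Python's sqlite3, which exposes sqlite3_total_changes as Connection.total_changes (table name assumed for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (x)")
conn.executemany("INSERT INTO t1 VALUES (?)", [(1,), (2,), (3,)])

old_total = conn.total_changes               # snapshot after the inserts

conn.execute("SELECT * FROM t1").fetchall()  # a plain query changes nothing
assert conn.total_changes == old_total

conn.execute("DELETE FROM t1 WHERE x > 0")   # deletes all three rows
print(conn.total_changes - old_total)        # prints 3
```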

> By far a better approach would be to enter a ticket requesting that the 
> sqlite3_stmt_status api include row change counters.  That way the numbers 
> will be completely unambiguous and unaffected by other statements that are 
> executing.

http://www.sqlite.org/cvstrac/tktview?tn=3534


Re: [sqlite] reporting number of changes

2008-12-10 Thread Edzard Pasma
Possibly my previous post was too short. Anyway, I have a satisfactory solution 
now. My question arose when trying to change the apswtrace tool available to 
Python users. This gathers execution times via a sqlite3_profile callback 
function. When I call sqlite3_changes from this profiler function, and just 
blindly display the results on the summary report, this looks like:

CALLS   TIME    ROWS    SQL
1       .002    12      DELETE FROM t1
1       .002    12      SELECT * FROM t1

Which is wrong as the SELECT statement just repeats the number of rows of the 
last DML (DELETE). 

My solution is to use sqlite3_total_changes instead of sqlite3_changes, more or 
less like this:
    realchanges = sqlite3_total_changes() - old_total_changes
    if realchanges > 0:
        old_total_changes += realchanges


It appears satisfactory so far. I still wonder why a work-around like this is 
needed.

Edzard Pasma


--- [EMAIL PROTECTED] wrote:

From: "Edzard Pasma" <[EMAIL PROTECTED]>
To: <sqlite-users@sqlite.org>
Subject: [sqlite] reporting number of changes
Date: Tue, 9 Dec 2008 01:36:39 -0800

Hello,

The API function sqlite3_changes reports the number of rows changed in the last 
update/insert/delete. I'd like to use this in a generic SQL-tracing tool and 
find it a bit inconvenient as the value can only be used if the statement was 
an update/insert/delete. Is there a reason that the value is not reset for 
other type of statements? Or is there an easy way to find if a statement was an 
update?

Edzard Pasma




[sqlite] reporting number of changes

2008-12-09 Thread Edzard Pasma
Hello,

The API function sqlite3_changes reports the number of rows changed in the last 
update/insert/delete. I'd like to use this in a generic SQL-tracing tool and 
find it a bit inconvenient as the value can only be used if the statement was 
an update/insert/delete. Is there a reason that the value is not reset for 
other type of statements? Or is there an easy way to find if a statement was an 
update?

Edzard Pasma


[sqlite] Rounding is not so easy

2005-09-26 Thread Edzard Pasma
Hello,

I saw that ROUND (0) is not equal to 0. It took some time to understand this. 
In the first place, the outcome of ROUND is always of type text. That 
is necessarily so, as a decimally rounded number has no exact binary 
representation. For instance 1.23 is really 1.2299999999999999822... So text is more 
correct. The second thing to understand was that when comparing two 
expressions, sqlite does not try any data conversion. This way '0' is never 
equal to 0.

Knowing this, you can just use CAST (ROUND (...) AS NUMBER) for numerical 
expressions. 

I am mainly interested in speed, and found still an alternative not using ROUND 
() at all. For instance to round a number x to 1 decimal:

CAST (x * 10. - 0.5 + (x >= 0) AS INT) / 10.

This gives a factor 2 improvement compared to CAST (ROUND (x) AS NUMBER). But 
sqlite is so fast that you need to repeat it 100,000 times before noticing any 
wait time at all.

May this be something for a new built-in function?

Thanks, Edzard Pasma
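The CAST-based rounding trick still runs on current SQLite (note that modern round() returns a REAL rather than text, so the comparison issue described above no longer reproduces). A small sketch via Python's sqlite3; the helper name is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def fast_round1(x):
    """Round x to 1 decimal using the CAST trick from the post.

    For x >= 0 the expression adds 0.5 before truncation (round half up);
    for x < 0 it subtracts 0.5, and CAST AS INT truncates toward zero.
    """
    return conn.execute(
        "SELECT CAST(? * 10. - 0.5 + (? >= 0) AS INT) / 10.", (x, x)
    ).fetchone()[0]

print(fast_round1(1.26))   # 1.3
print(fast_round1(-1.26))  # -1.3
```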


_
Tired of spam and viruses? Get a VolcanoMail account with SpamShield Pro and 
Anti-Virus technology! http://www.volcanomail.com/


[sqlite] group by error in 3.2.6?

2005-09-20 Thread Edzard Pasma
There is a difference in the behaviour of GROUP BY in version 3.2.6. If you 
group by a column that contains NULL values, each NULL value comes on a 
separate output line. Conceptually not so bad, if we read NULL as "unknown" and 
not as "empty". But I guess it is an error.

For the rest I'm very happy with this version. Thanks, Edzard Pasma




[sqlite] Unlucky number for the ROUND function

2005-08-30 Thread Edzard Pasma
I found a number where the ROUND () function goes wrong:

SQLite version 3.2.5
Enter ".help" for instructions
sqlite> select round (9.95, 1);
:.0 

Sorry, it must be that I have bad luck today

Edzard Pasma
