Re: [sqlite] How often is xDisconnect called? (Was: Suggestion for syntax enhancement for virtual tables)

2015-01-05 Thread Dan Kennedy

On 01/06/2015 11:59 AM, Peter Aronson wrote:
It's this comment that makes me worry that xDisconnect can be called 
at times other than detach or close:


** When an in-memory Table object is deleted (for example when the
** schema is being reloaded for some reason), the VTable objects are not
** deleted and the sqlite3_vtab* handles are not xDisconnect()ed
** immediately. Instead, they are moved from the Table.pVTable list to
** another linked list headed by the sqlite3.pDisconnect member of the
** corresponding sqlite3 structure. They are then deleted/xDisconnected
** next time a statement is prepared using said sqlite3*. This is done
** to avoid deadlock issues involving multiple sqlite3.mutex mutexes.

I'm not sure exactly what this means, but it implies that xDisconnect 
can be called in the middle of a session.


It can. One scenario is if you ROLLBACK a transaction that includes 
schema modifications to the temp database. i.e. executing:


  BEGIN;
    CREATE TEMP TABLE t2(x);
  ROLLBACK;

will cause the xDisconnect() method of all virtual tables in the temp 
database to be invoked. New sqlite3_vtab objects will be requested via 
xConnect() the next time the virtual table is accessed.


Dan.






Peter

On 1/2/2015 3:00 PM, Peter Aronson wrote:
If only the xDisconnect method is called on a virtual table created in 
the temp database at disconnect time, is that the only time 
xDisconnect will be called?  The documentation at sqlite.org doesn't 
seem to say.  Jay Kreibich's Using SQLite says xDisconnect is "Called 
when a database containing a virtual table instance is detached or 
closed. Called once for each table instance."  But looking at the 
SQLite code and comments, I'm not sure this is true.  Is it?  If so, 
it would be easy enough when writing a Virtual Table Module to note 
that it is being created in the temp database, and do any required 
cleanup in xDisconnect instead of xDestroy for that instance.  But if 
xDisconnect can be called at other times, cleanup could be premature.
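
For instance, here is a minimal sketch of how a module might note this 
at creation time. The second argument passed to xCreate/xConnect is the 
name of the database holding the new table; the MyVtab struct and 
myCreate names below are hypothetical:

   #include <string.h>
   #include "sqlite3.h"

   typedef struct MyVtab MyVtab;
   struct MyVtab {
     sqlite3_vtab base;     /* Base class. Must be first. */
     int isTemp;            /* True if created in the temp database */
   };

   static int myCreate(sqlite3 *db, void *pAux, int argc,
                       const char *const *argv,
                       sqlite3_vtab **ppVtab, char **pzErr){
     MyVtab *p = sqlite3_malloc(sizeof(MyVtab));
     if( p==0 ) return SQLITE_NOMEM;
     memset(p, 0, sizeof(MyVtab));
     /* argv[1] is the name of the database ("main", "temp", ...) */
     p->isTemp = (strcmp(argv[1], "temp")==0);
     *ppVtab = &p->base;
     return sqlite3_declare_vtab(db, "CREATE TABLE x(a)");
   }

xDisconnect/xDestroy could then consult isTemp to decide how much 
cleanup to do.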


Best,

Peter


On Friday, January 2, 2015 12:56 AM, Dan Kennedy 
<danielk1...@gmail.com> wrote:




On 01/02/2015 01:58 PM, Hick Gunter wrote:
Temporary virtual tables sounds like an interesting concept. Does 
the xDestroy() function get called on such a beast (as opposed to 
xDisconnect() when the connection is closed)?

Just xDisconnect().

Dan.



   Should that function delete the backing store (even if a 
non-temporary virtual table is still connected)?


-Original Message-
From: Baruch Burstein [mailto:bmburst...@gmail.com]
Sent: Thursday, 01 January 2015 08:38
To: General Discussion of SQLite Database
Subject: [sqlite] Suggestion for syntax enhancement for virtual tables

For creating temporary virtual tables, currently you need to do:

CREATE VIRTUAL TABLE temp.t ...

Can this syntax be made to work too (similar to creating regular 
tables)?


CREATE VIRTUAL TEMP TABLE t ...
or
CREATE TEMP VIRTUAL TABLE t ...

--
˙uʍop-ǝpısdn sı ɹoʇıuoɯ ɹnoʎ 'sıɥʇ pɐǝɹ uɐɔ noʎ ɟı 











Re: [sqlite] VACUUM requires 6.7 times space ?

2015-01-05 Thread Dan Kennedy

On 01/05/2015 06:22 PM, Simon Slavin wrote:

I have a database file which is 120GB in size.  It consists of two huge tables 
and an index.
Its journal_mode is DELETE.

It is on a partition with 803GB of free space.  By my calculations I have 6.7 
times the amount of free space as the database is taking up.

I use the SQLite shell tool version 3.7.9 to run VACUUM on it.  The Shell tool 
bails out reporting

CPU Time: user 2113.596836 sys 437.660032
Error: near line 5 : database or disk full.

My understanding is that VACUUM can't take more than three times the current 
size of the database file.  What does the above error mean under these 
circumstances?


Probably running out of space wherever temp files are created.
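
If so, pointing the temp files at the big partition before running 
VACUUM may help. A sketch, assuming a unix-like system (the directory 
path is only an example):

  $ SQLITE_TMPDIR=/big/partition/tmp ./sqlite3 test.db
  sqlite> PRAGMA temp_store_directory = '/big/partition/tmp';
  sqlite> VACUUM;

(PRAGMA temp_store_directory is deprecated but still present in 3.7.9; 
the SQLITE_TMPDIR environment variable works on unix builds.)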

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] New column in select will not mask column of the same name in having clause and sqlite won't warn

2015-01-04 Thread Dan Kennedy

On 01/05/2015 02:52 PM, Dan Kennedy wrote:

On 01/05/2015 01:39 PM, Hick Gunter wrote:

This is completely legal and well defined.

HAVING is applied to the RESULT set of a SELECT.

The select asks to count the "distinct kontrola" in each group of 
kvadrat and datum; the HAVING clause specifies returning only those 
records with pocet > 1.


If there were no pocet column in table b, this would return only the 
non-empty groups, which is what the OP intended.


As there is a pocet column in table b, the HAVING clause refers to 
the original b.pocet which contains a (from the POV of the 
programmer) "randomly selected from the group" row's value. This is a 
documented SQLite feature.


SELECT kvadrat, datum, counted_pocet as pocet from (SELECT kvadrat, 
datum ,pocet, count(distinct kontrola) as counted_pocet from b group 
by kvadrat, datum HAVING pocet > 1);


SQLite prefers regular column references to aliases. For example:

  $ ./sqlite3
  SQLite version 3.8.8 2015-01-03 18:59:17
  sqlite> CREATE TABLE t1(a, b, c);
  sqlite> INSERT INTO t1 VALUES(1, 1, 1);
  sqlite> INSERT INTO t1 VALUES(2, 2, 2);
  sqlite> INSERT INTO t1 VALUES(3, 3, 3);

then:

  sqlite> SELECT a, b+1 AS c FROM t1 WHERE c=2;
  2|3
  sqlite> SELECT a, b+1 AS d FROM t1 WHERE d=2;
  1|2

In the first SELECT, the "c" in the WHERE clause still refers to 
column "c", despite the alias. In the second, "d" is an alias for 
"b+1". Or:


  sqlite> SELECT a, b*-1 AS c FROM t1 ORDER BY c ASC;
  3|-3
  2|-2
  1|-1

In the above, the "c" in the ORDER BY refers to (b*-1), not the column 
"c".


MySQL and Postgres do the same thing for the ORDER BY example. Other 
databases do not allow column aliases to be referred to from within 
WHERE clauses.


Is this actually documented anywhere? That original names trump aliases?

So I guess the same thing is happening in the HAVING clause. The 
column "pocet" is taking precedence over the alias "pocet".


MySQL does not support ungrouped columns in a GROUP BY clause. So it 
picks out the "pocet" alias and the query behaves as desired. Postgres 
does not support aliases in the GROUP BY clause, so the situation 
doesn't come up. But MySQL does agree with SQLite for the following:



In the above paragraph, read "HAVING" for "GROUP BY".





  $ ./sqlite3
  sqlite> CREATE TABLE t1(a INTEGER PRIMARY KEY, b INTEGER, c INTEGER);
  sqlite> INSERT INTO t1 VALUES(1, 1, 1), (2, 2, 2), (3, 1, 3);
  sqlite> SELECT count(*) AS b FROM t1 GROUP BY b HAVING b=2;
  1
  sqlite> SELECT count(*) AS d FROM t1 GROUP BY b HAVING d=2;
  2

Showing that given a choice, MySQL picks an original column over an 
alias within the HAVING clause as well.


So, I guess, not a bug...

Dan.

-Original Message-
From: James K. Lowden [mailto:jklow...@schemamania.org]
Sent: Saturday, 03 January 2015 00:45
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] New column in select will not mask column of 
the same name in having clause and sqlite won't warn


On Sun, 28 Dec 2014 17:46:08 +0100
Tomas Telensky <tomas.telen...@gmail.com> wrote:


select kvadrat, datum, count(distinct kontrola) as pocet from b group
by kvadrat, datum having pocet > 1

The problem was that pocet was actually a column in table b and I
didn't notice, and the having clause was using the table column
instead of the newly derived column specified in select clause.

So far so good, but sqlite should at least issue any warning, right?
I would say it should raise an error.  The HAVING clause should 
include at least one aggregate.  Comparing a column to a constant is 
the job of WHERE.


The accepted syntax is ambiguous.  Was the HAVING applied before or 
after the aggregation?  IOW, did you get


1.  the count for each {kvadrat, datum} pair for which pocet > 1, or 
2.  the count of {kvadrat, datum} pairs that have at least one pocet > 1


?

In the first case the counts would be smaller by the number of rows 
for which pocet <= 1.  In the second case result rows would be 
eliminated for pairs that contain only rows for which pocet <= 1.
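
Spelled out unambiguously against the OP's schema, the two readings 
would be (a sketch, not from the original query):

  -- 1. count within each pair, over only the rows with pocet > 1:
  SELECT kvadrat, datum, count(distinct kontrola)
    FROM b WHERE pocet > 1 GROUP BY kvadrat, datum;

  -- 2. keep pairs that contain at least one row with pocet > 1:
  SELECT kvadrat, datum, count(distinct kontrola)
    FROM b GROUP BY kvadrat, datum HAVING max(pocet) > 1;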


--jkl






Re: [sqlite] New column in select will not mask column of the same name in having clause and sqlite won't warn

2015-01-04 Thread Dan Kennedy

On 01/05/2015 01:39 PM, Hick Gunter wrote:

This is completely legal and well defined.

HAVING is applied to the RESULT set of a SELECT.

The select asks to count the "distinct kontrola" in each group of kvadrat and 
datum; the HAVING clause specifies returning only those records with pocet > 1.

If there were no pocet column in table b, this would return only the non-empty 
groups, which is what the OP intended.

As there is a pocet column in table b, the HAVING clause refers to the original b.pocet 
which contains a (from the POV of the programmer) "randomly selected from the 
group" row's value. This is a documented SQLite feature.

SELECT kvadrat, datum, counted_pocet as pocet from (SELECT kvadrat, datum ,pocet, 
count(distinct kontrola) as counted_pocet from b group by kvadrat, datum HAVING 
pocet > 1);


SQLite prefers regular column references to aliases. For example:

  $ ./sqlite3
  SQLite version 3.8.8 2015-01-03 18:59:17
  sqlite> CREATE TABLE t1(a, b, c);
  sqlite> INSERT INTO t1 VALUES(1, 1, 1);
  sqlite> INSERT INTO t1 VALUES(2, 2, 2);
  sqlite> INSERT INTO t1 VALUES(3, 3, 3);

then:

  sqlite> SELECT a, b+1 AS c FROM t1 WHERE c=2;
  2|3
  sqlite> SELECT a, b+1 AS d FROM t1 WHERE d=2;
  1|2

In the first SELECT, the "c" in the WHERE clause still refers to column 
"c", despite the alias. In the second, "d" is an alias for "b+1". Or:


  sqlite> SELECT a, b*-1 AS c FROM t1 ORDER BY c ASC;
  3|-3
  2|-2
  1|-1

In the above, the "c" in the ORDER BY refers to (b*-1), not the column "c".

MySQL and Postgres do the same thing for the ORDER BY example. Other 
databases do not allow column aliases to be referred to from within 
WHERE clauses.


Is this actually documented anywhere? That original names trump aliases?

So I guess the same thing is happening in the HAVING clause. The column 
"pocet" is taking precedence over the alias "pocet".


MySQL does not support ungrouped columns in a GROUP BY clause. So it 
picks out the "pocet" alias and the query behaves as desired. Postgres 
does not support aliases in the GROUP BY clause, so the situation 
doesn't come up. But MySQL does agree with SQLite for the following:


  $ ./sqlite3
  sqlite> CREATE TABLE t1(a INTEGER PRIMARY KEY, b INTEGER, c INTEGER);
  sqlite> INSERT INTO t1 VALUES(1, 1, 1), (2, 2, 2), (3, 1, 3);
  sqlite> SELECT count(*) AS b FROM t1 GROUP BY b HAVING b=2;
  1
  sqlite> SELECT count(*) AS d FROM t1 GROUP BY b HAVING d=2;
  2

Showing that given a choice, MySQL picks an original column over an 
alias within the HAVING clause as well.


So, I guess, not a bug...

Dan.

-Original Message-
From: James K. Lowden [mailto:jklow...@schemamania.org]
Sent: Saturday, 03 January 2015 00:45
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] New column in select will not mask column of the same 
name in having clause and sqlite won't warn

On Sun, 28 Dec 2014 17:46:08 +0100
Tomas Telensky <tomas.telen...@gmail.com> wrote:


select kvadrat, datum, count(distinct kontrola) as pocet from b group
by kvadrat, datum having pocet > 1

The problem was that pocet was actually a column in table b and I
didn't notice, and the having clause was using the table column
instead of the newly derived column specified in select clause.

So far so good, but sqlite should at least issue any warning, right?

I would say it should raise an error.  The HAVING clause should include at 
least one aggregate.  Comparing a column to a constant is the job of WHERE.

The accepted syntax is ambiguous.  Was the HAVING applied before or after the 
aggregation?  IOW, did you get

1.  the count for each {kvadrat, datum} pair for which pocet > 1, or
2.  the count of {kvadrat, datum} pairs that have at least one pocet > 1

?

In the first case the counts would be smaller by the number of rows for which pocet 
<= 1.  In the second case result rows would be eliminated for pairs that contain 
only rows for which pocet <= 1.

--jkl




Re: [sqlite] Getting SQLITE_IOERR_WRITE when running sqlite

2015-01-02 Thread Dan Kennedy

On 01/02/2015 04:44 PM, Waiba, Aswin wrote:

Hi,

I am currently using sqlite version 3.7.14 in our application. We are using it 
via a single thread, however we are getting SQLITE_IOERR (10) when running the 
application. After enabling the extended result code, we found out that we were 
getting SQLITE_IOERR_WRITE (778). After going through the sqlite code for 
version 3.7.14 I could see that the error was being thrown from unixWrite() and 
unixFileControl() (as the application is deployed on Unix). However we are 
unsure why the error is being thrown from those places.
Has anyone got these kinds of errors before and, if yes, how was it solved? Any 
feedback will be of great help.


It means a call to write(), pwrite(), fallocate() or similar has failed. 
This can happen because the disk ran out of space, or because the media 
was removed or perhaps is faulty. Or a bug in SQLite might be causing 
invalid parameters to be passed to one of these system calls.


Running under [strace] or equivalent might help to figure out why the 
system call is failing.
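
On Linux, for example (paths and program name hypothetical):

  $ strace -f -e trace=write,pwrite64,ftruncate -o trace.log ./yourapp
  $ grep ' = -1 ' trace.log     # failed system calls, with errno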


Dan.






Thanks and Regards
Aswin Waiba








Re: [sqlite] Suggestion for syntax enhancement for virtual tables

2015-01-01 Thread Dan Kennedy

On 01/02/2015 01:58 PM, Hick Gunter wrote:

Temporary virtual tables sounds like an interesting concept. Does the 
xDestroy() function get called on such a beast (as opposed to xDisconnect() 
when the connection is closed)?


Just xDisconnect().

Dan.




  Should that function delete the backing store (even if a non-temporary 
virtual table is still connected)?

-Original Message-
From: Baruch Burstein [mailto:bmburst...@gmail.com]
Sent: Thursday, 01 January 2015 08:38
To: General Discussion of SQLite Database
Subject: [sqlite] Suggestion for syntax enhancement for virtual tables

For creating temporary virtual tables, currently you need to do:

CREATE VIRTUAL TABLE temp.t ...

Can this syntax be made to work too (similar to creating regular tables)?

CREATE VIRTUAL TEMP TABLE t ...
or
CREATE TEMP VIRTUAL TABLE t ...

--
˙uʍop-ǝpısdn sı ɹoʇıuoɯ ɹnoʎ 'sıɥʇ pɐǝɹ uɐɔ noʎ ɟı 




Re: [sqlite] journal file is not removed when ATOMIC WRITE is enabled

2014-12-29 Thread Dan Kennedy

On 12/29/2014 07:57 AM, Yongil Jang wrote:

For more information,

In pager_end_transaction() function,

  int bDelete = (!pPager->tempFile &&
      sqlite3JournalExists(pPager->jfd));   <-- sqlite3JournalExists() returns 0

I think both of pager_end_transaction() and sqlite3JournalExists()
functions work properly in this scenario.
However, could it (two files existing at the same time) cause a problem?


It's a curious situation, but should not cause a problem.

When a new connection reads from the db for the first time, it will open 
and read the journal file in order to determine that it is not a 
hot-journal, then close it and proceed to open the db in wal mode. Once 
the db has been opened in wal mode, the cold journal file will be 
ignored completely.


So the net effect will be a slight overhead when a connection opens its 
first read transaction on the db.


Dan.








2014-12-29 9:40 GMT+09:00 Yongil Jang <yongilj...@gmail.com>:

Dear developers,

Please, look at following instructions.

1) Add SQLITE_IOCAP_ATOMIC or SQLITE_IOCAP_ATOMIC4K flags to
unixDeviceCharacteristics() function (or any OS related functions)
2) Add SQLITE_ENABLE_ATOMIC_WRITE to compile option
3) Compile
4) run sqlite3:  sqlite3 test.db
5) sqlite> pragma journal_mode=persist;
6) sqlite> create table a(id);
7) sqlite> pragma journal_mode=wal;
8) sqlite> insert into a values (1);

With these instructions, 'test.db-journal' and 'test.db-wal' exist
at the same time.

Regards,
Yongil Jang.





Re: [sqlite] FTS4 Problem

2014-12-24 Thread Dan Kennedy

On 12/25/2014 08:04 AM, Peter Truskier wrote:

As I mentioned, I seem to have solved the problem by doing a "rebuild" command 
on the FTS4 table. But, as I thought about it further, I'm still confused as to why 
dropping and then re-creating the virtual table didn't solve the problem as well.

What am I missing?


Creating an FTS4 table that uses the "content=" option does not 
automatically populate the FTS index. It just creates an empty FTS index 
that SQLite assumes the user will somehow take care of populating.
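
The 'rebuild' command does exactly that - it repopulates the index from 
the content table:

  INSERT INTO tblFTSProducts(tblFTSProducts) VALUES('rebuild');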


Dan.









Thanks,

Peter



On Dec 24, 2014, at 12:03 PM, Peter Truskier <ptrusk...@gmail.com> wrote:

Thanks so much for the quick AND HELPFUL response!

I had run the pragma, but was unaware of the command. When I ran the 'integrity-check' 
command on the virtual table, I got a "database disk image is malformed" error (code 
11).

Running the 'rebuild' command seems to have fixed the problem. I guess one of 
the triggers for keeping the virtual table up to date must have failed for some 
reason.

Thank you again!

--
Peter Truskier
Berkeley, CA USA
1-510-495-6442




On Dec 24, 2014, at 11:47 AM, Richard Hipp <d...@sqlite.org> wrote:

Have you run integrity checks on the database file (
https://www.sqlite.org/pragma.html#pragma_integrity_check) and on the FTS4
tables (https://www.sqlite.org/fts3.html#integcheck)?  Do they all look
correct?

On Wed, Dec 24, 2014 at 2:40 PM, Peter Truskier <ptrusk...@gmail.com> wrote:


I have an sqlite database in which I've created a virtual table using
FTS4. For nearly a year, we've been using it to do full text searching with
no problem.

The database contains a table of products (tblProducts) with columns id,
SKU, itemDesc, etc.

The virtual table is created like this:

  CREATE VIRTUAL TABLE tblFTSProducts USING fts4(content="tblProducts",
SKU, itemDesc)

A couple of days ago, full text searches (using "MATCH") suddenly stopped
working - always returning an empty recordset with no error. The data in
the virtual table appears to be correct.

If I do a query on the virtual table like this:

  SELECT * FROM tblFTSProducts WHERE itemDesc LIKE '%inches%',

I get a valid recordset containing the expected records. But, if I do this:

  SELECT * FROM tblFTSProducts WHERE tblFTSProducts MATCH 'inches',

I get an empty recordset.

I've checked the integrity of the database, and it is reported to be good.
I've tried dropping and re-creating the virtual table, and still get the
same behavior.

Does anyone have any suggestion for what might suddenly cause this
behavior after working for months and months?

Thanks, and happy holidays!







--
D. Richard Hipp
d...@sqlite.org




Re: [sqlite] SQLITE wal file size keeps growing

2014-12-18 Thread Dan Kennedy

On 12/19/2014 11:22 AM, Kushagradhi Bhowmik wrote:

I am writing continuously into a db file which has PRAGMA journal_mode=WAL,
PRAGMA journal_size_limit=0. My C++ program has two threads, one
reader (queries at 15 sec intervals) and one writer (inserts at 5 sec
intervals).

Every 3 min I am pausing insertion to run a sqlite3_wal_checkpoint_v2()
from the writer thread with the mode parameter as
SQLITE_CHECKPOINT_RESTART. To ensure that no active read operations are
going on at this point, I set a flag that checkpointing is about to take
place and wait for reader to complete (the connection is still open) before
running checkpoint. After checkpoint completion I again indicate to readers
it is okay to resume querying.


It shouldn't hurt, but you should not have to manage the readers that 
way. SQLITE_CHECKPOINT_RESTART should wait on readers as required to 
ensure that the next writer can write into the start of the wal file 
instead of appending. If SQLITE_CHECKPOINT_RESTART returns SQLITE_OK, 
the next writer should be able to restart the wal file.


If you register an sqlite3_wal_hook() callback it will be invoked to 
report the size of the wal file after each write transaction. Logging 
this information along with the checkpoint attempts and return codes 
might help to shed light on the problem.
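
A minimal sketch (the walHook name is invented; install it once after 
opening the connection):

   #include <stdio.h>
   #include "sqlite3.h"

   static int walHook(void *pArg, sqlite3 *db, const char *zDb, int nFrame){
     /* nFrame is the number of frames in the wal file after the commit */
     fprintf(stderr, "wal: db=%s frames=%d\n", zDb, nFrame);
     return SQLITE_OK;
   }

   /* ... after sqlite3_open(): */
   sqlite3_wal_hook(db, walHook, 0);

Note that registering your own hook replaces the one installed by 
sqlite3_wal_autocheckpoint(), disabling automatic checkpoints - which 
should be fine here, as you checkpoint manually anyway.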


Dan.







sqlite3_wal_checkpoint_v2() returns SQLITE_OK, and pnLog and pnCkpt as
equal (around 4000), indicating the complete wal file has been synced with
the main db file. So the next write should start from the beginning
according to the documentation. However, this does not seem to be happening as the
subsequent writes cause the WAL file to grow indefinitely, eventually up to
some GBs.

I did some searching and found that readers can cause checkpoint
failure due to open transactions. However, the only reader I'm using is
ending its transaction before the checkpoint starts. What else could be
causing the WAL file to keep growing?




Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-18 Thread Dan Kennedy

On 12/18/2014 04:16 PM, Paul wrote:

Hi, Dan.


On 12/18/2014 02:41 PM, Paul wrote:

I want to confirm that issue is fixed for me.
Thanks again, Dan!


Please ignore this update, patch fixes this problem as well.


I want to add even more input for this issue.
I understand why there is an implicit savepoint when I remove a row from the 'parent' 
table.
But why is this also true for the 'child' table when I perform 'INSERT OR 
REPLACE'?
Removing the FK reference disables journal growth. I don't understand...



At the end of the day my head was so big that I, having some ten different test 
cases, erroneously confirmed that 'INSERT OR REPLACE' is fixed also, by running the 
wrong test.
But sadly it isn't. Here, I'll drop my test program again, for clarity.

I don't think there is an easy fix for this one. The statement journal
is required, as SQLite may need to reinstate rows deleted by the REPLACE
processing if the FK constraint fails.

To fix this properly, it probably needs to use a more sophisticated data
structure than the statement journal. Which would complicate things
some. But at the moment it seems like SAVEPOINT and very large
transactions don't work well together.


I understand. I guess, I'll have to stick to UPDATE <-> INSERT.
Thank you for taking your time.

Just out of curiosity, I want to ask one more question.
How can the FK constraint fail if I am removing (replacing) a row from the 'child' 
table?


The FK constraint can fail because a new row is being inserted into the 
child table. The reason statement rollback may be required is because 
any replaced rows will be removed before SQLite has a chance to figure 
out if the INSERT actually does violate the PK constraint.
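
To illustrate with a hypothetical schema (not the one from your test 
program), where the REPLACE conflict is on a UNIQUE column:

  PRAGMA foreign_keys = ON;
  CREATE TABLE parent(id INTEGER PRIMARY KEY);
  CREATE TABLE child(
    id INTEGER PRIMARY KEY,
    pid INTEGER REFERENCES parent(id),
    u UNIQUE
  );
  INSERT INTO parent VALUES(1);
  INSERT INTO child VALUES(1, 1, 'x');

  -- This first deletes row (1, 1, 'x') because of the conflict on "u",
  -- then inserts the new row. The new row's pid=2 violates the FK, so
  -- the statement fails and the deleted row must be reinstated.
  INSERT OR REPLACE INTO child VALUES(2, 2, 'x');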


Dan.








Best regards,
Paul




Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-18 Thread Dan Kennedy

On 12/18/2014 02:41 PM, Paul wrote:

I want to confirm that issue is fixed for me.
Thanks again, Dan!


Please ignore this update, patch fixes this problem as well.


I want to add even more input for this issue.
I understand why there is an implicit savepoint when I remove a row from the 'parent' 
table.
But why is this also true for the 'child' table when I perform 'INSERT OR 
REPLACE'?
Removing the FK reference disables journal growth. I don't understand...




At the end of the day my head was so big that I, having some ten different test 
cases, erroneously confirmed that 'INSERT OR REPLACE' is fixed also, by running the 
wrong test.
But sadly it isn't. Here, I'll drop my test program again, for clarity.


I don't think there is an easy fix for this one. The statement journal 
is required, as SQLite may need to reinstate rows deleted by the REPLACE 
processing if the FK constraint fails.


To fix this properly, it probably needs to use a more sophisticated data 
structure than the statement journal. Which would complicate things 
some. But at the moment it seems like SAVEPOINT and very large 
transactions don't work well together.


Dan.




Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-17 Thread Dan Kennedy

On 12/16/2014 03:08 PM, Paul wrote:

The memory is being used by the statement journal, which you have in
memory. If the app did not set "journal_mode=memory" and
"temp_store=memory", SQLite would create a really large temp file
instead of using memory. Which would still be sub-optimal, but might
not run into the 32-bit limit.

The reason the statement journal is growing so large is that SQLite
only truncates the statement journal when the outermost
sub-transaction is closed. Otherwise it just keeps on appending. i.e.

BEGIN;
SAVEPOINT xxx;
...
SAVEPOINT yyy;
...
RELEASE yyy; -- does not truncate statement journal
COMMIT xxx; -- truncates statement journal
COMMIT;

Your example has one explicit sub-transaction (equivalent to xxx) and
each DELETE statement is opening a second, implicit, sub-transaction
(equivalent to yyy).

With the child records included, each DELETE statement is modifying 3
db pages - one from table "bar", one from table "foo" and one from the
PRIMARY KEY index on "foo". 100,000 deletes, 4Kib page size, 3 pages
per delete == 1.2GiB. Or without the child records, just 1 page
modified per delete, so closer to 400MiB of memory. Without the
sub-transaction, the implicit sub-transaction created by each DELETE
becomes the outermost and so the statement journal doesn't grow much
at all. So not much memory used in that case.

Another idea would be to use a deferred foreign key constraint. That way
the DELETE operations will not need the statement journal at all.


How can I get around the implicit savepoint creation?
Why doesn't a savepoint commit truncate the journal?
Why does the journal grow even when there is nothing to delete in bar?

Currently this limitation renders use of sqlite impossible, unless using ugly 
hacks.
From the user's perspective, this overhead is unimaginable. This is completely
normal use of SQL, yet the overhead is beyond the wildest imagination :(

Also, I do not understand: how does it become the outermost? The journal never grows
if there is only a single transaction (or savepoint) around the 'delete loop'.
Why does this not happen with just a single transaction around the deletes?
Are you saying there is no YYY savepoint? Or can the journal be truncated
when omitting BEGIN or XXX, but not when both are present?

Please don't mind my last message.

I understand now what is going on. Yet this limitation is pretty depressing.
Is there no way for things to change in the future?
Can't the journal be truncated, or modified pages be merged, after each 
consecutive implicit sub-transaction (YYY) is released, while they are 
still in the cache?

Is there any hope for me, except deferred FKs or DELETE FROM foo WHERE id IN 
(...)?


There is now an update on the fossil trunk that should fix the problem 
with ON DELETE CASCADE:


http://www.sqlite.org/src/info/8c5dd6cc259e0cdaaddaa52ccfa96fee6b166906

Dan.






Best regards,
Paul




Re: [sqlite] SQLITE_FTS3_MAX_EXPR_DEPTH and upgrading sqlite

2014-12-17 Thread Dan Kennedy

On 12/16/2014 10:57 PM, Ed Willis wrote:

Hello all,

Apologies in advance if this question has been asked and answered elsewhere – a 
(brief, admittedly) search did not turn up anything and so I’m posting this.

We’re in the process of upgrading sqlite in our service.  We were on a version 
which did not have the compile option SQLITE_FTS3_MAX_EXPR_DEPTH and are moving 
up to one that does.  As it turns out we have run into a problem with one of 
our clients where they hit this limit now where previously the query just 
worked.  My question is whether or not there’s any guidance on how to set this 
limit at compile time?  Part of my confusion is that I’m not sure what would 
have happened previously with no limit enforced (as was the case on our older 
version of sqlite) - was the risk stack exhaustion and a resulting crash or was 
it something else entirely?

Basically what I’m worried about is that we'll raise the limit to allow this 
one client to do their queries as they used to do, but will not know that we’ve 
raised it enough to allow all our clients to get the same behavior they were 
accustomed to?  How do people choose this limit?


The point of the setting is to prevent stack overflow. I guess to get 
the old behavior, set it to a very large value.
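
For example, when rebuilding the amalgamation (the value here is just 
an illustration):

  cc -DSQLITE_ENABLE_FTS3 -DSQLITE_FTS3_MAX_EXPR_DEPTH=1000 -c sqlite3.c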


Are you able to post the FTS query that caused a problem with the 
default settings? Just the MATCH expression will be enough, we don't 
need the table schema or contents.


Dan.


Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-15 Thread Dan Kennedy

On 12/15/2014 11:59 PM, Dan Kennedy wrote:

On 12/15/2014 11:11 PM, Paul wrote:

Hello, dear developers

Recently I've stumbled upon a very rare and strange bug.
The result of this is abnormal memory usage that does not allow us 
to remove a
fair number of rows from a table due to the limit of memory 
available to a 32-bit
process. This is strange, because the database size is somewhat small: 79M.
Digging around, I finally managed to pinpoint when exactly this 
issue occurs.

Another strange thing, though, is that memory is successfully deallocated,
because no matter what, valgrind does not report definitely lost memory.

I want to present you my test case. You have two options to 
manipulate it.

Two defines:
  - NO_NESTED_TRANSACTION disables the nested transaction.
  - NO_CHILD_RECORDS disables population of 'child' table with data.

To compile:
# clang -o test -L/usr/local/lib -lsqlite3 test.c

My results

Without defines:
# clang -o test -L/usr/local/lib -lsqlite3 test.c
# ./test
   Current mem: 0
   Hi mem: 1294136920

Not okay, 1.2GiB peak memory usage.


With NO_CHILD_RECORDS
# clang -o test -DNO_CHILD_RECORDS -L/usr/local/lib -lsqlite3 test.c
# ./test
   Current mem: 0
   Hi mem: 421141176

A bit better, but still not ok.


With NO_NESTED_TRANSACTION:
# clang -o test -DNO_NESTED_TRANSACTION -L/usr/local/lib -lsqlite3 
test.c

# ./test
   Current mem: 0
   Hi mem: 15100760

Seems ok.


With NO_NESTED_TRANSACTION and NO_CHILD_RECORDS:
# clang -o test -DNO_NESTED_TRANSACTION -DNO_CHILD_RECORDS 
-L/usr/local/lib -lsqlite3 test.c

# ./test
   Current mem: 0
   Hi mem: 2554168

No doubt it's even better.


The memory is being used by the statement journal, which you have in 
memory. If the app did not set "journal_mode=memory" and 
"temp_store=memory", SQLite would create a really large temp file 
instead of using memory. Which would still be sub-optimal, but might 
not run into the 32-bit limit.


The reason the statement journal is growing so large is that SQLite 
only truncates the statement journal when the outermost 
sub-transaction is closed. Otherwise it just keeps on appending. i.e.


  BEGIN;
SAVEPOINT xxx;
  ...
  SAVEPOINT yyy;
  ...
  RELEASE yyy;   -- does not truncate statement journal
COMMIT xxx;  -- truncates statement journal
  COMMIT;

Your example has one explicit sub-transaction (equivalent to xxx) and 
each DELETE statement is opening a second, implicit, sub-transaction 
(equivalent to yyy).


With the child records included, each DELETE statement is modifying 3 
db pages - one from table "bar", one from table "foo" and one from the 
PRIMARY KEY index on "foo". 100,000 deletes, 4KiB page size, 3 pages 
per delete == 1.2GiB. Or without the child records, just 1 page 
modified per delete, so closer to 400MiB of memory. Without the 
sub-transaction, the implicit sub-transaction created by each DELETE 
becomes the outermost and so the statement journal doesn't grow much 
at all. So not much memory used in that case.


Another idea would be to use a deferred foreign key constraint. That way 
the DELETE operations will not need the statement journal at all.
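
A sketch against the foo/bar schema from your test program; either 
declare the constraint deferred, or defer existing immediate FKs for 
the duration of the transaction (PRAGMA defer_foreign_keys has been 
available since 3.8.0):

  FOREIGN KEY(id) REFERENCES foo(id) ON DELETE CASCADE
    DEFERRABLE INITIALLY DEFERRED

  -- or, at runtime:
  PRAGMA defer_foreign_keys = ON;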






Re: [sqlite] Abnormal memory usage when removing rows within nested transaction from a table, referenced by another table via FK,

2014-12-15 Thread Dan Kennedy

On 12/15/2014 11:11 PM, Paul wrote:

Hello, dear developers

Recently I've stumbled upon a very rare and strange bug.
The result of this is abnormal memory usage that does not allow us to remove a
fair number of rows from a table due to the limit of memory available to a 32-bit
process. This is strange, because the database size is somewhat small: 79M.
Digging around, I finally managed to pinpoint when exactly this issue occurs.
Another strange thing, though, is that memory is successfully deallocated,
because no matter what, valgrind does not report definitely lost memory.

I want to present you my test case. You have two options to manipulate it.
Two defines:
  - NO_NESTED_TRANSACTION disables the nested transaction.
  - NO_CHILD_RECORDS disables population of the 'child' table with data.

To compile:
# clang -o test -L/usr/local/lib -lsqlite3 test.c

My results

Without defines:
# clang -o test -L/usr/local/lib -lsqlite3 test.c
# ./test
   Current mem: 0
   Hi mem: 1294136920

Not okay, 1.2GiB peak memory usage.


With NO_CHILD_RECORDS
# clang -o test -DNO_CHILD_RECORDS -L/usr/local/lib -lsqlite3 test.c
# ./test
   Current mem: 0
   Hi mem: 421141176

A bit better, but still not ok.


With NO_NESTED_TRANSACTION:
# clang -o test -DNO_NESTED_TRANSACTION -L/usr/local/lib -lsqlite3 test.c
# ./test
   Current mem: 0
   Hi mem: 15100760

Seems ok.


With NO_NESTED_TRANSACTION and NO_CHILD_RECORDS:
# clang -o test -DNO_NESTED_TRANSACTION -DNO_CHILD_RECORDS -L/usr/local/lib 
-lsqlite3 test.c
# ./test
   Current mem: 0
   Hi mem: 2554168

No doubt it's even better.


The memory is being used by the statement journal, which you have in 
memory. If the app did not set "journal_mode=memory" and 
"temp_store=memory", SQLite would create a really large temp file 
instead of using memory. Which would still be sub-optimal, but might not 
run into the 32-bit limit.


The reason the statement journal is growing so large is that SQLite only 
truncates the statement journal when the outermost sub-transaction is 
closed. Otherwise it just keeps on appending. i.e.


  BEGIN;
SAVEPOINT xxx;
  ...
  SAVEPOINT yyy;
  ...
  RELEASE yyy;   -- does not truncate statement journal
COMMIT xxx;  -- truncates statement journal
  COMMIT;

Your example has one explicit sub-transaction (equivalent to xxx) and 
each DELETE statement is opening a second, implicit, sub-transaction 
(equivalent to yyy).


With the child records included, each DELETE statement is modifying 3 db 
pages - one from table "bar", one from table "foo" and one from the 
PRIMARY KEY index on "foo". 100,000 deletes, 4KiB page size, 3 pages per 
delete == 1.2GiB. Or without the child records, just 1 page modified per 
delete, so closer to 400MiB of memory. Without the sub-transaction, the 
implicit sub-transaction created by each DELETE becomes the outermost 
and so the statement journal doesn't grow much at all. So not much 
memory used in that case.


Dan.


test.c

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sqlite3.h>

int main(int argc, char ** argv)
{
 const char * database_file = "/tmp/memusage_test_db";

 // Clear old database file is there is one.
 unlink(database_file);

 sqlite3 * db = NULL;
 if (sqlite3_open(database_file, &db) != SQLITE_OK)
 return 1;

 // Set busy timeout just in case...
 if (sqlite3_busy_timeout(db, 1) != SQLITE_OK) {
 sqlite3_close(db);
 return 1;
 }

 // Set pragmas.
 if (sqlite3_exec(db,
  " PRAGMA page_size = 4096;"
  " PRAGMA temp_store = MEMORY;"
  " PRAGMA journal_mode = MEMORY;"
  " PRAGMA cache_size = 1;"
  " PRAGMA foreign_keys = ON;"
  " PRAGMA synchronous = OFF;",
  NULL, NULL, NULL) != SQLITE_OK) {
 sqlite3_close(db);
 return 1;
 }

 // Create database structure.
 if (sqlite3_exec(db,
  "CREATE TABLE foo ("
  "  id  INTEGER,"
  "  x   INTEGER,"
  "  PRIMARY KEY(id)"
  ");"
  ""
  "CREATE TABLE bar ("
  "  id  INTEGER,"
  "  y   INTEGER,"
  "  PRIMARY KEY(id, y),"
  "  FOREIGN KEY(id) REFERENCES foo(id) ON DELETE CASCADE"
  ");",
  NULL, NULL, NULL) != SQLITE_OK) {
 sqlite3_close(db);
 return 1;
 }

 // Populate database with data.
 {
 // Open transaction.
 if (sqlite3_exec(db, "BEGIN IMMEDIATE;", NULL, NULL, NULL) != SQLITE_OK)

Re: [sqlite] Query Planner for Virtual Tables: link table evaluation & transitive property of constraints not used

2014-12-15 Thread Dan Kennedy

On 12/12/2014 09:22 PM, Josef Kučera wrote:

Hello,
I am trying to use SQLite's marvellous Virtual Table mechanism as a SQL
layer for querying an in memory storage. This works good, but I have a
problem with more complex queries. When querying a real SQLite database it
correctly moves the constant conditions across joined tables to optimize
the execution plan (I think this was implemented in the 3.7.17 release).
Unfortunately for virtual tables this does not seem to be supported. I can
overcome this limitation by manually tuning the SQL, but it will help if
the query planner can do this automatically.

The major problem I have is with link table evaluation. Imagine a SQL like
"select * from A join B on A.ID=B.ID join C on C.ID=B.LINKID". The current
implementation evaluates cost of B only as B (ID, LINKID) causing the
execution to perform a full scan on either A or C. This seems to be caused
by the implementation of whereLoopAddVirtual() function. I think it should
evaluate cost for terms separated by tables in the right term as well, e.g.
for the mentioned SQL, table B, it should try B(), B(ID), B(LINKID), B(ID,
LINKID) instead of only B() and B(ID, LINKID).

What should I do?


You want this (or the same thing with the roles of "A" and "C" reversed):

  * a full-scan on A,
  * a lookup on B by (b.id=?)
  * a lookup on C by (c.id=?)

correct?

It's tricky. As you say, xBestIndex() will currently be invoked twice - 
once with no constraints usable and once with both "b.id=?" and 
"b.linkid=?" usable. I guess the reason it is not invoked in the other 
ways you suggest is that that strategy might conceivably require a huge 
number of xBestIndex() calls if there were more than a few other tables 
in the join.


You could change the query so that only one of the constraints is 
visible to the virtual table implementation. Say:


  select * from A join B on A.ID=B.ID join C on C.ID=+B.LINKID

Or rework the virtual table code so that it knows only to use one of 
"b.id=?" or "b.linkid=?" at a time. If the xBestIndex only uses one of 
the constraints, the planner should do the right thing.
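
A rough sketch of that second approach (names hypothetical): an 
xBestIndex that offers the planner at most one equality constraint per 
invocation:

   static int myBestIndex(sqlite3_vtab *tab, sqlite3_index_info *pInfo){
     int i;
     for(i=0; i<pInfo->nConstraint; i++){
       const struct sqlite3_index_constraint *p = &pInfo->aConstraint[i];
       if( p->usable && p->op==SQLITE_INDEX_CONSTRAINT_EQ ){
         pInfo->aConstraintUsage[i].argvIndex = 1; /* passed to xFilter */
         pInfo->aConstraintUsage[i].omit = 1;
         pInfo->idxNum = p->iColumn;     /* which column is constrained */
         pInfo->estimatedCost = 10.0;    /* cheap: indexed lookup */
         return SQLITE_OK;
       }
     }
     pInfo->idxNum = -1;                 /* no constraint: full scan */
     pInfo->estimatedCost = 1000000.0;
     return SQLITE_OK;
   }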


Dan.




Re: [sqlite] Online/Hot backup of WAL journalling mode database

2014-12-12 Thread Dan Kennedy

On 12/12/2014 03:31 AM, Nick wrote:

On 11 Dec 2014, at 10:08, Dan Kennedy wrote:


On 12/11/2014 05:49 AM, Nick wrote:

On 10 Dec 2014, at 07:35, Dan Kennedy wrote:


Strictly speaking the database file may not be well-formed even if there is no 
ongoing checkpoint. If:

  a) process A opens a read transaction,
  b) process B opens and commits a write transaction to the database,
  c) process C checkpoints the db,

then the db file considered without the *-wal file may be corrupt. The problem 
comes about because process C can only checkpoint frames up until the start of 
B's transaction. And there is an optimization that will prevent it from copying 
any earlier frames for which there exists a frame in B's transaction that 
corresponds to the same database page. So it effectively copis only a subset of 
the modifications made by earlier transactions into the db file - not 
necessarily creating a valid db file.

Can this corruption be detected by running PRAGMA quick_check / 
integrity_check? Having the occasional backup db corrupted would be tolerable.

In many cases, but not generally. There would exist cases where a part of a 
committed transaction was lost, or the values in unindexed columns where 
replaced, that sort of thing.

Ok. Presumably a SQLITE_CHECKPOINT_FULL or SQLITE_CHECKPOINT_RESTART 
checkpoint mode would ensure the db file is valid?


That sounds right. A successful FULL or RESTART checkpoint will always 
copy entire transactions into the db.
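
e.g. (a minimal sketch, assuming an open connection "db"):

   int nLog = 0, nCkpt = 0;
   int rc = sqlite3_wal_checkpoint_v2(
       db, "main", SQLITE_CHECKPOINT_FULL, &nLog, &nCkpt);
   if( rc==SQLITE_OK && nLog==nCkpt ){
     /* every frame in the wal has been copied into the db file */
   }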





Re: [sqlite] Online/Hot backup of WAL journalling mode database

2014-12-11 Thread Dan Kennedy

On 12/11/2014 05:49 AM, Nick wrote:

On 10 Dec 2014, at 07:35, Dan Kennedy wrote:


Strictly speaking the database file may not be well-formed even if there is no 
ongoing checkpoint. If:

  a) process A opens a read transaction,
  b) process B opens and commits a write transaction to the database,
  c) process C checkpoints the db,

then the db file considered without the *-wal file may be corrupt. The problem 
comes about because process C can only checkpoint frames up until the start of 
B's transaction. And there is an optimization that will prevent it from copying 
any earlier frames for which there exists a frame in B's transaction that 
corresponds to the same database page. So it effectively copis only a subset of 
the modifications made by earlier transactions into the db file - not 
necessarily creating a valid db file.

Can this corruption be detected by running PRAGMA quick_check / 
integrity_check? Having the occasional backup db corrupted would be tolerable.


In many cases, but not generally. There would exist cases where a part 
of a committed transaction was lost, or the values in unindexed columns 
where replaced, that sort of thing.








Regards
Nick




Re: [sqlite] Online/Hot backup of WAL journalling mode database

2014-12-09 Thread Dan Kennedy

On 12/10/2014 05:06 AM, Simon Slavin wrote:

On 9 Dec 2014, at 8:57pm, Nick <maill...@css-uk.net> wrote:


Environment is Linux with multiple (c. 4-6) processes accessing a single sqlite database 
named "test.db".

Backup:
- New process started using cronjob to initiate application checkpoint until 
completion.
- rsync diff the file "test.db" to another drive/location (specifically ignoring the 
"-shm" and "-wal" file).
- exit process

Restore:
- rsync the file "test.db" from another drive/location.

Will not be trustworthy if the database is being written to during the rsync 
operations.




Strictly speaking the database file may not be well-formed even if there 
is no ongoing checkpoint. If:


  a) process A opens a read transaction,
  b) process B opens and commits a write transaction to the database,
  c) process C checkpoints the db,

then the db file considered without the *-wal file may be corrupt. The 
problem comes about because process C can only checkpoint frames up 
until the start of B's transaction. And there is an optimization that 
will prevent it from copying any earlier frames for which there exists a 
frame in B's transaction that corresponds to the same database page. So 
it effectively copies only a subset of the modifications made by earlier 
transactions into the db file - not necessarily creating a valid db file.


Dan.









A) Ensure all processes besides the backup process have the database closed 
while it is being copied. Establish some kind of semaphore so they can tell 
when it's safe to open the database again.

B) Use the SQLite Backup API which was invented to do what you want.





Simon.




Re: [sqlite] sqlite bugreport : unique index causes valid updates to fail

2014-12-08 Thread Dan Kennedy

On 12/08/2014 09:55 PM, Nico Williams wrote:

Ideally there would be something like DEFERRED foreign key checking
for uniqueness constraints...


You could hack SQLite to enforce unique constraints the same way as 
FKs. When adding an entry to a UNIQUE index b-tree, you check for a 
duplicate. If one exists, increment a counter. Do the opposite when 
removing entries - decrement the counter if there are two or more 
duplicates of the entry you are removing. If your counter is greater 
than zero at commit time, a UNIQUE constraint has failed.


I suspect there would be a non-trivial increase in the CPU use of UPDATE 
statements though.
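
The counter idea in isolation, as a toy runnable sketch (plain C, not 
SQLite internals; a linear-scan array stands in for the index b-tree):

   #include <stdio.h>

   static int aKey[100], nKey = 0;
   static int nViolation = 0;          /* per-transaction counter */

   static int countKey(int k){
     int i, n = 0;
     for(i=0; i<nKey; i++) if( aKey[i]==k ) n++;
     return n;
   }
   static void insertKey(int k){
     if( countKey(k)>0 ) nViolation++; /* a duplicate was added */
     aKey[nKey++] = k;
   }
   static void deleteKey(int k){
     int i;
     for(i=0; i<nKey; i++) if( aKey[i]==k ){ aKey[i] = aKey[--nKey]; break; }
     if( countKey(k)>=1 ) nViolation--; /* removed one of 2+ duplicates */
   }

   int main(void){
     insertKey(7); insertKey(7);       /* transient duplicate */
     deleteKey(7);                     /* resolved before commit */
     printf("commit %s\n", nViolation==0 ? "ok" : "violates UNIQUE");
     return 0;
   }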







   You can get something like that by
using non-unique indexes (but there would also go your primary keys)
and then check that there are no duplicates before you COMMIT.  (Doing
this reliably would require something like transaction triggers, which
IIRC exists in a "sessions" branch.)

Nico
--




Re: [sqlite] Read overflow in CREATE VIRTUAL TABLE query (formerly Corrupting pointers to the lookaside smallacator)

2014-11-27 Thread Dan Kennedy

On 11/27/2014 05:56 PM, Paul wrote:

Currently we use various versions of SQLite:

SQLite version 3.8.0.1 2013-08-29 17:35:01
SQLite version 3.8.2 2013-12-06 14:53:30
SQLite version 3.8.6 2014-08-15 11:46:33
SQLite version 3.8.7 2014-10-17 11:24:17


All of them are affected so I never considered it to be an sqlite bug.
But analyzing core file it seems like very much an sqlite bug :/

Tell me if you need more info on this.



Thanks for tracking this down. Should be fixed here:

  http://www.sqlite.org/src/info/f095cde579e7417306

As far as I can see this is "just" a buffer overread - there is no 
chance of an overwrite or database corruption. Easiest workaround is to 
append "()" to your CREATE VIRTUAL TABLE statement. i.e.


  CREATE VIRTUAL TABLE temp.vtb_enyqkyxs USING vtable_module_343()

Dan.






Thanks.


On 11/27/2014 03:20 PM, Paul wrote:

Here is how it looks with debug symbols are on:

#0 0x28c4113e in memcpy () from /lib/libc.so.7
#1 0x08854c20 in sqlite3StrAccumAppend (p=0xfffe8548, z=0x2c3fffda "vtb_enyqkyxs 
USING vtable_module_343", N=41) at sqlite3.c:21563
#2 0x087edf30 in sqlite3VXPrintf (pAccum=0xfffe8548, bFlags=1, fmt=0x892e543 "T", 
ap=0xfffe8610 "") at sqlite3.c:21439
#3 0x088077d5 in sqlite3VMPrintf (db=0x2c006788, zFormat=0x892e52d "CREATE VIRTUAL TABLE 
%T", ap=0xfffe860c "xx") at sqlite3.c:21638
#4 0x087f815e in sqlite3MPrintf (db=0x2c006788, zFormat=0x892e52d "CREATE VIRTUAL 
TABLE %T") at sqlite3.c:21654
#5 0x088759af in sqlite3VtabFinishParse (pParse=0x2c007688, pEnd=0x0) at 
sqlite3.c:112383
#6 0x0885bb21 in yy_reduce (yypParser=0x2c1d1408, yyruleno=309) at 
sqlite3.c:123403
#7 0x08856180 in sqlite3Parser (yyp=0x2c1d1408, yymajor=1, yyminor=..., 
pParse=0x2c007688) at sqlite3.c:123629
#8 0x087fc289 in sqlite3RunParser (pParse=0x2c007688, zSql=0x2c3fffc0 "CREATE 
VIRTUAL TABLE temp.vtb_enyqkyxs USING vtable_module_343", pzErrMsg=0xfffe8bc4) at 
sqlite3.c:124466
#9 0x088bbc82 in sqlite3Prepare (db=0x2c006788, zSql=0x2c3fffc0 "CREATE VIRTUAL 
TABLE temp.vtb_enyqkyxs USING vtable_module_343", nBytes=-1, saveSqlFlag=1, 
pReprepare=0x0, ppStmt=0xfffe8d0c, pzTail=0xfffe8d10) at sqlite3.c:103750
#10 0x087fb0ce in sqlite3LockAndPrepare (db=0x2c006788, zSql=0x2c3fffc0 "CREATE 
VIRTUAL TABLE temp.vtb_enyqkyxs USING vtable_module_343", nBytes=-1, saveSqlFlag=1, 
pOld=0x0, ppStmt=0xfffe8d0c, pzTail=0xfffe8d10) at sqlite3.c:103842
#11 0x087fa504 in sqlite3_prepare_v2 (db=0x2c006788, zSql=0x2c3fffc0 "CREATE VIRTUAL 
TABLE temp.vtb_enyqkyxs USING vtable_module_343", nBytes=-1, ppStmt=0xfffe8d0c, 
pzTail=0xfffe8d10) at sqlite3.c:103918
#12 0x087fa01d in sqlite3_exec (db=0x2c006788, zSql=0x2c3fffc0 "CREATE VIRTUAL TABLE 
temp.vtb_enyqkyxs USING vtable_module_343", xCallback=0x0, pArg=0x0, pzErrMsg=0x0) 
at sqlite3.c:99345
#13 0x086588e7 in sqlite::StorageBase::exec_query (this=0x2c3ff640, query=0x2c3fffc0 
"CREATE VIRTUAL TABLE temp.vtb_enyqkyxs USING vtable_module_343", quiet=false) 
at SqliteStorageBase.cpp:286


Interesting part is in frame #5

#5 0x088759af in sqlite3VtabFinishParse (pParse=0x2c007688, pEnd=0x0) at 
sqlite3.c:112383
112383 zStmt = sqlite3MPrintf(db, "CREATE VIRTUAL TABLE %T", 
&pParse->sNameToken);
(gdb) p pParse->sNameToken
$12 = {
z = 0x2c3fffda "vtb_enyqkyxs USING vtable_module_343",
n = 41
}
(gdb) p pParse->sNameToken.z + 40
$13 = 0x2c42 

As you can see, the token size is invalid: 41. This causes memcpy() down at #0 to 
access an invalid, unmapped address.
Hopefully it is only a read overflow, so the only consequence is just an 
occasional segmentation fault when the allocated
piece of string is at a specific place: near the end of a page in front of an 
unmapped page.

Should I file a bug report?

It's certainly very suspicious. Which SQLite version are you using?

Dan.






Is there a way to work around this?

Like appending many spaces at the end of the query?
Or maybe the problem is the absence of ';' at the end of the query?
Meanwhile I'll try both of these cases.




We observe very similar problem.

#1 0x087ec9f7 in sqlite3VXPrintf ()
#2 0x087f816d in sqlite3MPrintf ()
#3 0x088781e5 in sqlite3VtabFinishParse ()
#4 0x0885190f in yy_reduce ()
#5 0x0884d4d8 in sqlite3Parser ()
#6 0x087fc0ce in sqlite3RunParser ()
#7 0x088aa396 in sqlite3Prepare ()
#8 0x087fae18 in sqlite3LockAndPrepare ()
#9 0x087f9a88 in sqlite3_exec ()
#10 0x086588a7 in sqlite::StorageBase::exec_query (this=0x2c2a47c0, query=0x2c7fffc0 
"CREATE VIRTUAL TABLE temp.vtb_wayxzmbo USING vtable_module_344", quiet=false) 
at SqliteStorageBase.cpp:286

It always crashes when "CREATE VIRTUAL TABLE ..." is being executed, always 
with the same backtrace.
I spent many days reviewing and testing my code to eliminate possible cause but 
so far I see nothing wrong with it.
Probability of this crash is so low that the problem can be reproduced only on 
highly loaded production servers.

Re: [sqlite] Corrupting pointers to the lookaside smallacator

2014-11-27 Thread Dan Kennedy

On 11/27/2014 03:20 PM, Paul wrote:

Here is how it looks with debug symbols are on:

#0  0x28c4113e in memcpy () from /lib/libc.so.7
#1  0x08854c20 in sqlite3StrAccumAppend (p=0xfffe8548, z=0x2c3fffda "vtb_enyqkyxs 
USING vtable_module_343", N=41) at sqlite3.c:21563
#2  0x087edf30 in sqlite3VXPrintf (pAccum=0xfffe8548, bFlags=1, fmt=0x892e543 "T", 
ap=0xfffe8610 "") at sqlite3.c:21439
#3  0x088077d5 in sqlite3VMPrintf (db=0x2c006788, zFormat=0x892e52d "CREATE VIRTUAL TABLE 
%T", ap=0xfffe860c "xx") at sqlite3.c:21638
#4  0x087f815e in sqlite3MPrintf (db=0x2c006788, zFormat=0x892e52d "CREATE VIRTUAL 
TABLE %T") at sqlite3.c:21654
#5  0x088759af in sqlite3VtabFinishParse (pParse=0x2c007688, pEnd=0x0) at 
sqlite3.c:112383
#6  0x0885bb21 in yy_reduce (yypParser=0x2c1d1408, yyruleno=309) at 
sqlite3.c:123403
#7  0x08856180 in sqlite3Parser (yyp=0x2c1d1408, yymajor=1, yyminor=..., 
pParse=0x2c007688) at sqlite3.c:123629
#8  0x087fc289 in sqlite3RunParser (pParse=0x2c007688, zSql=0x2c3fffc0 "CREATE 
VIRTUAL TABLE temp.vtb_enyqkyxs USING vtable_module_343", pzErrMsg=0xfffe8bc4) at 
sqlite3.c:124466
#9  0x088bbc82 in sqlite3Prepare (db=0x2c006788, zSql=0x2c3fffc0 "CREATE VIRTUAL 
TABLE temp.vtb_enyqkyxs USING vtable_module_343", nBytes=-1, saveSqlFlag=1, 
pReprepare=0x0, ppStmt=0xfffe8d0c, pzTail=0xfffe8d10) at sqlite3.c:103750
#10 0x087fb0ce in sqlite3LockAndPrepare (db=0x2c006788, zSql=0x2c3fffc0 "CREATE 
VIRTUAL TABLE temp.vtb_enyqkyxs USING vtable_module_343", nBytes=-1, saveSqlFlag=1, 
pOld=0x0, ppStmt=0xfffe8d0c, pzTail=0xfffe8d10) at sqlite3.c:103842
#11 0x087fa504 in sqlite3_prepare_v2 (db=0x2c006788, zSql=0x2c3fffc0 "CREATE VIRTUAL 
TABLE temp.vtb_enyqkyxs USING vtable_module_343", nBytes=-1, ppStmt=0xfffe8d0c, 
pzTail=0xfffe8d10) at sqlite3.c:103918
#12 0x087fa01d in sqlite3_exec (db=0x2c006788, zSql=0x2c3fffc0 "CREATE VIRTUAL TABLE 
temp.vtb_enyqkyxs USING vtable_module_343", xCallback=0x0, pArg=0x0, pzErrMsg=0x0) 
at sqlite3.c:99345
#13 0x086588e7 in sqlite::StorageBase::exec_query (this=0x2c3ff640, query=0x2c3fffc0 
"CREATE VIRTUAL TABLE temp.vtb_enyqkyxs USING vtable_module_343", quiet=false) 
at SqliteStorageBase.cpp:286
   


Interesting part is in frame #5

#5  0x088759af in sqlite3VtabFinishParse (pParse=0x2c007688, pEnd=0x0) at 
sqlite3.c:112383
112383  zStmt = sqlite3MPrintf(db, "CREATE VIRTUAL TABLE %T", 
&pParse->sNameToken);
(gdb) p pParse->sNameToken
$12 = {
   z = 0x2c3fffda "vtb_enyqkyxs USING vtable_module_343",
   n = 41
}
(gdb) p  pParse->sNameToken.z + 40
$13 = 0x2c42 

As you can see, the token size is invalid: 41. This causes memcpy() down at #0 to 
access an invalid, unmapped address.
Hopefully it is only a read overflow, so the only consequence is just an 
occasional segmentation fault when the allocated
piece of string is at a specific place: near the end of a page in front of an 
unmapped page.

Should I file a bug report?


It's certainly very suspicious. Which SQLite version are you using?

Dan.







Is there a way to work around this?

For example, appending many spaces at the end of the query?
Or maybe the problem is the absence of a ';' at the end of the query?
Meanwhile I'll try both of these cases.


  

We observe a very similar problem.

#1 0x087ec9f7 in sqlite3VXPrintf ()
#2 0x087f816d in sqlite3MPrintf ()
#3 0x088781e5 in sqlite3VtabFinishParse ()
#4 0x0885190f in yy_reduce ()
#5 0x0884d4d8 in sqlite3Parser ()
#6 0x087fc0ce in sqlite3RunParser ()
#7 0x088aa396 in sqlite3Prepare ()
#8 0x087fae18 in sqlite3LockAndPrepare ()
#9 0x087f9a88 in sqlite3_exec ()
#10 0x086588a7 in sqlite::StorageBase::exec_query (this=0x2c2a47c0, query=0x2c7fffc0 
"CREATE VIRTUAL TABLE temp.vtb_wayxzmbo USING vtable_module_344", quiet=false) 
at SqliteStorageBase.cpp:286

It always crashes when "CREATE VIRTUAL TABLE ..." is being executed, always
with the same backtrace.
I spent many days reviewing and testing my code to eliminate possible causes,
but so far I see nothing wrong with it.
The probability of this crash is very low, so the problem can be reproduced
only on highly loaded production servers
(where such virtual tables are created and dropped millions of times during a
day).

I am going to compile sqlite without optimizations and with debug symbols, and
wait for a crash, to try to track the root of the problem from within sqlite.

Though I doubt very much that this is a sqlite problem at all, rather than an
incorrect vtable implementation on my side.


SQLite version 3.8.6 2014-08-15 11:46:33


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Corrupting pointers to the lookaside smallacator

2014-11-25 Thread Dan Kennedy

On 11/26/2014 06:47 AM, Ward Willats wrote:

We are compiling 3.8.7.1 using clang arm64 for iOS. The following are set:

#define SQLITE_ENABLE_COLUMN_METADATA 1
#define HAVE_INTTYPES_H 1
#define HAVE_STDINT_H 1
#define HAVE_USLEEP 1

#define SQLITE_DEBUG 1
#define SQLITE_MEMDEBUG 1

WAL mode.

In MallocRaw(), very rarely, we are seeing the lookaside buffer pBuf or
pBuf->next getting corrupted with ASCII from our CREATE TABLE statements
('INTEGER' or part of one of our field names). The thing is, we are not
running the schema creation code on these runs (the DB already exists), so
these strings must be coming from sqlite_master, AFAICT.

None of the SQLITE_DEBUG or SQLITE_MEMDEBUG asserts fire. When it happens, it 
happens early in the application run.


Maybe try with both of those and SQLITE_OMIT_LOOKASIDE as well.

When SQLITE_DEBUG and MEMDEBUG are defined, fenceposts are used to 
detect buffer overruns for all regular malloc/free allocations. If 
lookaside is omitted these fencepost checks will be done for small 
allocations as well, which may reveal the source of the memory corruption.


An assert() will fail if any of the fencepost checks indicate a buffer 
overwrite.
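
For reference, a minimal sketch of the build options being discussed here;
SQLITE_DEBUG, SQLITE_MEMDEBUG and SQLITE_OMIT_LOOKASIDE are all real
compile-time options:

/* Sketch: with these three options set, small allocations bypass the
** lookaside pool and go through the checked allocator, so the fencepost
** checks cover them as well. */
#define SQLITE_DEBUG 1           /* enable internal assert()s           */
#define SQLITE_MEMDEBUG 1        /* add fenceposts around allocations   */
#define SQLITE_OMIT_LOOKASIDE 1  /* route small allocs through malloc() */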


Dan.







Hard to set a hardware write breakpoint on such a mutable thing.

I fully believe the problem is of my own creation, but interested in any 
thoughts or if anyone has seen anything like this.

Thanks, as always,

-- Ward

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Infinite loop in sqlite3VdbeSorterWrite when sorting big data

2014-11-25 Thread Dan Kennedy

On 11/26/2014 12:41 AM, Marcin Sobieszczanski wrote:

Do you have a large cache-size configured?

Yes:

PRAGMA cache_size = 10
PRAGMA page_size = 16384


Thanks for reporting this. I think it should be fixed here:

  http://www.sqlite.org/src/info/623827192532f08b

Dan.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Infinite loop in sqlite3VdbeSorterWrite when sorting big data

2014-11-25 Thread Dan Kennedy

On 11/25/2014 02:53 AM, Marcin Sobieszczanski wrote:

Hi

I work with sqlite files that have a few gigabytes of simple data.
Almost all of the data sits in one table that has 9 non-null integer
columns (including row_id, and one int64 column) plus 3 additional
string or int columns (additional columns are filled with nulls
mostly). The table has tens of millions of rows. (I need to
cross-correlate this data.)
Recently, I upgraded to 3.8.7.2 and I've notice that CREATE INDEX on a
table's 3 columns hangs in the new version of sqlite (last OK
versions: 3.8.4 and 3.8.6).

I located the code in sqlite3VdbeSorterWrite: there is only one loop there:
   while( nNew < nMin ) nNew = nNew*2;
In my case nNew == 0 (inspected with gdb, but without symbols). I
believe the multiplication in the line above it overflowed:
   int nNew = pSorter->nMemory * 2;

The last strace lines show a series of re-allocations:
mremap(0x7f8241fff000, 33558528, 67112960, MREMAP_MAYMOVE) = 0x7f81b7fff000
...
mremap(0x7f819fffd000, 268439552, 536875008, MREMAP_MAYMOVE) = 0x7f817fffc000
mremap(0x7f817fffc000, 536875008, 1073745920, MREMAP_MAYMOVE) = 0x7f813fffb000

The next mremap would have tried to allocate 2147487744 bytes
(2147487744 = 2^31 + 4Ki, the extra 4Ki from glibc), I guess, but an int is
not capable of holding more than 2Gi.
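
For illustration, a minimal sketch - not the actual SQLite fix - of doubling
a size without the 32-bit wrap that produces the zero fixed point described
above; the function and parameter names are hypothetical:

/* Sketch: grow a buffer size by doubling, using 64-bit arithmetic so the
** multiplication cannot wrap to a value <= 0 and leave the
** "while( nNew < nMin )" loop spinning forever. Assumes nMax <= 2^31-1. */
typedef long long i64;
static int growTo(int nCurrent, int nMin, int nMax){
  i64 nNew = (i64)nCurrent * 2;
  if( nNew < 64 ) nNew = 64;              /* avoid the zero fixed point  */
  while( nNew < nMin ) nNew = nNew * 2;   /* 64-bit: no overflow here    */
  if( nNew > nMax ) nNew = nMax;          /* clamp to the configured cap */
  return (int)nNew;
}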

Is this a known problem?


It is now.

Do you have a large cache-size configured?

Thanks,
Dan.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Null rowid when using CTEs

2014-11-14 Thread Dan Kennedy

On 11/14/2014 08:33 AM, RP McMurphy wrote:

When using CTEs the rowid is null. The following returns "--NULL--"

.null --NULL--
with c(x) as (select 1)
select rowid from c;


I guess that's expected. The same behaviour as for reading the rowid of 
a view or sub-select. It should probably be documented though.


Dan.





RP
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sliteonline or sqlite in js

2014-11-14 Thread Dan Kennedy

On 11/14/2014 03:37 PM, Noel Frankinet wrote:

Impressive !


It is that. Perhaps not as functional as a native app (so far), but 
looking really good! And you can't beat the deployment.


Dan.





On 12 November 2014 12:08, Kirill <kir...@aidagw.com> wrote:


Good day,

Full line manager to work with sqlite directly
from the browser on any platform:
http://sqliteonline.com/

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users






___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Using sqlite3_backup* while the destination connection has "busy" read-only statements.

2014-11-13 Thread Dan Kennedy

On 11/13/2014 05:26 AM, Shaun Seckman (Firaxis) wrote:

Greetings all,
I'm running into a situation where our application is crashing during a call
to sqlite3_backup_finish() inside btreeParseCellPtr() because some of the
structure is corrupted.

Both the source and destination database are using the same page sizes and I'm 
running everything from within a single-thread.

I'd like to get clarification on one possible issue in order to determine 
whether this is the culprit or at least rule it out.  This destination 
connection (which is an in-memory database) has several prepared statements, 
some of which are marked as busy (from sqlite3_stmt_busy) due to having step 
called but not reset.  All of these statements are read-only (simple select 
statements).

My question is, is it safe to have these busy statements on the destination 
connection while the backup happens (again, this is all single threaded so 
nothing happens DURING the backup process)?


It's not safe.

There are a few reasons for this internally - the most obvious is that the
backup operation might change the database schema.


The docs seem to omit this, but basically leaving the active statements 
around while the backup happens is the same as using the destination 
database while the backup is ongoing. See the "Concurrent Usage of 
Database Handles" section at the end of this page:


  http://www.sqlite.org/c3ref/backup_finish.html
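
In the meantime, a minimal sketch of a guard an application could run before
starting the backup; sqlite3_next_stmt() and sqlite3_stmt_busy() are existing
APIs, and whether resetting statements like this is acceptable depends on the
application:

/* Sketch: reset every "busy" statement on the destination connection so no
** read cursors are open while the backup overwrites its contents. */
static void reset_busy_statements(sqlite3 *db){
  sqlite3_stmt *pStmt = 0;
  while( (pStmt = sqlite3_next_stmt(db, pStmt))!=0 ){
    if( sqlite3_stmt_busy(pStmt) ) sqlite3_reset(pStmt);
  }
}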



If not,  then should sqlite3_backup_init check for this (at least in debug) and 
return NULL if any statements are busy?


I think we likely should. In both debug and non-debug builds.

Thanks for reporting this.

Dan.




___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [RFE bug] Improve error reporting for foreign keys

2014-10-27 Thread Dan Kennedy

On 10/27/2014 01:49 PM, Tristan Van Berkom wrote:

This is just a request-for-enhancement bug report. I've gone to the
trouble of reproducing this problem in a simple test case, and while
I probably won't be able to immediately benefit from an upstream fix
for this, I hope that this bug report will be perceived as helpful
and entered into your upstream bug tracking system (which seems
not to be public at this time).

This week I ran into a bug in my ~700 line complex schema which was
very hard to find.

After trying many things, including stepping through the
locateFKeyIndex() function issuing the not-so-informative
message "foreign key mismatch", I finally found that the
error was coming from another table with an incorrectly defined
foreign key.


Referring to the function locateFKeyIndex() indicates you are looking at 
sources at least 22 months old. That particular error message has been 
improved since then. It's not quite as detailed as your proposal, but it 
is much better than it was.


Dan.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Installing 3.8.7 gives 3.8.6

2014-10-25 Thread Dan Kennedy

On 10/25/2014 09:36 PM, Arnaud Meuret wrote:

Hi everyone,

On a fresh CentOS 5, compiling SQLite using the [current tarball][1] 
for 3.8.7, the version being installed ends up in `/usr/local/lib` as 
**3.8.6**:


bash-3.2# make install
make[1]: Entering directory `/mongrel2/sqlite-autoconf-3080700'
 /bin/mkdir -p '/usr/local/lib'
 /bin/sh ./libtool   --mode=install /usr/bin/install -c 
libsqlite3.la '/usr/local/lib'
libtool: install: /usr/bin/install -c .libs/libsqlite3.so.0.8.6 
/usr/local/lib/libsqlite3.so.0.8.6
libtool: install: (cd /usr/local/lib && { ln -s -f 
libsqlite3.so.0.8.6 libsqlite3.so.0 || { rm -f libsqlite3.so.0 && ln 
-s libsqlite3.so.0.8.6 libsqlite3.so.0; }; })
libtool: install: (cd /usr/local/lib && { ln -s -f 
libsqlite3.so.0.8.6 libsqlite3.so || { rm -f libsqlite3.so && ln -s 
libsqlite3.so.0.8.6 libsqlite3.so; }; })
libtool: install: /usr/bin/install -c .libs/libsqlite3.lai 
/usr/local/lib/libsqlite3.la
libtool: install: /usr/bin/install -c .libs/libsqlite3.a 
/usr/local/lib/libsqlite3.a


 [1]: https://www.sqlite.org/2014/sqlite-autoconf-3080700.tar.gz


Is it normal? I expected to find libsqlite3.so.0.8.7.


It's normal. I think that number is only supposed to change if the new 
release is not ABI compatible with the previous. It's a coincidence that 
"8.6" matches the tail of the previous release number. Earlier releases 
also used "libsqlite3.so.0.8.6".


Dan.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Bug 993556 - SQLite crash in walIndexTryHdr due to Windows EXCEPTION_IN_PAGE_ERROR exception

2014-10-20 Thread Dan Kennedy

On 10/18/2014 05:45 AM, Deon Brewis wrote:

I'm trying to follow Richard's advice to work around this issue, which is:

"Is that database ever used by more than a single process.  (Use by multiple
threads using separate connections does not count - I mean really used by
multiple processes with their own address space.)  If not (and I think the
answer is "no") then FF could set "PRAGMA locking_mode=EXCLUSIVE"
immediately after opening the database and before doing anything else.  If
that is done, then SQLite will use heap memory for the WAL-index, instead of
mmapped shared memory, and this problem will never come up."


However, I'm unable to do so. I'm using multiple threads using separate
connections, like mentioned, but when I try to use PRAGMA
locking_mode=EXCLUSIVE, the next thread that tries to open a connection will
block indefinitely on the open.

So how can I go about using PRAGMA locking_mode=EXCLUSIVE while still using
multiple threads with connections?


I think the quoted paragraph above assumes that the application is using 
shared-cache mode, which is probably not what you want to do.


If you're on unix, using the VFS "unix-excl" causes a similar effect. 
Second section here:


  http://www.sqlite.org/vfs.html
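
For example, a minimal sketch of opening a connection with that VFS; the
filename is hypothetical:

/* Sketch: with the "unix-excl" VFS, SQLite holds an exclusive lock on the
** database and keeps the wal-index in heap memory instead of a shared mmap. */
sqlite3 *db = 0;
int rc = sqlite3_open_v2("app.db", &db,
                         SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE,
                         "unix-excl");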

Dan.









--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Bug-993556-SQLite-crash-in-walIndexTryHdr-due-to-Windows-EXCEPTION-IN-PAGE-ERROR-exception-tp78695.html
Sent from the SQLite mailing list archive at Nabble.com.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] INSERT OR REPLACE in a virtual table implementation....

2014-10-17 Thread Dan Kennedy

On 10/18/2014 01:07 AM, dave wrote:

I have a virtual table implementation, and I would like to use the INSERT OR
REPLACE syntax to simplify actions for the user.  In my xUpdate method, for
the case where insertion is occurring,

else if ( argc > 1 && SQLITE_NULL == sqlite3_value_type ( argv[0] ) ) {

I do check a uniqueness constraint, and return an error
SQLITE_CONSTRAINT_UNIQUE, which I presume is the right thing to do.
  
So, my question is, is this the right thing to do to get INSERT OR REPLACE
on a virtual table, or something else, or maybe that syntax is not supported
on virtual tables?


I don't think so. See the final paragraph here:

  http://sqlite.org/c3ref/c_vtab_constraint_support.html

Looks like you have to implement the "OR REPLACE" support in the xUpdate 
method.
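
A minimal sketch of what that might look like; my_vtab, findRow(),
deleteRow() and insertRow() are hypothetical helpers, while
sqlite3_vtab_on_conflict() is the real API for reading the statement's
conflict mode from within xUpdate (a module can also call
sqlite3_vtab_config(db, SQLITE_VTAB_CONSTRAINT_SUPPORT, 1) at
xCreate/xConnect time):

/* Sketch: honoring INSERT OR REPLACE inside xUpdate. Error handling and the
** DELETE/UPDATE cases are omitted. pDb is the sqlite3* stashed at xConnect. */
typedef struct my_vtab { sqlite3_vtab base; sqlite3 *pDb; } my_vtab;
extern int findRow(my_vtab*, sqlite3_value**, sqlite3_int64*);
extern void deleteRow(my_vtab*, sqlite3_int64);
extern int insertRow(my_vtab*, sqlite3_value**, sqlite3_int64*);

static int my_xUpdate(sqlite3_vtab *pVtab, int argc, sqlite3_value **argv,
                      sqlite3_int64 *pRowid){
  my_vtab *p = (my_vtab*)pVtab;
  if( argc>1 && SQLITE_NULL==sqlite3_value_type(argv[0]) ){   /* INSERT */
    sqlite3_int64 iOld;
    if( findRow(p, argv, &iOld) ){                /* uniqueness violated */
      if( sqlite3_vtab_on_conflict(p->pDb)!=SQLITE_REPLACE ){
        return SQLITE_CONSTRAINT_UNIQUE;          /* plain INSERT: fail  */
      }
      deleteRow(p, iOld);                         /* OR REPLACE: evict   */
    }
    return insertRow(p, argv, pRowid);
  }
  return SQLITE_OK;
}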


Dan.



  
Thanks,
  
-dave

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Unable to prepare a statement

2014-10-15 Thread Dan Kennedy

On 10/15/2014 07:19 AM, Sam Carleton wrote:

When I use the SQLite Manager, I am able to run this query just fine:

 UPDATE EventNode
SET IsActive = 1
  WHERE EventNodeId IN (SELECT w.EventNodeId
  FROM EventNode as w, EventNode as m on
m.objectId = 'a09f0f8a-a37c-44c2-846f-16a59b1c34c1'
  WHERE w.lft BETWEEN m.lft AND m.rgt )

But when I try to prepare the same statement to be used with my C++ code:

 const char * updateString =
 "UPDATE EventNode "
"SET IsActive = @isActive "
  "WHERE EventNodeId IN (SELECT w.EventNodeId "
  "FROM EventNode AS w, EventNode AS m ON
m.objectId = @objectId "
 "WHERE w.lft BETWEEN m.lft AND m.rgt)";

I get an error where sqlite3_errmsg() returns: no such table: EventNode

Now the code that is opening the DB is in a base class which is used other
places to access the EventNode table, so I am a bit mystified as to what
exactly is going on.  Any thoughts?


Perhaps it's opening a different database file.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Lets try this again. Sqlite Python libraries throwing exception on unique constraint.

2014-10-13 Thread Dan Kennedy

On 10/13/2014 11:44 PM, Jeffrey Parker wrote:

Hello,

I am working with sqlite3 in python 2.7.8 and I am running into a strange
error where I get the below exception when running an insert into statement
on an empty table. I know this is probably more to do with the python
libraries but I thought that I would ask the question here in case someone
has any recommendations.

sqlite3.IntegrityError: column VersionName is not unique

The insert still works and when I do the next insert into that table there
is no exception.


It sounds like you are attempting to insert a duplicate row.

You could temporarily add a SELECT statement before the INSERT to check 
if a record with the specified "versionName" is already present in the 
table.
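
For instance, a sketch of that check against the C API (the equivalent
SELECT works the same from Python's sqlite3 module); zVersionName is
hypothetical:

/* Sketch: look for an existing row with the same VersionName just before
** the INSERT, to confirm whether the table really is empty at that point. */
sqlite3_stmt *pStmt = 0;
sqlite3_prepare_v2(db, "SELECT 1 FROM UpdateFrom WHERE VersionName=?",
                   -1, &pStmt, 0);
sqlite3_bind_text(pStmt, 1, zVersionName, -1, SQLITE_TRANSIENT);
if( sqlite3_step(pStmt)==SQLITE_ROW ){
  /* a duplicate exists: this is what makes the INSERT raise IntegrityError */
}
sqlite3_finalize(pStmt);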


Dan.





There are multiple insert statements and multiple tables. The insert
statements look like below.

params = versionName, versionNumber, updateFrom, metafileURL, patchURL, sha1
conn.execute("INSERT INTO UpdateFrom VALUES(?,?,?,?,?,?)", params)

and the table looks like below.

CREATE TABLE `UpdateFrom` (

`VersionName` TEXT NOT NULL UNIQUE,

`VersionNumber` INTEGER NOT NULL,

`UpdateFrom` INTEGER NOT NULL,

`MetafileURL` TEXT NOT NULL,

`PatchURL` TEXT NOT NULL,

`SHA1` TEXT NOT NULL

);
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Full text search: Regression on 3.8.7 (or using incorrectly?)

2014-10-09 Thread Dan Kennedy

On 10/09/2014 07:23 PM, Sohail Somani wrote:

On 2014-10-09, 7:32 AM, Dan Kennedy wrote:

Got it, thanks for the explanation. Just to make sure that I
understand you correctly, is the clause MATCH '*l0l* *h4x*' getting
translated to MATCH 'l0l* h4x*'?


Yes, that's right.

Dan.


In that case, shouldn't the test in the original post have returned 
the same results for both cases?


Fair point. Fixed here: http://www.sqlite.org/src/info/49dfee7cd1c9

Dan.






___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Full text search: Regression on 3.8.7 (or using incorrectly?)

2014-10-09 Thread Dan Kennedy

On 10/09/2014 01:13 AM, Sohail Somani wrote:

On 2014-10-07, 4:04 PM, Dan Kennedy wrote:

On 10/08/2014 01:52 AM, Sohail Somani wrote:

Figured it out: match terms should be "l0l* h4x*" NOT "*l0l* *h4x*",
though it did work as expected with the older version. I'd suggest
keeping the old behaviour unless there is a performance-based reason
not to.

On 2014-10-07, 2:49 PM, Sohail Somani wrote:

SELECT COUNT(*) FROM t_fts WHERE t_fts MATCH '*l0l* *h4x*';

COMMIT;

BEGIN TRANSACTION;

INSERT INTO t(key,value0,value1) VALUES('$key','l0l','h4x');

COMMIT;

SELECT '--- 2 ---';
SELECT COUNT(*) FROM t_fts WHERE t_fts MATCH '*l0l* *h4x*';


Unfortunately, this was a break with backwards compatibility in response
to this issue:

https://www.mail-archive.com/sqlite-users@sqlite.org/msg83345.html


Before:

   http://www.sqlite.org/src/info/e21bf7a2ade6373e

(version 3.8.6), it was up to the specific tokenizer being used whether
or not the special characters *, ", ( and ) were available to the query
parser. After that commit, they are stripped out first. So with the new
version, your query is now equivalent to "MATCH 'l0l* h4x*'".

This should only affect FTS tables that use custom tokenizers (not the
default simple or porter tokenizers).


Got it, thanks for the explanation. Just to make sure that I 
understand you correctly, is the clause MATCH '*l0l* *h4x*' getting 
translated to MATCH 'l0l* h4x*'?


Yes, that's right.

Dan.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] 50% faster than 3.7.17

2014-10-09 Thread Dan Kennedy

On 10/09/2014 04:38 PM, David Woodhouse wrote:

On Thu, 2014-09-25 at 11:13 +0100, David Woodhouse wrote:

I suggested a couple of specific optimisations which the query planner
might be able to make, which should hopefully have benefits wider than
just my own use case. Are those not viable?

I'm preparing to commit a workaround to Evolution to avoid this issue,
and then move on with my life and forget about it.

Before I do, is it worth me rephrasing this as a 'suboptimal query plan'
bug report so it gets tracked and might get attention later? Or should I
just forget the idea of getting it fixed in sqlite?



Well, you could always create a patch...

I think I understand the second optimization. You're saying that given this:

  SELECT DISTINCT main.uid
 FROM main LEFT JOIN email_list ON main.uid = email_list.uid
 WHERE email_list.email LIKE 'al%'

SQLite should deduce that since the LIKE implies "email_list.email IS NOT NULL" 
the LEFT JOIN is equivalent to a regular JOIN. Which would allow SQLite to reorder the 
tables and perhaps come up with a more efficient query plan. Correct?

Seems like a reasonable idea.

The first optimization might be trickier. With queries that feature a single 
table:

  SELECT cols FROM tbl WHERE a=? OR b=? OR c=?

the planner may elect to run something very close to this:

  SELECT cols FROM tbl WHERE a=?
UNION ALL
  SELECT cols FROM tbl WHERE b=?
UNION ALL
  SELECT cols FROM tbl WHERE c=?

However, after returning each row, we remember its PRIMARY KEY (either the 
rowid or real PK for WITHOUT ROWID tables). Similar transformations for 
individual loops within join queries are also possible.

However, with a JOIN query, we don't currently attempt this kind of transform. 
I think because we would have to create some kind of composite key to use in 
place of the PRIMARY KEY to avoid returning duplicates. I guess queries that 
have a DISTINCT clause don't have this problem. So it could in theory transform 
your query to:

  SELECT DISTINCT
   main.uid FROM main LEFT JOIN email_list ON main.uid = email_list.uid
   WHERE email_list.email LIKE 'al%'
UNION ALL
  SELECT main.uid FROM main LEFT JOIN email_list ON main.uid = email_list.uid
   WHERE main.first_name like 'al%'
UNION ALL
  SELECT main.uid FROM main LEFT JOIN email_list ON main.uid = email_list.uid
   WHERE main.last_name like 'al%';

Then the hypothetical optimization above could figure out that the first LEFT 
JOIN could just as easily be a JOIN.

And that the other two LEFT JOINs are not required at all due to the DISTINCT 
and the fact that the WHERE clauses on the sub-selects do not reference the 
joined table at all.

It seems like there are a few moving parts here. And none of these are trivial 
changes. So, good ideas that might show up in an SQLite release at some point, 
but it's not realistic to expect these optimizations to be implemented quickly. 
Unless, of course, you can propose a patch and they turn out to be simpler than 
they look.

Regards,
Dan.






___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Full text search: Regression on 3.8.7 (or using incorrectly?)

2014-10-07 Thread Dan Kennedy

On 10/08/2014 01:52 AM, Sohail Somani wrote:
Figured it out: match terms should be "l0l* h4x*" NOT "*l0l* *h4x*", 
though it did work as expected with the older version. I'd suggest 
keeping the old behaviour unless there is a performance-based reason 
not to.


On 2014-10-07, 2:49 PM, Sohail Somani wrote:

SELECT COUNT(*) FROM t_fts WHERE t_fts MATCH '*l0l* *h4x*';

COMMIT;

BEGIN TRANSACTION;

INSERT INTO t(key,value0,value1) VALUES('$key','l0l','h4x');

COMMIT;

SELECT '--- 2 ---';
SELECT COUNT(*) FROM t_fts WHERE t_fts MATCH '*l0l* *h4x*';



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Unfortunately, this was a break with backwards compatibility in response 
to this issue:


https://www.mail-archive.com/sqlite-users@sqlite.org/msg83345.html

Before:

  http://www.sqlite.org/src/info/e21bf7a2ade6373e

(version 3.8.6), it was up to the specific tokenizer being used whether 
or not the special characters *, ", ( and ) were available to the query 
parser. After that commit, they are stripped out first. So with the new 
version, your query is now equivalent to "MATCH '101* h4x*'".


This should only affect FTS tables that use custom tokenizers (not the 
default simple or porter tokenizers).


Dan.







___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Inefficient query plan with ORDER BY and UNION ALL subquery

2014-09-20 Thread Dan Kennedy

On 09/20/2014 10:59 AM, Yuanzhong Xu wrote:

I think this is related to a check for restriction (18) in subquery flattening.

(18) If the sub-query is a compound select, then all terms of the
ORDER by clause of the parent must be simple references to
columns of the sub-query.


Quite correct. Internally, SQLite 'flattens' the first query to:

   SELECT id, data FROM t1 WHERE id=10
 UNION ALL
   SELECT id, data FROM t2 WHERE id=10
   ORDER BY data

Which can be executed efficiently. But it does not currently do the same 
trick for the second, as:


   SELECT id FROM t1 WHERE id=10
 UNION ALL
   SELECT id FROM t2 WHERE id=10
   ORDER BY data

is not a valid statement in SQLite.

Dan.





Thanks,
Yuanzhong

On Fri, Sep 19, 2014 at 8:16 PM, Yuanzhong Xu <y...@utexas.edu> wrote:

Hi,

In SQLite 3.8.6, suppose I have two tables in the same format:

CREATE TABLE t1 (id INTEGER PRIMARY KEY AUTOINCREMENT, data TEXT);
CREATE TABLE t2 (id INTEGER PRIMARY KEY AUTOINCREMENT, data TEXT);


The query plan can be quite efficient for:

EXPLAIN QUERY PLAN SELECT id,data FROM (SELECT * FROM t1 UNION ALL
SELECT * FROM t2) WHERE id=10 ORDER BY data;

1|0|0|SEARCH TABLE t1 USING INTEGER PRIMARY KEY (rowid=?)
2|0|0|SEARCH TABLE t2 USING INTEGER PRIMARY KEY (rowid=?)
0|0|0|COMPOUND SUBQUERIES 1 AND 2 (UNION ALL)

However, if you remove the "data" column from the outer SELECT, it becomes
very inefficient:

EXPLAIN QUERY PLAN SELECT id FROM (SELECT * FROM t1 UNION ALL SELECT *
FROM t2) WHERE id=10 ORDER BY data;

2|0|0|SCAN TABLE t1
3|0|0|SCAN TABLE t2
1|0|0|COMPOUND SUBQUERIES 2 AND 3 (UNION ALL)
0|0|0|SCAN SUBQUERY 1
0|0|0|USE TEMP B-TREE FOR ORDER BY


Thanks,
--
Yuanzhong Xu
Ph.D. Student
Department of Computer Science
the University of Texas at Austin

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Atomic database structure initialization

2014-09-19 Thread Dan Kennedy

On 09/19/2014 02:53 PM, Paul wrote:



Paul wrote:

My goal is to make structure initialization of an *abstract* database atomic.
[...] if database file is missing it must be created and initialized.



Just do the check for the database structure and the initialization inside
a transaction.

Yeah, but you didn't read my statements carefully. I was talking about an 
abstract database.
It's the wrapper's responsibility to call *on_create*, so it's the wrapper's
responsibility to check.
How do you check if the structure is initialized in an abstract database?



SQLITE_OPEN_EXCLUSIVE

This flag does not mean what you think it means, and is used only
internally; you are not allowed to use it.

Anyway, transactions are automatically exclusive.


I think I know what it means:

**The SQLITE_OPEN_EXCLUSIVE flag is always used in conjunction with the 
SQLITE_OPEN_CREATE flag, which are both directly analogous to the O_EXCL and 
O_CREAT flags of the POSIX open() API. The SQLITE_OPEN_EXCLUSIVE flag, when 
paired with the SQLITE_OPEN_CREATE, is used to indicate that file should always 
be created, and that it is an error if it already exists. It is not used to 
indicate the file should be opened for exclusive access.**

This is from 

Please explain, why am I not allowed to use it in sqlite3_open_v2()?


Because the docs for open_v2() state, somewhat ominously, that the 
results will be undefined:


http://www.sqlite.org/c3ref/mark/open.html?The%20sqlite3_open_v2+the+3rd*undefined#mark
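
For what it's worth, a minimal sketch of the transaction-based approach
suggested earlier in the thread, which avoids SQLITE_OPEN_EXCLUSIVE entirely;
the table name is hypothetical and error handling is omitted:

/* Sketch: BEGIN IMMEDIATE takes the write lock up front, so the existence
** check and the schema creation are atomic even if several processes race
** to initialize the same file. */
sqlite3_exec(db, "BEGIN IMMEDIATE", 0, 0, 0);
sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS meta(k TEXT PRIMARY KEY, v)",
             0, 0, 0);
sqlite3_exec(db, "COMMIT", 0, 0, 0);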


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] write internal blob vs external file

2014-09-15 Thread Dan Kennedy

On 09/15/2014 06:19 PM, Dominique Devienne wrote:

On Mon, Sep 15, 2014 at 12:29 PM, Dan Kennedy <danielk1...@gmail.com> wrote:


On 09/15/2014 03:18 PM, Dominique Devienne wrote:


On Fri, Sep 12, 2014 at 11:48 PM, Eric Rubin-Smith <eas@gmail.com>
wrote:

  Looking at the sqlite web site and mailing lists shows that the SQLite

team
has taken a stab at answering the question, "is it faster to read a blob
out of sqlite or out of a file?".  See the links below.

Does the team have analogous guidance regarding write speeds?

  Depends if you care only about the initial write, or also about

incremental
writes.

SQLite is transactional, but does not have a level of indirection between
the row data, and the blob data. i.e. the blob is "in row". So whenever
you
want to update a single byte of a blob, you must rewrite the whole row,
i.e. the whole blob(s) in that row (and I think this applies to any column
of the row, not only blob columns). This is just prohibitive, and
unmanageable in practice.


I think the only exception to this is if you use the incremental-blob API:

   http://www.sqlite.org/c3ref/blob_open.html


What do you mean Dan? Are you saying that opening an existing blob, writing
1 byte somewhere, and closing it, does not yield the whole row to be
updated? Which therefore implies copying the whole old row value somewhere
for undo? (or writing the new row value to the WAL, old value + bytes
changed). Is this somehow not transactional? or SQLite supports sub-row or
sub-page deltas to implement its transactions? I'm confused by your remark.


SQLite always writes entire pages to disk. But if your blob is large 
enough it will be spread over multiple overflow pages. In this case if 
you update a single byte of the blob using the incremental-blob API, 
only the single modified page is updated on disk - not all the other 
pages that store data from the same row.
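
A minimal sketch of that API; the table name, column and rowid are
hypothetical, and error handling is omitted:

/* Sketch: overwrite one byte of a large blob in place. Only the overflow
** page containing that byte is rewritten, not the whole row. */
sqlite3_blob *pBlob = 0;
if( sqlite3_blob_open(db, "main", "t", "data", 1, 1, &pBlob)==SQLITE_OK ){
  unsigned char c = 0xFF;
  sqlite3_blob_write(pBlob, &c, 1, 1000000);  /* 1 byte at offset 1000000 */
  sqlite3_blob_close(pBlob);
}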


Dan.







I'm not saying SQLite lacks an API to do incremental updates to blobs, but
that its implementation and current file format lack a way to implement them
efficiently as of now, like a file system can. Basically I want to use SQLite
as a mini transactional filesystem (w/o any need for directory operations,
just file ones, like SQLite's own VFL more or less), in libraries that, like
SQLite, support a VFL abstraction. I'd implement those libraries' VFLs on top
of SQLite's blob incremental APIs. It's possible, but nowhere near efficient
enough ATM for the partial-update case which is required by those libraries.
Just imagine how slow SQLite itself would be if you based its own VFL on
SQLite's blob APIs. Nest an "inner" SQLite DB inside a blob inside the
"outer" SQLite, and you'll get what I mean I think, which is that doing any
SQL update on the inner DB implies doing copies of the whole inner DB. --DD
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] write internal blob vs external file

2014-09-15 Thread Dan Kennedy

On 09/15/2014 03:18 PM, Dominique Devienne wrote:

On Fri, Sep 12, 2014 at 11:48 PM, Eric Rubin-Smith <eas@gmail.com>
wrote:


Looking at the sqlite web site and mailing lists shows that the SQLite team
has taken a stab at answering the question, "is it faster to read a blob
out of sqlite or out of a file?".  See the links below.

Does the team have analogous guidance regarding write speeds?


Depends if you care only about the initial write, or also about incremental
writes.

SQLite is transactional, but does not have a level of indirection between
the row data, and the blob data. i.e. the blob is "in row". So whenever you
want to update a single byte of a blob, you must rewrite the whole row,
i.e. the whole blob(s) in that row (and I think this applies to any column
of the row, not only blob columns). This is just prohibitive, and
unmanageable in practice.


I think the only exception to this is if you use the incremental-blob API:

  http://www.sqlite.org/c3ref/blob_open.html


Dan.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] fts5

2014-09-11 Thread Dan Kennedy

On 09/11/2014 02:51 AM, Kyle Shannon wrote:

I was browsing the source on the fossil repository, and I noticed that
there were files for fts5 in the trunk.  I've been playing with fts3/4
and I'm curious about what new features/changes are incorporated into
fts5.  Can anyone clarify that for me?  Thanks.




Fts5 will use less memory and be faster than fts4 (I think - initial 
testing has been positive). It will also be smaller, as we can do 
without a bunch of code that is used to work around problems inherent in 
the file-format.


It uses incremental-merging by default, which limits the amount of 
writing any single INSERT operation is required to perform.


And other stuff that makes the codebase smaller and easier to manage.

The most user-visible change is the addition of an API that allows users 
to write their own auxiliary (i.e. snippet(), rank(), offsets()) functions:


  http://www.sqlite.org/src/artifact/064f9bf705e59d

The included snippet() and rank() functions use this API.

There are also changes to allow things like this:

  SELECT snippet(fts) FROM fts WHERE fts MATCH 'xyz'
  ORDER BY rank(fts) LIMIT 10 OFFSET 20;

to be executed efficiently without jumping through hoops. Specifically, 
so that even if 'xyz' matches a large number of documents, only 10 
snippets need to be assembled (assembling snippets is expensive in both 
IO and CPU).


Fts5 is still in the experimental stage at the moment.

If anybody has any ideas for useful features, or knows of problems with 
FTS4 that could be fixed in FTS5, don't keep them to yourself!


Dan.



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] unixGetPageSize needed but undefined

2014-07-28 Thread Dan Kennedy

On 07/27/2014 09:30 PM, Alan Hourihane wrote:

Hi all,

I logged this bug via Gentoo, but it should be done here...

https://bugs.gentoo.org/show_bug.cgi?id=517110


Is this still present on the trunk? It may have been fixed here:

  http://www.sqlite.org/src/info/612b6d1b1f

Dan.






"
This function is wrapped up inside SQLITE_OMIT_WAL, whereas it's needed 
elsewhere. This function needs to be moved out of this ifdef.

/*
** Return the system page size.
**
** This function should not be called directly by other code in this file.
** Instead, it should be called via macro osGetpagesize().
*/
static int unixGetpagesize(void){
#if defined(_BSD_SOURCE)
   return getpagesize();
#else
   return (int)sysconf(_SC_PAGESIZE);
#endif
}


libtool: link: m68k-atari-mint-gcc -D_REENTRANT=1 -DSQLITE_THREADSAFE=1 
-DSQLITE_OMIT_LOAD_EXTENSION=1 -DSQLITE_ENABLE_FTS3 -DSQLITE_ENABLE_RTREE -O2 
-pipe -o sqlite3 shell.o  ./.libs/libsqlite3.a -lreadline -lcurses -lpthread
./.libs/libsqlite3.a(sqlite3.o):sqlite3.o:(.data+0x578): undefined reference to 
`unixGetpagesize'
collect2: error: ld returned 1 exit status
Makefile:487: recipe for target 'sqlite3' failed
make: *** [sqlite3] Error 1
"

Thanks,

Alan.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [bug] sqlite memory corruption (use by digikam)

2014-07-15 Thread Dan Kennedy

On 07/16/2014 03:22 AM, Dan Kennedy wrote:

On 07/15/2014 09:06 PM, Mathieu Clabaut wrote:

Hello,

  Digikam bug https://bugs.kde.org/show_bug.cgi?id=323888#c89 seems 
to be

caused by a sqlite memory leak


It looks very much like the program is not calling sqlite3_close().

If you have a small program that shows SQLite leaking memory please 
make it available to us.




  and is said to be corrected with slqlite
3.8.5, but as shown in https://bugs.kde.org/show_bug.cgi?id=321680 a
similar problem appear in digikam 4.0.0 with sqlite 3.8.5.


I'm not sure what causes the crash in the stack trace in comment 8. I 
don't think these crashes are related to bug 323888 in any case. It's 
tough to say what the trace in comment 8 means really, except that the 
heap is corrupted.


The crashes in sqlite3MemCompare() are interesting though. Does the 
framework ever pass anything other than SQLITE_TRANSIENT or 
SQLITE_STATIC to sqlite3_bind_text(), bind_result() or similar?


Actually - it could be that a bad function pointer has been passed to 
one of the sqlite3_create_collation() functions. That would give you 
very similar stack traces anyhow.


Dan.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [bug] sqlite memory corruption (use by digikam)

2014-07-15 Thread Dan Kennedy

On 07/15/2014 09:06 PM, Mathieu Clabaut wrote:

Hello,

  Digikam bug https://bugs.kde.org/show_bug.cgi?id=323888#c89 seems to be
caused by a sqlite memory leak


It looks very much like the program is not calling sqlite3_close().

If you have a small program that shows SQLite leaking memory please make 
it available to us.




  and is said to be corrected with slqlite
3.8.5, but as shown in https://bugs.kde.org/show_bug.cgi?id=321680 a
similar problem appear in digikam 4.0.0 with sqlite 3.8.5.


I'm not sure what causes the crash in the stack trace in comment 8. I 
don't think these crashes are related to bug 323888 in any case. It's 
tough to say what the trace in comment 8 means really, except that the 
heap is corrupted.


The crashes in sqlite3MemCompare() are interesting though. Does the 
framework ever pass anything other than SQLITE_TRANSIENT or 
SQLITE_STATIC to sqlite3_bind_text(), bind_result() or similar?
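
For reference, a sketch of the distinction being asked about; pStmt and
zName are hypothetical:

/* Sketch: the 5th argument to sqlite3_bind_text() tells SQLite who owns
** the buffer. SQLITE_TRANSIENT makes SQLite copy it immediately (always
** safe); SQLITE_STATIC promises the buffer outlives the statement; any
** other value is treated as a destructor that SQLite will call later, so
** a bogus pointer there can corrupt memory long after the bind. */
sqlite3_bind_text(pStmt, 1, zName, -1, SQLITE_TRANSIENT);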


Dan.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] WITH syntax error

2014-07-12 Thread Dan Kennedy

On 07/13/2014 01:24 AM, Staffan Tylen wrote:

According to sqlite3 I'm on 3.8.3:

SQLite version 3.8.3 2014-02-03 14:04:11


I remember now. There was a bug regarding compound SELECT statements 
that use CTEs discovered shortly after 3.8.3 was released:


  http://www.sqlite.org/src/info/67bfd59d9087a987
  http://www.sqlite.org/src/info/31a19d11b97088296a

The fix appeared in 3.8.4. If you upgrade, the statement will work.

You'll note that I said the statement "should" work in 3.8.3. Not that 
it does. :)


Dan.












On Sat, Jul 12, 2014 at 8:06 PM, Dan Kennedy <danielk1...@gmail.com> wrote:


On 07/13/2014 12:29 AM, Staffan Tylen wrote:


The following statement is flagged as invalid, so what's the correct way
of
coding it?

WITH A AS (SELECT 'A'),
 B AS (SELECT 'B')
SELECT *
FROM A
UNION
SELECT *
FROM B
;


This statement should work in SQLite 3.8.3 or newer.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] WITH syntax error

2014-07-12 Thread Dan Kennedy

On 07/13/2014 12:29 AM, Staffan Tylen wrote:

The following statement is flagged as invalid, so what's the correct way of
coding it?

   WITH A AS (SELECT 'A'),
B AS (SELECT 'B')
   SELECT *
   FROM A
   UNION
   SELECT *
   FROM B
   ;


This statement should work in SQLite 3.8.3 or newer.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] pragma and prepare statement

2014-07-10 Thread Dan Kennedy

On 07/10/2014 08:44 PM, Baptiste Daroussin wrote:

Hi,

We are using sqlite intensively in our development and we discovered
that apparently we cannot create a statement with a pragma.

Is there a reason why it is not possible? Is it a bug, or is it by design?


The docs feature the following two points:

* No error messages are generated if an unknown pragma is issued. 
Unknown pragmas are simply ignored. This means if there is a typo in a 
pragma statement the library does not inform the user of the fact.


* Some pragmas take effect during the SQL compilation stage, not the 
execution stage. This means if using the C-language sqlite3_prepare(), 
sqlite3_step(), sqlite3_finalize() API (or similar in a 
wrapper interface), the pragma may run during the sqlite3_prepare() 
call, not during the sqlite3_step() call as normal 
SQL statements do. Or the pragma might run during sqlite3_step() just 
like normal SQL statements. Whether or not the pragma runs during 
sqlite3_prepare() or sqlite3_step() depends on the pragma and on the 
specific release of SQLite.


Do either of them explain what you are seeing?

  https://www.sqlite.org/pragma.html
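
For example, a sketch of a pragma going through the normal prepare/step
interface; which phase it takes effect in depends on the pragma:

/* Sketch: some pragmas act at prepare time, others at step time, and some
** (like this one) also return a result row. */
sqlite3_stmt *pStmt = 0;
int rc = sqlite3_prepare_v2(db, "PRAGMA journal_mode", -1, &pStmt, 0);
if( rc==SQLITE_OK && sqlite3_step(pStmt)==SQLITE_ROW ){
  printf("journal_mode = %s\n", (const char*)sqlite3_column_text(pStmt, 0));
}
sqlite3_finalize(pStmt);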

Dan.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] I want to use CREATE TABLE ... AS SELECT ... - but what if the SELECT returns no rows?

2014-07-10 Thread Dan Kennedy

On 07/10/2014 04:45 PM, Tim Streater wrote:

I have just noticed this syntax which will simplify some table creation for me. 
However in some instances where I want to use it, I have a handy SELECT 
available, but I don't want to actually insert a row at that time. Testing with 
the shell, the following appears to work:

   attach database ':memory:' as mem;
   create table mem.messages as select * from main.messages limit 0;

and I get a new, empty, table with the same columns. Is this supposed to work 
in this way? The info at http://www.sqlite.org/lang_createtable.html doesn't 
say one way or the other. I'd like to take advantage of this, but not if it is 
undefined behaviour that just happens to work.


I don't see why that would be undefined behaviour. Adding "WHERE 0" to 
the SELECT would also work.


Dan.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] documentation bugs

2014-07-09 Thread Dan Kennedy

On 07/09/2014 01:45 PM, Laurent Dami wrote:
* In http://www.sqlite.org/src/wiki?name=Bug+Reports , the href to the 
bug list is http://www.sqlite.org/src/report; should be 
http://www.sqlite.org/src/reportlist


*  In vtab.html: the description of sqlite3_index_info is not up to 
date (missing estimatedRows)


* In compile.html : the compile option SQLITE_ENABLE_FTS4_UNICODE61 is 
missing


Thanks for these. And for pointing out the lack of documentation for the 
FTS4 "^" operator earlier.


New versions of vtab.html and fts3.html are now at:

  http://www.sqlite.org/draft/vtab.html
  http://www.sqlite.org/draft/fts3.html

FTS4 has been updated to match the documentation. So that "unicode61" is 
now enabled by default or disabled by SQLITE_DISABLE_FTS3_UNICODE. As of 
3.8.6, SQLITE_ENABLE_FTS4_UNICODE61 will no longer exist.


  http://www.sqlite.org/src/info/0cc0230ae9cf

Regards,
Dan.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Planner chooses incorrect index

2014-07-08 Thread Dan Kennedy

On 06/25/2014 04:44 PM, João Ramos wrote:

The sample column has some sensitive data that I can't share, so I'll have
to change it and then try to reproduce the problem (I removed it before
because I assumed that it wouldn't be important).

On Mon, Jun 23, 2014 at 3:26 PM, Dan Kennedy <danielk1...@gmail.com> wrote:


On 06/23/2014 05:48 AM, João Ramos wrote:


Here you go:

sqlite_stat1 (before - good planning)
HistoryEntry idx_HistoryEntry_uri_historyOrder 14992 44 5
HistoryEntry idx_HistoryEntry_historyOrder_historyTimestamp 14992 6 2
HistoryEntry idx_HistoryEntry_sourceType_sourceId 14992 2999 2

sqlite_stat1 (after - bad planning)
HistoryEntry idx_HistoryEntry_uri_historyOrder 15492 45 5
HistoryEntry idx_HistoryEntry_historyOrder_historyTimestamp 15492 5 2
HistoryEntry idx_HistoryEntry_sourceType_sourceId 15492 3099 2



The issue, it turned out, is in the sqlite_stat1 data. The first set of 
stat1 data above suggests that using index 
"idx_HistoryEntry_uri_historyOrder" for a (uri=? AND historyOrder=?) 
lookup returns 5 rows. But using 
"idx_HistoryEntry_historyOrder_historyTimestamp" for a (historyOrder=?) 
lookup returns 6 rows. So in this case SQLite prefers the former.


But using the second set of sqlite_stat1 data, both lookups appear to 
return 5 rows. Both strategies are considered equally good. So SQLite 
just picks either. In this case, clearly it's making an unlucky choice.


Version 3.8.5 includes logic to prefer the (uri=? AND historyOrder=?) 
lookup over the (historyOrder=?) one on the basis that it uses a 
superset of the WHERE clause terms. Introduced here:


  http://www.sqlite.org/src/info/683dd379a293b2f3

So upgrading to 3.8.5 might fix the problem.

Dan.



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite-3.8.5: is auto index created multiple times for a single statement?

2014-07-07 Thread Dan Kennedy

On 07/07/2014 03:12 AM, Nissl Reinhard wrote:

Hi,

while preparing this statement


create table gpBestellvorschlagInfo as
select GanttPlanID
, BestellterminRaw
, case when not (
  select max(HinweisCodiert)
  from Bestellvorschläge
  where ArtikelOID = o.ArtikelOID
  and HinweisCodiert <> 1
) is null then
(
  select group_concat(Hinweis, x'0d0a')
  from (
  select distinct Hinweis
  from Bestellvorschläge
  where ArtikelOID = o.ArtikelOID
  order by HinweisCodiert desc
  )
)
else
  null
end as Hinweis
from BestellVorschläge o
where not GanttPlanID is null


I get this log message twice:

automatic index on Bestellvorschläge(ArtikelOID)

I hope that the index is created only once when executing the prepared 
statement. But does the query optimizer consider the costs for creating the 
index twice?


I suspect that two copies of the same automatic index are created. 
Assuming this is the case, the cost calculations will take it into account.





___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Multiple reads and writes to a single DB connection from multiple threads

2014-07-06 Thread Dan Kennedy

On 06/02/2014 08:36 PM, Hick Gunter wrote:

If you compile with SQLITE_THREADSAFE=1 then multiple calls from different 
threads will be serialized by SQLite.

"Serialized" means that only one thread at a time will be allowed to run within SQLite; 
API calls from other threads will block until the currently running thread returns. If your 
application uses virtual tables implemented with native tables, then there may be an issue with 
deadlocks (appl -> sqlite3_step -> virtual table code -> sqlite3_step would cause the 
same thread to recursively enter SQLite).


Both FTS4 and Rtree do this, and both work with SQLite connections in 
serialized mode. SQLite is re-entrant.
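
As a sketch, serialized mode can also be selected at runtime - provided the
library was compiled threadsafe - before any other SQLite call is made:

/* Sketch: equivalent to the SQLITE_THREADSAFE=1 default; must run before
** sqlite3_initialize() or any other use of the library. */
sqlite3_config(SQLITE_CONFIG_SERIALIZED);
sqlite3_initialize();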


Dan.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Fwd: signal 6 during sqlite_step in WAL mode

2014-07-01 Thread Dan Kennedy

On 07/01/2014 12:07 PM, Mattan Shalev wrote:

Hey guys,
I'm getting signal 6 during sqlite_step in WAL mode. Working on Ubuntu
12.04, sqlite3 3.7.9.
One process is writing continuously, while another process reads from
the DB with multi-threaded access. I made sure that sqlite is configured in
serialised mode.


Looks like a corrupted heap. If you run your app under valgrind it will 
probably point you to the problem.


Dan.







Here is the backtrace:

#0 0x7f4f78f0d08d in nanosleep () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x7f4f78f0cf2c in sleep () from /lib/x86_64-linux-gnu/libc.so.6
#2 0x7f4f7982d881 in signal_catch (signo=6) at signal_masker.c:35
#3 
#4 0x7f4f78e83425 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#5 0x7f4f78e86b8b in abort () from /lib/x86_64-linux-gnu/libc.so.6
#6 0x7f4f78ec139e in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#7 0x7f4f78ecbb96 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#8 0x7f4f77661690 in sqlite3_free () from
/usr/lib/x86_64-linux-gnu/libsqlite3.so.0
#9 0x7f4f77664b96 in ?? () from
/usr/lib/x86_64-linux-gnu/libsqlite3.so.0
#10 0x7f4f77664c7c in ?? () from
/usr/lib/x86_64-linux-gnu/libsqlite3.so.0
#11 0x7f4f77664cb0 in ?? () from
/usr/lib/x86_64-linux-gnu/libsqlite3.so.0
#12 0x7f4f7768b477 in ?? () from
/usr/lib/x86_64-linux-gnu/libsqlite3.so.0
#13 0x7f4f7769a249 in ?? () from
/usr/lib/x86_64-linux-gnu/libsqlite3.so.0
#14 0x7f4f776a9689 in ?? () from
/usr/lib/x86_64-linux-gnu/libsqlite3.so.0
#15 0x7f4f776ae451 in sqlite3_step () from
/usr/lib/x86_64-linux-gnu/libsqlite3.so.0
#16 0x7f4f79416557 in get_down_intervals_size (start_time=1403711040,
end_time=1577836800,
size=0x7f3c49c68958) at stats_ext.c:202
#17 0x00407f80 in ns__get_stats_performance (s=0x1e4cb00,
start_time=1403711040, end_time=1577836800,
time_unit=NS_MINUTES, resource_id=0, ret=0x7f3c49c68990) at
msg_handlers/capi_wrappers.c:636
#18 0x004461c1 in soap_serve_ns__get_stats_performance
(soap=0x1e4cb00) at soapServer.c:1294
#19 0x004415ae in soap_serve_request (soap=0x1e4cb00) at
soapServer.c:119
#20 0x00440f3e in soap_serve (soap=0x1e4cb00) at soapServer.c:39
#21 0x00403c90 in process_request (s=0x1e4cb00) at be_server.c:111
#22 0x7f4f79a3de9a in start_thread () from
/lib/x86_64-linux-gnu/libpthread.so.0
#23 0x7f4f78f413fd in clone () from /lib/x86_64-linux-gnu/libc.so.6
#24 0x in ?? ()

Does anyone has an idea?


Thanks!

Mattan.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] hoe to create index to a big table?

2014-06-30 Thread Dan Kennedy

On 06/30/2014 03:30 PM, Hadashi, Rinat wrote:

I work with a very big table, on Linux.
I fail to create an index, getting the following error:
SQL error near line 1: database or disk is full


A CREATE INDEX on a large table uses temporary files to sort the data 
before creating the actual index b-tree. So even if you have enough 
space in the file-system that the database is stored in, you might still 
be running out of temp space. The maximum amount of temp space required 
by a CREATE INDEX is roughly twice the size of the final index on disk.


To set the directory used for temporary files on Linux, set the TMPDIR 
(or SQLITE_TMPDIR) environment variable.
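
For example, a sketch; the directory is hypothetical, and the variable must
be set before SQLite creates its first temporary file:

/* Sketch: point SQLite's sort/temp files at a filesystem with enough free
** space, then run the CREATE INDEX as usual. */
#include <stdlib.h>
setenv("SQLITE_TMPDIR", "/mnt/bigdisk/tmp", 1 /* overwrite */);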


Dan.





Any advice?

Rinat Hadashi


-
Intel Israel (74) Limited

This e-mail and any attachments may contain confidential material for
the sole use of the intended recipient(s). Any review or distribution
by others is strictly prohibited. If you are not the intended
recipient, please contact the sender and delete all copies.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Planner chooses incorrect index

2014-06-23 Thread Dan Kennedy

On 06/23/2014 05:48 AM, João Ramos wrote:

Here you go:

sqlite_stat1 (before - good planning)
HistoryEntry idx_HistoryEntry_uri_historyOrder 14992 44 5
HistoryEntry idx_HistoryEntry_historyOrder_historyTimestamp 14992 6 2
HistoryEntry idx_HistoryEntry_sourceType_sourceId 14992 2999 2

sqlite_stat1 (after - bad planning)
HistoryEntry idx_HistoryEntry_uri_historyOrder 15492 45 5
HistoryEntry idx_HistoryEntry_historyOrder_historyTimestamp 15492 5 2
HistoryEntry idx_HistoryEntry_sourceType_sourceId 15492 3099 2


Unfortunately the last column of the sqlite_stat4 data is missing, 
likely because it contains embedded 0x00 bytes. And without the 
sqlite_stat4 data, it seems that SQLite picks the "good" query plan in 
either case.


Can you dump the sqlite_stat1 and sqlite_stat4 tables using the shell 
".dump" command instead?


Thanks,
Dan.





sqlite_stat4 (before - good planning)
HistoryEntry idx_HistoryEntry_uri_historyOrder 366 340 1 78 78 81 17 17 81
HistoryEntry idx_HistoryEntry_uri_historyOrder 722 489 1 566 566 661 38 96
661
HistoryEntry idx_HistoryEntry_uri_historyOrder 25 25 1 1650 1650 1665 56
352 1665
[Truncated data dump: rows of index statistics for the HistoryEntry table, covering idx_HistoryEntry_uri_historyOrder and idx_HistoryEntry_historyOrder_historyTimestamp; the message is cut off mid-line here.]

Re: [sqlite] R-Tree Storage Optimization for Points

2014-06-19 Thread Dan Kennedy

On 06/20/2014 12:10 AM, Alysson Gonçalves de Azevedo wrote:

sqlite> INSERT INTO abc VALUES(NULL, 20, 10);


*The first column is always a 64-bit signed integer primary key*.


Right, but if you insert NULL it assigns a value automatically. The 
constraint failure is because the minimum value of the first dimension 
is larger than the maximum.



SQLite version 3.8.5 2014-06-19 12:34:33
...
sqlite> CREATE VIRTUAL TABLE abc USING rtree(id,x,y);
sqlite> INSERT INTO abc VALUES(1, 20, 10);
Error: constraint failed

sqlite> INSERT INTO abc VALUES(NULL, 10, 20);
sqlite> SELECT * FROM abc;
1|10.0|20.0






  The other

columns are pairs, one pair per dimension, containing the minimum and
maximum values for that dimension, respectively.



Alysson Gonçalves de Azevedo

"Anarcho-syndicalism is a way of preserving freedom." - Monty Python


2014-06-19 14:06 GMT-03:00 Dan Kennedy <danielk1...@gmail.com>:


On 06/19/2014 11:57 PM, Mohit Sindhwani wrote:


Hi Wolfgang,

On 19/6/2014 11:54 PM, Wolfgang Enzinger wrote:


Not sure why you think you have to store those point coordinates twice.

This works:

sqlite> CREATE VIRTUAL TABLE abc USING rtree(id,x,y);
sqlite> INSERT INTO abc VALUES(1,20,30);
sqlite> SELECT id FROM abc WHERE x>=10 AND x<=30 AND y >=20 AND y<=40;
1
sqlite> SELECT id FROM abc WHERE x>=40 AND x<=50 AND y >=40 AND y<=50;
sqlite>


I do feel a bit stupid after reading your email... but I guess I was
working on the basis that the data we have is 2 dimensional and my
recollection was that we need 2 items per dimension.

Am I reading this wrong?
The SQLite R*Tree module is implemented as a virtual table. Each R*Tree
index is a virtual table with an odd number of columns between 3 and 11.
The first column is always a 64-bit signed integer primary key. The other
columns are pairs, one pair per dimension, containing the minimum and
maximum values for that dimension, respectively. A 1-dimensional R*Tree
thus has 3 columns. A 2-dimensional R*Tree has 5 columns. A 3-dimensional
R*Tree has 7 columns. A 4-dimensional R*Tree has 9 columns. And a
5-dimensional R*Tree has 11 columns. The SQLite R*Tree implementation does
not support R*Trees wider than 5 dimensions.


Probably not. The CREATE TABLE code above actually creates a 1-dimensional
r-tree with deceptive column names. Column "y" contains the maximum value
for the first dimension:

SQLite version 3.8.5 2014-06-19 12:34:33
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.

sqlite> CREATE VIRTUAL TABLE abc USING rtree(id,x,y);
sqlite> INSERT INTO abc VALUES(NULL, 20, 10);
Error: constraint failed
sqlite>



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] R-Tree Storage Optimization for Points

2014-06-19 Thread Dan Kennedy

On 06/19/2014 11:57 PM, Mohit Sindhwani wrote:

Hi Wolfgang,

On 19/6/2014 11:54 PM, Wolfgang Enzinger wrote:

Not sure why you think you have to store those point coordinates twice.

This works:

sqlite> CREATE VIRTUAL TABLE abc USING rtree(id,x,y);
sqlite> INSERT INTO abc VALUES(1,20,30);
sqlite> SELECT id FROM abc WHERE x>=10 AND x<=30 AND y >=20 AND y<=40;
1
sqlite> SELECT id FROM abc WHERE x>=40 AND x<=50 AND y >=40 AND y<=50;
sqlite>


I do feel a bit stupid after reading your email... but I guess I was 
working on the basis that the data we have is 2 dimensional and my 
recollection was that we need 2 items per dimension.


Am I reading this wrong?
The SQLite R*Tree module is implemented as a virtual table. Each 
R*Tree index is a virtual table with an odd number of columns between 
3 and 11. The first column is always a 64-bit signed integer primary 
key. The other columns are pairs, one pair per dimension, containing 
the minimum and maximum values for that dimension, respectively. A 
1-dimensional R*Tree thus has 3 columns. A 2-dimensional R*Tree has 5 
columns. A 3-dimensional R*Tree has 7 columns. A 4-dimensional R*Tree 
has 9 columns. And a 5-dimensional R*Tree has 11 columns. The SQLite 
R*Tree implementation does not support R*Trees wider than 5 dimensions.


Probably not. The CREATE TABLE code above actually creates a 
1-dimensional r-tree with deceptive column names. Column "y" contains 
the maximum value for the first dimension:


SQLite version 3.8.5 2014-06-19 12:34:33
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> CREATE VIRTUAL TABLE abc USING rtree(id,x,y);
sqlite> INSERT INTO abc VALUES(NULL, 20, 10);
Error: constraint failed
sqlite>
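
For an actual 2-D point table all five columns are needed, storing each
coordinate as a degenerate min/max pair. A minimal sketch (hypothetical
table name "pts"):

  sqlite> CREATE VIRTUAL TABLE pts USING rtree(id, minX, maxX, minY, maxY);
  sqlite> INSERT INTO pts VALUES(1, 20, 20, 30, 30);
  sqlite> SELECT id FROM pts WHERE maxX>=10 AND minX<=30 AND maxY>=20 AND minY<=40;
  1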


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Proximity ranking with FTS

2014-06-17 Thread Dan Kennedy

On 06/17/2014 10:48 AM, Josh Wilson wrote:

Yeah I had thought about using the byte distance between words but you get
these instances:

[Example A]
|word1|10charword|word2|

[Example B]
|word1|3charword|4charword|3charword|word2|

By using byte distances, both of these score the same, where Example A
should score more highly.

But it would seem I could use the fts3_tokenizer somehow to get the token
positions - or that this underlying value is available, just not stored in
an accessible manner.


I think it's possible to do. When it visits a row as part of a full-text 
search, internally FTS has a list of matches within the current row for 
each phrase in the query. Each match is stored as a column and token 
offset - the number of tokens that precede the match within the column text.
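
For what it's worth, the byte-offset variant is already exposed through the
built-in offsets() auxiliary function. A minimal sketch (table and query
are made up for illustration):

  sqlite> CREATE VIRTUAL TABLE docs USING fts4(body);
  sqlite> INSERT INTO docs VALUES('word1 10charword word2');
  sqlite> SELECT offsets(docs) FROM docs WHERE docs MATCH 'word1 word2';
  0 0 0 5 0 1 17 5

Each group of four integers is: column number, query term number, byte
offset, byte length. Token offsets, as described above, are not available
through this interface.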


Is that what you need? Do you have any ideas for an fts4 interface for it?

Dan.






I implemented OkapiBM25f [1] but was hoping to implement something like the
following proximity ranking [2] as it combines Bag-Of-Words ranking and
proximity ranking. Although that article proposes to precalculate the
distance pairs for all tokens, I'm happy to accept the TimeCost and
calculate on the fly as that SpaceCost won't be worth it.

[1] https://github.com/neozenith/sqlite-okapi-bm25
[2] http://infolab.stanford.edu/~theobald/pub/proximity-spire07.pdf



--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Proximity-ranking-with-FTS-tp76149p76152.html
Sent from the SQLite mailing list archive at Nabble.com.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] shell core dumps on incomplete init file

2014-06-04 Thread Dan Kennedy

On 06/04/2014 05:06 PM, Rob Golsteijn wrote:

Hi List,

  
I noticed that the sqlite shell core dumps when it is started with an init file that ends with an incomplete statement.


  
Example:


Init file called "my_init.sql"  with the following contents:

 -- note that the line below is NOT a valid sqlite comment line, and hence 
an incomplete sqlite statement

 #.headers on

  
sqlite3 -init my_init.sql my_db.sq3


Cannot reproduce here. Any special options when you built SQLite?

Do you have "valgrind" installed? If so, can you run this under it and 
post the output? Thanks.
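
Something along these lines should do it (assuming valgrind is on the PATH
and sqlite3 was built locally):

  valgrind --leak-check=full ./sqlite3 -init my_init.sql my_db.sq3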


Dan.





  
  
  
  
Result:


-- Loading resources from my_init.sql

Error: incomplete SQL: #.headers on

*** glibc detected *** ./bin/sqlite3: double free or corruption (fasttop): 
0x035ecf80 ***

  
  
  
Same thing happens when typing an incomplete SQL statement in the shell and then pressing CTRL-D.


Sqlite version is 3.8.4.3.

  
  
Regards,


Rob Golsteijn

  
  
___

sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Comments loss in sqlite_master table

2014-05-29 Thread Dan Kennedy

On 05/29/2014 03:42 PM, big stone wrote:

Hello,

I miss the functionality of some other SQL engines that keep the comments
inside an object definition, like a table.

Example : (with sqlite.exe 3.8.3)

create table /* This table has an educative purpose */ toto(x);
create table /* This table has an educative purpose */
toty(x  );
create table
 totz(x  );
select sql from sqlite_master;

... gives :
CREATE TABLE TOTO(X)
CREATE TABLE TOTY(X  )
CREATE TABLE
TOTZ(X)

... instead of the original  full statements :
create table /* This table has an educative purpose */ toto(x)
create table /* This table has an educative purpose */
toty(x  )
create table
 totz(x  )


This sounds quite illogical, as :
-  Newlines and spaces are kept, (if not around a comment)
-  the SQLite internal parser would re-parse (and remove comments) from the
original statement without issue.


I think in practice the text that appears before the first "(" character 
in the CREATE TABLE statement is replaced by "CREATE TABLE <table-name> ". 
This is because if the user enters "CREATE TABLE main.<table-name> ..." we 
don't want to store the "main." prefix in sqlite_master. So I'd say if the 
comment appears inside the parentheses it will probably be preserved.


Of course, even if that is correct, it's just an artifact of the 
implementation, not part of a published interface that one would expect 
not to change in future versions.
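
A quick way to check the behaviour on any given version (a sketch only;
as noted above, the result may change between releases):

  sqlite> CREATE TABLE totz(x /* an educative comment inside the parens */);
  sqlite> SELECT sql FROM sqlite_master WHERE name='totz';
  CREATE TABLE totz(x /* an educative comment inside the parens */)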


Dan.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] vtable vs. OR condition

2014-05-26 Thread Dan Kennedy

On 05/26/2014 01:45 AM, András Kardos wrote:

Hello,

As an experiment I'd like to create various virtual tables (vtables) for
SQLite - filesystem, process list, mailbox - all local or remote. The
idea is to use SQLite as a unified API and query processor to access or
join data from various sources. I read an old article in Dr. Dobb's
(http://goo.gl/khg1pN), and it mentions an issue with OR conditions vs.
virtual tables: if a query has these, the filter conditions will not be
passed to the xBestIndex call, and thus not to xFilter, so access to the
virtual table will always be done as a table scan, which is quite
inefficient. Rewriting queries, as the article suggests, is not an option,
as that would force users into an unusual SQL style just to make things
faster. Are there any advancements in this topic?


It's no longer true. If you issue a query like:

  SELECT * FROM vtab WHERE vtab.a=? OR vtab.b=?

then SQLite makes separate calls to xBestIndex for (vtab.a=?) and 
(vtab.b=?). If the virtual table implementation reports that both of 
these can be performed efficiently, SQLite makes two separate sets of 
calls to xFilter/xNext (one for each vtab.*=? condition) and merges the 
results before returning them to the caller.


Dan.








  Any
experiences with this issue? Or any suggestions for other query processing
engines I might use?

Thanks,
Andris
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Pre-preparing querys

2014-05-20 Thread Dan Kennedy

On 05/20/2014 07:30 AM, James K. Lowden wrote:

On Mon, 19 May 2014 22:26:29 +0100
Simon Slavin <slav...@bigfraud.org> wrote:


On 19 May 2014, at 10:21pm, Roger Binns <rog...@rogerbinns.com> wrote:


It seems like most language wrappers for SQLite include some sort of
statement cache because it is generally useful.  It seems like the
sort of thing that would be helpful within the core of SQLite
itself, or as an officially supported extra extension.

Are there any advantages to this other than increased speed ?

I'm going to go out on a limb and say No, and moreover it's not obvious
there would be any improvement.  Do we have statistics on the query
planner's performance?  How much improvement could the user
(reasonably) hope for?

I would expect the answer to be Not Much.


On a modern computer I/O dominates everything, including SQL parsing.


I expect that's true on many (all?) systems, but not all queries cause 
any actual I/O. Often the entire database sits in the OS cache or SQLite 
pager cache, so querying the database just means shuffling bytes around 
in main memory.


I vaguely recall that at one point for the simplest possible queries:

  SELECT * FROM t1 WHERE rowid = ?

if the database is already in memory the prepare() and step() calls are 
roughly similar in terms of real time. And most of that is the implicit 
transaction - executed within a BEGIN/COMMIT block the prepare() step is 
much more expensive than the step().
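
A rough way to see the per-statement transaction overhead from the shell
(illustrative only; separating prepare() from step() properly requires
timing the C API calls directly):

  sqlite> .timer on
  sqlite> SELECT * FROM t1 WHERE rowid = 1;   -- implicit transaction per statement
  sqlite> BEGIN;
  sqlite> SELECT * FROM t1 WHERE rowid = 1;   -- runs inside the open transaction
  sqlite> COMMIT;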


I'm not sure about more complicated queries. But I suspect it's very 
easy to find cases where the prepare() is at least as heavy as the step().


Dan.











In theory very complex queries would be the exception, except that query
planners long ago developed heuristic shortcuts.  Mathematically, an
N-way join is a combinatorial problem with N! solutions.  When N is,
say, 8, that's a lot to consider, 40,320 alternatives.  A shortcut in
such a case becomes the only cut.

Even the perfect plan, by the way, may not be worth preserving.  As the
developers know,

No battle plan survives first contact with the enemy.
--Helmuth von Moltke

meaning that any query plan, no matter how fine, is based only on
the state of the data at the time.  Change the data enough and it
becomes slow, or useless.

That's just an outline of the general case.  I'll be interested to see
what we know about SQLite's in particular.

--jkl
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] More LSM leak

2014-05-08 Thread Dan Kennedy

On 05/08/2014 06:59 AM, Charles Samuels wrote:

This cursor leak can be consistently reproduced by my test program, but
it doesn't occur every time you create and delete the cursor.

The files you'll need are:
http://www.derkarl.org/~charles/lsm/smaller.trace.bz2
http://www.derkarl.org/~charles/lsm/runlsm.cpp

(The latter of which has changed since the last time I provided it to this
list)


Hi,

Thanks for doing this.

I don't think the script is closing all the connections. There are
4 "open" commands and only 1 "close". I started adding a few "close"
commands, which seemed to help, but then I got an LSM_MISUSE error
(the tester program threw an exception) indicating that I was trying
to close a connection without closing all of its cursors first.

Even so - the "8 bytes definitely lost" is suspicious. I think that
one at least might be a real leak.

I'm guessing the smaller.trace script is somehow created automatically.
If it's easy enough to add the missing "close" commands (assuming they
really are missing), I'd like to see if it still leaks.

Thanks,
Dan.







$ cat smaller.trace | valgrind --leak-check=yes   ~/a.out lsm
==24046== Memcheck, a memory error detector
==24046== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al.
==24046== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info
==24046== Command: /home/charles/a.out lsm
==24046==
==24046==
==24046== HEAP SUMMARY:
==24046== in use at exit: 39,507 bytes in 24 blocks
==24046==   total heap usage: 4,406,026 allocs, 4,406,002 frees, 149,648,738
bytes allocated
==24046==
==24046== 8 bytes in 1 blocks are definitely lost in loss record 1 of 22
==24046==    at 0x4C28BED: malloc (vg_replace_malloc.c:263)
==24046==    by 0x41A61B: lsmPosixOsMalloc (lsm_unix.c:472)
==24046==    by 0x40AF95: lsmMallocZero (lsm_mem.c:50)
==24046==    by 0x40B000: lsmMallocZeroRc (lsm_mem.c:69)
==24046==    by 0x40F5DC: multiCursorAddAll.isra.21 (lsm_sorted.c:2387)
==24046==    by 0x40F64C: multiCursorInit (lsm_sorted.c:2400)
==24046==    by 0x411C6E: lsmMCursorNew (lsm_sorted.c:2495)
==24046==    by 0x40A52F: lsm_csr_open (lsm_main.c:774)
==24046==    by 0x405539: main (runlsm.cpp:255)
==24046==


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] LSM Leaks memory

2014-05-07 Thread Dan Kennedy

On 05/07/2014 04:51 AM, sql...@charles.derkarl.org wrote:

In a more complex program, LSM seems to leak memory without bound, causing
problems for my application.

Are bug reports against LSM even helpful?


I think they are. Thanks for the report. Now fixed here:

http://www.sqlite.org/src4/info/8a39847dafa3047ba5d6107f0032c6b39d0ef104

Dan.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] FTS3/4 merge function behaviour when deleting rows

2014-05-02 Thread Dan Kennedy

On 05/02/2014 07:30 PM, Stadin, Benjamin wrote:

Is the delete marker also set on old keys on UPDATE? Or just DELETE ->
INSERT?


Internally, FTS implements an UPDATE as a DELETE followed by an INSERT.

Dan.





I ran into the ever-growing FTS index issue last year. I’m creating DB
diffs which also contain some FTS3/4 tables. The tables get constantly
updated for the checksum.
The DBs were always vacuum’ed, but the growing FTS index caused the db to
grow from 50MB to 150MB+ in my case (and I think the „non-grown“ FTS
tables including indices accounted for about 1/3 of the whole DB size
according to the SQLite analyzer). On "server side“ (DBs are created on a
server and then deployed to some smartphone apps) the issue was easily
resolvable with the optimize command. On the client side (smartphone), however,
each diff still causes the DB to grow - optimize() is likewise not possible
there, for the same reasons.

Ben


Am 02.05.14 11:22 schrieb "Dan Kennedy":


So I'm thinking a solution might be:

* Fix FTS so that it picks this case - when a merge includes so many
  delete markers that the output is small enough to be deemed a level-N
  b-tree, not a level-N+1 b-tree, and

* Instead of using the default 16-way merges, the app could organize
  to periodically invoke the "merge=X,Y" command with a smaller Y value
  (say 2) to limit the maximum size of the index to Y times its optimal
  size (instead of 16 times).

It is an interesting problem. And the above is just guesswork... It would
be good to verify experimentally that the index really does grow
indefinitely
with this kind of input before trying to "fix" anything.

Dan.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] FTS3/4 merge function behaviour when deleting rows

2014-05-02 Thread Dan Kennedy

On 05/02/2014 04:13 PM, Andrew Moss wrote:

On 2 May 2014 07:57, Dan Kennedy <danielk1...@gmail.com> wrote:


On 05/01/2014 03:30 PM, andrewmo wrote:


We are using the FTS3 extension to sqlite to store large numbers of short
(~300 byte) documents. This is working very well and providing us with
very
fast text search, but the behaviour around deletion of documents has me
confused.

Our system must control the size of the database and will delete the
oldest documents when the database size breaches a certain limit. I now
understand from comments on this mailing list and elsewhere that this is
not
an optimal pattern for the FTS extension as doclists for the oldest
documents are the least likely to be 'merged'.

My question is, does this actually work at all? If I delete a row from my
FTS4 table (resulting in a new empty doclist being added to the index),
then
I subsequently add many (1000s) new documents and call the 'merge'
function
several times (automerge is also enabled), is there any guarantee that the
empty doclist and the populated doclist that it superseded will ever be
removed? My testing suggests this isn't the case.

I have a 1GB database with 6million documents. If I keep adding new
documents at around 1 per second and deleting documents when the size of
the
data goes beyond 1GB, the size of the index seems to grow and the number
of
documents I can store in the 1GB file seems to decrease in a linear manner.

Calling the 'optimize' function seems to solve this issue (removing all
the
dead doclists), but that isn't practical for our software, as it implies
some downtime for our high availability service due to the long execution
time of the optimize function (Could be minutes for a 1GB file).

I have seen this
(http://sqlite.1065341.n5.nabble.com/fts3-database-grows-td42069.html)
post
from 2008. However, it predates the 'automerge' and manual merge features,
and from the documentation I assumed these new features would delete all
the
data related to deleted documents. Am I incorrect in my assumption?

Thanks for any clarification you can offer.


Normally, when you write to an FTS index (either to add new doclists or to
add delete markers) the new entries are accumulated in-memory for a while
and then flushed to a new "level-0" b-tree. A level-0 b-tree is often
roughly 1MB in size. Once there are 16 level-0 b-trees, they are merged
and written to a single level-1 b-tree. Once there are 16 level-1
b-trees...
And so on.

So when an entry is deleted from the FTS index, a delete marker is added.
But the original doclists are not actually deleted until the delete marker
and the doclists are merged into the same b-tree. Delete markers are
discarded when they are merged into the oldest b-tree in the index.

At first glance it seems (to me) that this means the index might grow to
anything up to 16 times its "optimized" size. But I think it's actually
worse than that.

Say your entire database fits into a single level-N b-tree. You keep adding
data (and delete markers) until there are 15 level-N b-trees and almost
enough data to create the 16th in lower levels. So at this point the FTS
index is 16 times its optimal size. If you then add even more data so that
the 16th level-N b-tree is created, everything gets merged together and
we're back in the optimal state - everything in a single b-tree. However -
this b-tree is deemed to be a level-N+1 b-tree. Meaning that this time,
much more data will have to be added before everything is merged together
again.

So I'm thinking a solution might be:

   * Fix FTS so that it picks this case - when a merge includes so many
 delete markers that the output is small enough to be deemed a level-N
 b-tree, not a level-N+1 b-tree, and

   * Instead of using the default 16-way merges, the app could organize
 to periodically invoke the "merge=X,Y" command with a smaller Y value
 (say 2) to limit the maximum size of the index to Y times its optimal
 size (instead of 16 times).

It is an interesting problem. And the above is just guesswork... It would
be good to verify experimentally that the index really does grow
indefinitely
with this kind of input before trying to "fix" anything.

Dan.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Thanks for your response Dan. Our application currently runs "merge=300,8"
(in a loop until no more modifications are being made) whenever it deletes
records due to the database size or after a certain number of insertions
have been made.

I did read the advice in the documentation that this should be "run in an
idle thread". But adding an extra thread to trigger this seemed like
unnecessary complication to our system, where spiky insertion performance
isn't too much of an issue.

We are currently running a test (been going for 6 days so f

Re: [sqlite] FTS3/4 merge function behaviour when deleting rows

2014-05-02 Thread Dan Kennedy

On 05/01/2014 03:30 PM, andrewmo wrote:

We are using the FTS3 extension to sqlite to store large numbers of short
(~300 byte) documents. This is working very well and providing us with very
fast text search, but the behaviour around deletion of documents has me
confused.

Our system must control the size of the database and will delete the
oldest documents when the database size breaches a certain limit. I now
understand from comments on this mailing list and elsewhere that this is not
an optimal pattern for the FTS extension as doclists for the oldest
documents are the least likely to be 'merged'.

My question is, does this actually work at all? If I delete a row from my
FTS4 table (resulting in a new empty doclist being added to the index), then
I subsequently add many (1000s) new documents and call the 'merge' function
several times (automerge is also enabled), is there any guarantee that the
empty doclist and the populated doclist that it superseded will ever be
removed? My testing suggests this isn't the case.

I have a 1GB database with 6million documents. If I keep adding new
documents at around 1 per second and deleting documents when the size of the
data goes beyond 1GB, the size of the index seems to grow and the number of
documents I can store in the 1GB file seems to decrease in a linear manner.

Calling the 'optimize' function seems to solve this issue (removing all the
dead doclists), but that isn't practical for our software, as it implies
some downtime for our high availability service due to the long execution
time of the optimize function (Could be minutes for a 1GB file).

I have seen this
(http://sqlite.1065341.n5.nabble.com/fts3-database-grows-td42069.html) post
from 2008. However, it predates the 'automerge' and manual merge features,
and from the documentation I assumed these new features would delete all the
data related to deleted documents. Am I incorrect in my assumption?

Thanks for any clarification you can offer.


Normally, when you write to an FTS index (either to add new doclists or to
add delete markers) the new entries are accumulated in-memory for a while
and then flushed to a new "level-0" b-tree. A level-0 b-tree is often
roughly 1MB in size. Once there are 16 level-0 b-trees, they are merged
and written to a single level-1 b-tree. Once there are 16 level-1 b-trees...
And so on.

So when an entry is deleted from the FTS index, a delete marker is added.
But the original doclists are not actually deleted until the delete marker
and the doclists are merged into the same b-tree. Delete markers are
discarded when they are merged into the oldest b-tree in the index.

At first glance it seems (to me) that this means the index might grow to
anything up to 16 times its "optimized" size. But I think it's actually
worse than that.

Say your entire database fits into a single level-N b-tree. You keep adding
data (and delete markers) until there are 15 level-N b-trees and almost
enough data to create the 16th in lower levels. So at this point the FTS
index is 16 times its optimal size. If you then add even more data so that
the 16th level-N b-tree is created, everything gets merged together and
we're back in the optimal state - everything in a single b-tree. However -
this b-tree is deemed to be a level-N+1 b-tree. Meaning that this time,
much more data will have to be added before everything is merged together
again.

So I'm thinking a solution might be:

  * Fix FTS so that it picks this case - when a merge includes so many
delete markers that the output is small enough to be deemed a level-N
b-tree, not a level-N+1 b-tree, and

  * Instead of using the default 16-way merges, the app could organize
to periodically invoke the "merge=X,Y" command with a smaller Y value
(say 2) to limit the maximum size of the index to Y times its optimal
size (instead of 16 times).
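
For reference, the merge command is issued through the special FTS INSERT
syntax. With the smaller Y value suggested above, and a hypothetical table
named "docs", that would look like:

  INSERT INTO docs(docs) VALUES('merge=300,2');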

It is an interesting problem. And the above is just guesswork... It would
be good to verify experimentally that the index really does grow 
indefinitely

with this kind of input before trying to "fix" anything.

Dan.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] LSM bug

2014-04-29 Thread Dan Kennedy

On 04/29/2014 03:53 AM, sql...@charles.derkarl.org wrote:

I didn't know this list strips attachments, so the source file is here:

http://derkarl.org/~charles/runlsm.cpp


Thanks for this. It is a problem.

LSM accumulates data in a tree structure in shared-memory until there is 
"enough" (~1-2 MB) to flush through to the database file. But at the 
moment, it can only flush data to the db file between transactions. And 
the in-memory tree can only hold 2GB of data (including overhead). So 
things fail if a single transaction exceeds that limit. In the short 
term, it should be changed to return LSM_FULL for any transaction too 
large to handle. But the real fix should be to change things so that LSM 
can begin flushing data to the database file mid-transaction.


Dan.







On Monday, April 28, 2014 01:41:02 PM sql...@charles.derkarl.org wrote:

Hi,

I'm not exactly sure this is the right forum for my problem, as I know that
LSM is experimental, but here we go.

I tried loading a whole lot of data using LSM. The majority of the data
goes int a single huge transaction (begin/commit pair). My program
segfaults once we're 1.61GB into my data file.

I have attached the source code to my test program. You also will need my
data file, which is too big for email:

http://www.derkarl.org/~charles/lsmlog.bz2 (744 MiB)

Here is how you can run my test program:

bunzip2 < lsmlog.bz2 | pv | ./a.out lsmdbtocreate

(You can exclude "pv" from the pipeline if you don't have it installed)

Here is the backtrace:

treeShmalloc (pDb=pDb@entry=0x12b20a8, bAlign=bAlign@entry=1,
nByte=nByte@entry=12, pRc=pRc@entry=0x7fff2fd43f44)
 at src/lsm_tree.c:682
682 pNext->iNext = 0;
(gdb) bt
#0  treeShmalloc (pDb=pDb@entry=0x12b20a8, bAlign=bAlign@entry=1,
nByte=nByte@entry=12, pRc=pRc@entry=0x7fff2fd43f44)
 at src/lsm_tree.c:682
#1  0x0041122d in treeShmallocZero (pDb=pDb@entry=0x12b20a8,
nByte=nByte@entry=12, piPtr=piPtr@entry=0x7fff2fd43f4c,
 pRc=pRc@entry=0x7fff2fd43f44) at src/lsm_tree.c:711
#2  0x00413114 in newTreeLeaf (pRc=0x7fff2fd43f44,
piPtr=0x7fff2fd43f4c, pDb=0x12b20a8) at src/lsm_tree.c:726
#3  treeInsertLeaf (iSlot=1, iTreeKey=2146172860, pCsr=0x7fff2fd43f50,
pDb=0x12b20a8) at src/lsm_tree.c:1039
#4  treeInsertEntry (pDb=pDb@entry=0x12b20a8, flags=8,
pKey=pKey@entry=0x12b2058, nKey=nKey@entry=17, pVal=<optimized out>,
 pVal@entry=0x12bb638, nVal=21) at src/lsm_tree.c:1552
#5  0x0041329f in lsmTreeInsert (pDb=pDb@entry=0x12b20a8,
pKey=pKey@entry=0x12b2058, nKey=nKey@entry=17,
 pVal=pVal@entry=0x12bb638, nVal=<optimized out>) at src/lsm_tree.c:1587
#6  0x00404db0 in doWriteOp (pDb=0x12b20a8, bDeleteRange=<optimized out>, pKey=0x12b2058, nKey=17, pVal=0x12bb638,
 nVal=<optimized out>) at src/lsm_main.c:696
#7  0x0040305d in main (argc=2, argv=0x7fff2fd44418) at
runlsm.cpp:41 (gdb) print pNext
$1 = (ShmChunk *) 0x



It's a pity that LSM isn't ready for production, because if the quality of
sqlite3 is any indication, I'm going to really enjoy using it!

Charles

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] problem with INSERT after ALTER TABLE ... ADD COLUMN ... DEFAULT ... performed on source table

2014-04-26 Thread Dan Kennedy

On 04/26/2014 05:23 AM, Hinrichsen, John wrote:

Default non-NULL values copied from a column that was added using "ALTER
TABLE ... ADD COLUMN ... DEFAULT ..." are inserted into another table as
NULLs when copied using "INSERT INTO ... SELECT * FROM ..."

However, the same values are propagated correctly when "CREATE TABLE ... AS
SELECT * FROM ..." is executed.

See example below:

$ sqlite3
SQLite version 3.8.4.3 2014-04-03 16:53:12
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> .header on
sqlite> .mode column
sqlite> CREATE TABLE a(a);
sqlite> INSERT INTO a VALUES(1);
sqlite>
sqlite> ALTER TABLE a ADD COLUMN b DEFAULT 2;
sqlite>
sqlite> CREATE TABLE b AS SELECT * FROM a;
sqlite> INSERT INTO b SELECT * FROM a;
sqlite>
sqlite> SELECT * FROM a;
a   b
--  --
1   2
sqlite> SELECT * FROM b;
a   b
--  --
1   2
1
sqlite>


Thanks for reporting this. Now fixed on the trunk. Ticket is here:

http://www.sqlite.org/src/info/f67b41381acce9a881fb081c4a20ecbeb2429b0d

Dan.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Trigger cascade/nesting

2014-04-23 Thread Dan Kennedy

On 04/23/2014 05:05 PM, Peter Hardman wrote:

Hi,

Is there a way of preventing triggers cascading?

I have a table where I need to set a timestamp field when a new record 
is INSERTED. The table uses an auto-increment key field so a BEFORE 
INSERT trigger is ruled out. An AFTER INSERT trigger which UPDATEs the 
new row solves that problem, but I have a second, AFTER UPDATE, 
trigger on the table to save an audit record when the table is 
updated. I need a way of preventing the AFTER UPDATE trigger running 
as a result of the UPDATE command in the AFTER INSERT trigger.


Is this possible? I thought a SELECT RAISE(IGNORE) in the AFTER INSERT 
trigger would do the trick, but it has no effect. I have recursive 
triggers disabled.


SQL:

CREATE TABLE audit_test_a (
rec_id INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
field_1 INTEGER NOT NULL,
field_2 VARCHAR(32) NOT NULL,
field_3 TEXT NOT NULL,
last_changed TIMESTAMP NOT NULL)

CREATE TRIGGER audit_test_a_set_insert_ts AFTER INSERT ON audit_test_a 
FOR EACH ROW

BEGIN
UPDATE audit_test_a
set last_changed = strftime('%Y-%m-%d %H:%M:%f','now', 
'localtime');

SELECT RAISE(IGNORE);
END

CREATE TRIGGER audit_test_a_audit_update AFTER UPDATE ON audit_test_a 
FOR EACH ROW

BEGIN
UPDATE audit_test_a
set last_changed = strftime('%Y-%m-%d %H:%M:%f', 'now', 
'localtime')

WHERE rec_id = old.rec_id;
INSERT INTO change_history VALUES ();
END

Thanks for your help



You could try:

  CREATE TRIGGER audit_test_a_audit_update
  AFTER UPDATE OF (all-columns-except-last_changed) ON ...
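
Spelled out for the schema above, that might look like the following
(a sketch - every column except last_changed must be listed explicitly):

  CREATE TRIGGER audit_test_a_audit_update
  AFTER UPDATE OF field_1, field_2, field_3 ON audit_test_a
  FOR EACH ROW
  BEGIN
  UPDATE audit_test_a
  set last_changed = strftime('%Y-%m-%d %H:%M:%f', 'now', 'localtime')
  WHERE rec_id = old.rec_id;
  -- audit-record INSERT goes here
  END;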


Dan.



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sqlite Instead of Trigger on Views does not always fire..

2014-04-02 Thread Dan Kennedy

On 04/02/2014 06:08 PM, Engin Guelen wrote:

Hello,
  
I noticed that INSTEAD OF UPDATE triggers on views do not fire once the views get a bit more involved.


The following view and trigger code compiles without errors.
  
But when changing the SYNC column nothing happens to REF_TAB, which means the trigger does not fire.

A drastically simpler view definition, in contrast, does what I expect in a
similar context.
  
Does anybody have an idea what might be the issue?


Does the WHERE clause on the UPDATE statement match any rows returned by 
the view? And does the UPDATE actually modify column SYNC?







  
Regards,

Engin

-

CREATE VIEW X_DEV2REF
AS
SELECT 'x' SYNC,C.ENV ENV,'' TV,C.SCHEMA SCHEMA,C.TAB TAB,'' COL,'' PF,'' U,'' 
NN,'' INC,'' CACHE,'' TYPE,'' LENGTH,'' SCALE,IFNULL(S.ALTERED,R.CREATED) 
ALTERED
FROM
(
SELECT ENV,SCHEMA,TAB
FROM
(
   SELECT 'DEV' ENV,SCHEMA,TAB
   FROM ENV_TAB
   WHERE ENV = 'DEV'
   EXCEPT
   SELECT 'DEV' ENV,SCHEMA,TAB
   FROM   REF_TAB
)
UNION ALL
SELECT ENV,SCHEMA,TAB
FROM
(
   SELECT 'REF' ENV,SCHEMA,TAB
   FROM   REF_TAB
   EXCEPT
   SELECT 'REF' ENV,SCHEMA,TAB
   FROM ENV_TAB
   WHERE ENV = 'DEV'
)
) C
LEFT JOIN ENV_TAB S ON (S.ENV=C.ENV AND S.TAB=C.TAB)
LEFT JOIN REF_TAB R ON (C.ENV='REF' AND R.TAB=C.TAB)
union
SELECT 'x' 
SYNC,C.ENV,C.TV,C.SCHEMA,C.TAB,C.COL,C.PF,C.U,C.NN,C.INC,C.CACHE,C.TYPE,C.LENGTH,C.SCALE,T.ALTERED
FROM
(
SELECT ENV,TV,SCHEMA,TAB,PF,U,COL,NN,INC,CACHE,TYPE,LENGTH,SCALE
FROM
(
   SELECT 'DEV' ENV,TV,SCHEMA,TAB,PF,U,COL,NN,INC,CACHE,TYPE,LENGTH,SCALE
   FROM ENV_COL
   WHERE ENV = 'DEV'
   EXCEPT
   SELECT 'SIT' ENV,TV,SCHEMA,TAB,PF,U,COL,NN,INC,CACHE,TYPE,LENGTH,SCALE
   FROM REF_COL
)
WHERE TAB NOT IN (SELECT TAB FROM X_DEV2SIT_TAB)
UNION ALL
SELECT ENV,TV,SCHEMA,TAB,PF,U,COL,NN,INC,CACHE,TYPE,LENGTH,SCALE
FROM
(
   SELECT 'REF' ENV,TV,SCHEMA,TAB,PF,U,COL,NN,INC,CACHE,TYPE,LENGTH,SCALE
   FROM REF_COL
   EXCEPT
   SELECT 'DEV' ENV,TV,SCHEMA,TAB,PF,U,COL,NN,INC,CACHE,TYPE,LENGTH,SCALE
   FROM ENV_COL
   WHERE ENV = 'DEV'
)
WHERE SCHEMA||'.'||TAB NOT IN
(
   SELECT DISTINCT SCHEMA||'.'||TAB
   FROM
   (
  SELECT ENV,SCHEMA,TAB
  FROM
  (
SELECT 'DEV' ENV,SCHEMA,TAB
FROM ENV_TAB
WHERE ENV = 'DEV'
EXCEPT
SELECT 'DEV' ENV,SCHEMA,TAB
FROM   REF_TAB
  )
  UNION ALL
  SELECT ENV,SCHEMA,TAB
  FROM
  (
SELECT 'REF' ENV,SCHEMA,TAB
FROM   REF_TAB
EXCEPT
SELECT 'REF' ENV,SCHEMA,TAB
FROM ENV_TAB
WHERE ENV = 'DEV'
  )
   )
)
) C
JOIN ENV_TAB T ON (T.ENV=C.ENV AND C.TAB=T.TAB)
ORDER BY TV,TAB,COL,ENV;
  
CREATE TRIGGER TR_X_DEV2REF_01  INSTEAD OF UPDATE of SYNC ON X_DEV2REF

BEGIN
 delete from REF_TAB where 1=1; -- Just to Check if the Trigger fired..
END;
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Using a customized collate function to mimic integer array type.

2014-03-21 Thread Dan Kennedy

On 03/21/2014 10:33 PM, Ben Peng wrote:

Hi, Tristan,

Your solution definitely works (we have defined a few custom functions) but
our application hides databases from users but allows users to use simple
conditions to retrieve results. To use this function, we would have to

1. teach users use this function, which is hard to do because it is
application specific and they need to know which fields need to use this
function.

2. translate user input to use this function internally. We will need to
use a separate table to record the fields that need translation, and change
user input accordingly.

A COLLATE function seems to mark the columns directly and save us from the
trouble of translating user input, so it might be a better solution in this
case.


SQLite assumes that collation sequences are internally consistent. From 
the docs:


 1. If A==B then B==A.
 2. If A==B and B==C then A==C.
 3. If A<B then B>A.
 4. If A<B and B<C then A<C.

But this would not be true of the proposed collation sequence. So 
although some queries will appear to work, you'll be in trouble if 
SQLite ever decides to create an automatic index based on the collation 
sequence. Or if anybody ever uses an ORDER BY or GROUP BY clause that 
uses it. Various other problems can likely arise if the optimizer starts 
commuting expressions and so on, which it sometimes does.


Of course I don't know exactly what kinds of queries you are expecting, 
but this seems like the kind of thing that will come back and bite you 
to me.


Dan.



On Fri, 2014-03-21 at 09:35 -0500, Ben Peng wrote:

Dear sqlite experts,

I'm far from an "sqlite expert", others should be able to
provide a more authoritive answer

I think what you want is rather to simply define your own custom
function to implement a custom match.

I think using COLLATE is wrong in this case, because the use case of
COLLATE is generally to implement ORDER BY, or perhaps to build a custom
index (to replace the regular strcmp() like behavior) but what you want
is a rather loose matching algorithm.

If you create a function using sqlite3_create_function(), similar to how
you might define a REGEX function, you might execute a query with:

   SELECT * FROM table WHERE custom_match (table.column, 42) = 0;

And then, lets assume that "custom_match" uses sqlite3_result_int()
to set the result to 0 if there is a match, or -1 if there is no match,
then you should get a match for any row where table.column = 158;42;76

You can of course give more parameters to "custom_match", so that you
can use invocations like this:
   custom_match (GREATER, table.column, 42)
   custom_match (EQUALS, table.column, 42)
   custom_match (LESSER, table.column, 42)

Make sense ?

Cheers,
 -Tristan


I have an application where data stored in columns can be lists of

integers

(e.g. 158;42;76). I cannot really split such record into multiple records
(one for 158, one for 42 etc) and I am currently storing them as VARCHAR
because they represent a complete piece of information. However, when I
select records, I would like to compare or match any of the values. For
example, the record could be selected by either

C > 100 (match 158)
C < 100 (match 42 and 76)
C = 42 (match 42)

 From what I have read so far, I could define a special collate function

for

these columns (sqlite3_create_collation etc), but before I jump into the
details, does anyone know

1. Is this the correct way to proceed?
2. Has anyone done anything similar so that I do not have to reinvent the
wheel?

Thank you very much,
Bo
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Using ICU case folding support

2014-03-19 Thread Dan Kennedy

On 03/19/2014 09:44 PM, Aleksey Tulinov wrote:


I've created test database:

sqlite> CREATE TABLE test (x COLLATE NOCASE);
sqlite> INSERT INTO test VALUES ('s');
sqlite> INSERT INTO test VALUES ('S');
sqlite> INSERT INTO test VALUES ('ё'); -- Russian e with diacritic
sqlite> INSERT INTO test VALUES ('Ё'); -- Russian E with diacritic

Then created index in ICU-disabled SQLite version:

sqlite> SELECT 'ё' LIKE 'Ё';
0
sqlite> .schema
CREATE TABLE test (x COLLATE NOCASE);
sqlite> CREATE INDEX idx_x ON test (x);

Then tried it in ICU-enabled SQLite version:


ICU-enabled or nunicode-enabled?

ICU does not modify the behaviour of existing collation sequences. So 
there is no problem there (apart from the original problem - that the 
ICU extension does not provide anything that can be used to create a 
case-independent collation sequence).


An index is a sorted list. And queries like this:


sqlite> SELECT * FROM test WHERE x = 'ё';


do a binary search of that list to find keys equal to 'ё'. But to do a 
binary search of an ordered list, you need to be using a comparison 
function compatible with that used to sort the list in the first place. 
Say I have the following list, sorted using a unicode aware NOCASE 
collation:


  (Ä, ä, Ë, ë, f)

Also assume that all characters in the list have umlauts adorning them.

Then I open the db using regular SQLite and try searching for "ä". 
Obviously the binary search fails - the first comparison compares the 
seek key "ä" with "Ë", incorrectly concludes that the key "ä" is larger 
than "Ë" and goes on to search the right-hand side of the index. The 
search fails.


Then say this search is part of a delete operation to remove a row from 
the database. The table row itself might be removed correctly, but the 
corresponding index key is not - because a search fails to find it. At 
that point you have an inconsistent table and index. A corrupt database.


In the future, we might have a similar problem in FTS. FTS offers a 
home-grown tokenizer named "unicode61" that folds case in the same 
unicode-aware way as nunicode. If the unicode standard changes to define 
more pairs of case equivalent characters, we will not be able simply 
upgrade "unicode61". For the same reasons - modifying the comparison 
function creates an incompatible system. Instead, we would name it 
"unicode62" or similar, to be sure that databases created using the old 
version continue to use it.


Dan.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Using ICU case folding support

2014-03-19 Thread Dan Kennedy

On 03/19/2014 07:55 PM, Aleksey Tulinov wrote:

On 03/18/2014 10:02 PM, Alex Loukissas wrote:

Alex,

I suppose I can declare the column as BINARY and use LOWER( ) in my 
select
statements. Browsing through the code though, I do see uses of 
u_foldCase

in certain places, which leads me to believe that what I want may be
already there. I'll try to unit test this.



Correct me if I'm wrong, but I think I reproduced the bug you've described 
in SQLite with the ICU extension compiled in:


sqlite> CREATE TABLE test (x COLLATE NOCASE);
sqlite> INSERT INTO test VALUES ('s');
sqlite> INSERT INTO test VALUES ('S');
sqlite> INSERT INTO test VALUES ('ё'); -- Russian e with diacritic
sqlite> INSERT INTO test VALUES ('Ё'); -- Russian E with diacritic
sqlite> SELECT * FROM test WHERE x = 'ё';
ё
sqlite> SELECT * FROM test WHERE x = 's';
s
S

This might be a point of interest for SQLite ICU extension developers.

On the other hand, the SQLite nunicode extension, which I developed, does this:

sqlite> SELECT * FROM test WHERE x = 'ё';
ё
sqlite> .load ./libnusqlite3.so
sqlite> SELECT * FROM test WHERE x = 'ё';
ё
Ё


One problem with this sort of thing (overriding built-in collation 
sequences) is that you need to be careful never to accidentally use the 
wrong version.


If you were to create an index using collation sequence "NOCASE" and 
populate the db using a stock SQLite, then query using a nunicode 
enhanced version, the results would be unpredictable. And running 
"PRAGMA integrity_check" would report corruption. If you were to write 
to the db using both a regular SQLite and a nunicode version the 
database would become corrupt in the sense that no version would be able 
to query it reliably.


For that reason, the SQLite ICU extension does not automatically 
override the "NOCASE" collation sequence the same way it overrides the 
built-in upper() and lower() functions. If we were to add case-folding 
to ICU collations, we would have to use a different name.


It does look like nunicode would solve the OP's problem nicely though.

Dan.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Opening and closing an SQLITE3 db in PHP

2014-03-19 Thread Dan Kennedy

On 03/19/2014 06:49 AM, Tim Streater wrote:

Part of my app will, at user request, read some data from an SQLite db and also 
some files from disk, and send it all out on the network. This may in some 
cases take several minutes, at the end of which the db gets updated here and 
there. While this is happening, the user may wish to do another action that 
will require read/write access to this db. Typically this will be another 
process.

As it happens I'd not tried exactly the above until yesterday, at which point I got 
"database locked" after my timeout period expired. Although I'm careful to 
finalize() and close() as appropriate, the way things are structured at present the 
database is left open while the slow processing (writing to the network) takes place.

I can rejig this without too much trouble, but for various reasons it's not just a simple 
case of "closing the db earlier", and so I'd like to be sure I understand how 
things work before starting. I'm using PHP's sqlite3 class rather than PDO, and I've done 
nothing explicit about the journal mode.

My questions are:

1) I start with $dbh = new sqlite3 ($dbname); Am I right in thinking that this 
does not explicitly open the db (and hence acquire a lock), but that the db is 
opened and the default lock (SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE) is 
acquired only when the first query is done?


Calling sqlite3_open() does open the database file. But it does not read 
any data from the db (not even enough to check that the file really is a 
database file). So no lock is required.


Locks are only held while transactions are running. A transaction may be 
started and ended explicitly using "BEGIN" and "COMMIT" or "ROLLBACK". 
Or, if you don't start one explicitly, SQLite automatically opens and 
closes a separate transaction for each statement executed.
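
For example (hypothetical table; the write lock is acquired at the first
write and released when the transaction ends):

  BEGIN;
  UPDATE messages SET sent = 1 WHERE id = 99;
  COMMIT;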





2) If I want to use a different flag such as SQLITE_OPEN_READONLY do I need to do an 
explicit open such as $dbh->open ($dbname, SQLITE_OPEN_READONLY); or is there a way to 
specify the flag(s) with "new sqlite3" ?


I don't think you need to do this.




3) Is the lock always released if I do $dbh->close(); ?


Yes, assuming the sqlite3_close() call succeeds. But all locks are also 
released when the current transaction is closed, which might be more 
convenient than closing the entire database.





4) Would it just be simpler to switch to WAL mode?


There's a good chance it will be, yes.
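
For example:

  sqlite> PRAGMA journal_mode=WAL;
  wal

The setting is persistent, so it only needs to be issued once per database
file.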

Dan.




___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Using ICU case folding support

2014-03-18 Thread Dan Kennedy

On 03/19/2014 01:22 AM, Alex Loukissas wrote:

Hello,

I'm trying to use the ICU extension and it looks like what I want to do is
not very clear from the documentation. I would like to switch from my
current usage, where my column declaration is as follows:

CREATE TABLE demo("name text not null COLLATE NOCASE");

As noted in the documentation, this is not Unicode-aware, which is why I
turned to the ICU extension. What should be the equivalent usage where
sqlite does case folding using ICU?


I don't think there is a way to do that with the current SQLite ICU 
extension.


Dan.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Error "Database or disk if full" shown when indexing for 2 billion records.

2014-03-17 Thread Dan Kennedy

On 03/17/2014 08:32 PM, Yi Wang wrote:

I inserted 2 billion records into a table with only 1 column, named "str_md5";
the value is the MD5 hash of "mm"+rowid (such as MD5('mm121212')...).

I did not create a primary key on the only column because I am not sure whether
it would slow down the INSERTs. The whole insert of 2 billion records took me
over 18 hours.

But when I try to index the column with the SQL command

"create index tableMD5_idx on tableMD5(str_md5);"

The index command ran for around 1 hour and then the error "database or
disk is full" showed up.

FYI: the SQLite file contains only that 1 table with 1 column. The DB file
takes around 87G of disk space.

And there is still 50G of free disk space. So I am not sure whether it is
really a space problem, because in my previous experience the index command
increases the DB file size by around 1/3 (I tested a DB with one hundred
million records, which took about 7G of space; after the index command the
final size reached around 9G).

So any ideas? Thanks in advance.


In order to create a large index, as well as the N bytes of space 
required to store the index itself in the database, SQLite requires 
roughly 2N bytes of free space wherever temp files are created on your 
system to work with. It uses this space as temporary storage to sort the 
index entries.
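
If the filesystem holding the temp files is the one running short, one
option is to point SQLite's temporary files at a roomier location - on
unix via the SQLITE_TMPDIR environment variable, or (a sketch, using the
deprecated but still available pragma):

  sqlite> PRAGMA temp_store_directory = '/path/with/more/space';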


Dan.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] very slow fdsync() calls

2014-03-11 Thread Dan Kennedy

On 03/11/2014 07:01 AM, Jono Poff wrote:

Hi,

I have an application that uses sqlite3.

Investigating a problem with the app stalling occasionally I found 
that (every hour or two) an fdsync() system call from sqlite3_step() 
was taking over 3 seconds to return.


On closer investigation, the file descriptors in these calls point to 
the directory in which the database resides, rather than the database 
or journal file itself, and are opened with the O_LARGEFILE flag and no 
O_RDWR, and transfer zero bytes.


Presumably this is updating metadata, but I understood fdsync() 
doesn't touch metadata.


Can anybody suggest a way to prevent sqlite making these fdsync() 
calls?  The database updates aren't high volume (maybe 20/sec max) but 
the app needs to be responsive.  I traced mmap() calls in my process 
and found none relating  to the sqlite db.  I thought maybe I could 
force sqlite to not cache writes this way.  It seems that using pragma 
to disable caching  in sqlite simply exposes the database to 
corruption rather than forcing synchronous writing to disk.  Is this 
correct?


I'm using Solaris 10 with zfs and I may be able to disable write 
caching on the entire disk, but would prefer not to!  Any ideas 
appreciated.


It's probably syncing the directory to make sure that the entry
corresponding to a journal file has made it to disk before it
begins updating the database file. Otherwise, if a power failure
were to occur, following recovery SQLite might find that although
the contents of the journal file have been synced to disk, they
cannot be located as the directory entry has been lost. Leading
to corruption.

Using "PRAGMA journal_mode=WAL" might prevent SQLite from syncing
the directory as often.

Or "PRAGMA locking_mode=EXCLUSIVE; PRAGMA journal_mode=PERSIST;",
if there will only ever be a single connection to the database.

Or you could build with SQLITE_DISABLE_DIRSYNC, which omits all
syncs on directories. Of course, that will slightly increase the
chances of a power failure causing database corruption.

Dan.




___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Fwd: Exception when querying a range index

2014-03-11 Thread Dan Kennedy

On 03/11/2014 04:17 AM, St. B. wrote:

Hi,


On Mon, Mar 10, 2014 at 6:37 PM, Dan Kennedy <danielk1...@gmail.com> wrote:


On 03/10/2014 06:49 PM, St. B. wrote:


Hi,

I posted this question last year, but did not get any feed back. Is this
something that can be handled on the ML, or should I ask in another place?

Regards,

-- Forwarded message --
From: St. B. <sbart...@gmail.com>
Date: Sun, Jan 6, 2013 at 11:45 PM
Subject: Exception when querying a range index
To: sqlite-users@sqlite.org


Hi All,

I have an application using sqlite to store coast lines (application runs
on windows and uses the .Net version of sqlite v1.0.83). I use range
indexes to look-up coast line within range of a given point.

During run time I find that the following exception occurs.

vtable constructor failed: MapLevel_Idx3


How is virtual table MapLevel_Idx3 created? Is it an rtree table?


The virtual table  is created with the following sql statement:

CREATE VIRTUAL TABLE MapLevel_Idx4 USING rtree(
id,  -- Integer primary key
minX, maxX,  -- Minimum and maximum X coordinate
minY, maxY   -- Minimum and maximum Y coordinate
);

Yes it is an rtree table


SQLite version?


I am using System.Data.SQLite v1.0.84.0. I guess it includes SQLite
v3.7.6.3 (but I am not sure at all).


If it's possible, you should upgrade to a newer version of SDS. 3.7.6.3
is almost three years old now.

The "vtable constructor failed" message tells us that an error occurred
while trying to initialize the virtual table instance, but it doesn't
tell us any more than that. However, the error messages returned by the
rtree module have improved a lot since 3.7.6, so I fancy that if you
upgrade the new version will spit out a more specific complaint.

For example, I just tested the effects of encountering an SQLITE_BUSY
while initializing the rtree instance. With 3.8.4 the error message is
"database is locked". But with 3.7.6 all you get is "vtable constructor
failed".

There is a good chance that this is what is happening in your environment
as well - the db is locked and the error message is deceptive. In which
case you should just retry the operation or add a busy-handler to the
database connection. But upgrading first is the safer option.
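
In newer versions (3.7.15 and later) a busy-handler can also be configured
from SQL - a sketch:

  sqlite> PRAGMA busy_timeout = 5000;   -- wait up to 5000 ms before returning SQLITE_BUSY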

Dan.







___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Fwd: Exception when querying a range index

2014-03-10 Thread Dan Kennedy

On 03/10/2014 06:49 PM, St. B. wrote:

Hi,

I posted this question last year, but did not get any feed back. Is this
something that can be handled on the ML, or should I ask in another place?

Regards,

-- Forwarded message --
From: St. B. <sbart...@gmail.com>
Date: Sun, Jan 6, 2013 at 11:45 PM
Subject: Exception when querying a range index
To: sqlite-users@sqlite.org


Hi All,

I have an application using sqlite to store coast lines (application runs
on windows and uses the .Net version of sqlite v1.0.83). I use range
indexes to look up coast lines within range of a given point.

During run time I find that the following exception occurs.

vtable constructor failed: MapLevel_Idx3


How is virtual table MapLevel_Idx3 created? Is it an rtree table?

SQLite version?

Dan.







Occurrence ratio is about 1 in 1 queries on the range index. Am I doing
something wrong, or is this supposed to be so? When the exception occurs, I
just drop the resultset and assume that no data was to be found. Should I
restart the query?

Regards,

--

S. Barthes
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Out of memory error for SELECT char();

2014-03-08 Thread Dan Kennedy

On 03/09/2014 01:25 AM, Zsbán Ambrus wrote:

In the sqlite3 console, the following very simple statement gives
"Error: out of memory":

SELECT char();

I think this is a bug.


It is. Thanks for the report. Now fixed here:

  http://www.sqlite.org/src/info/ba39df9d4f

Dan.




   This query should need very little memory, so
it should not give such an error.  I believe it should return a single
row with a single value of an empty string.

I've reproduced this in both a few days old preview
sqlite-amalgamation-201403051440 and the stable sqlite3 3.8.1.

Ambrus
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQL quine using with

2014-03-08 Thread Dan Kennedy

On 03/08/2014 08:53 PM, Kees Nuyt wrote:

Someone called zzo38 posted a quine (self-replicating program)
on Internet Relay Chat in network: Freenode, channel: #sqlite

[2014-03-08 11:01:59] < zzo38> I made a quine program in SQL.
[2014-03-08 11:02:10] < zzo38>

with q(q) as
(select 'with q(q) as (select ''#'') select
replace(q,x''23'',replace(q,'''''''','''''''''''')) from q;')
select replace(q,x'23',replace(q,'''','''''')) from q;

[2014-03-08 11:02:52] < zzo38> Do you like quine program in SQL?
[2014-03-08 11:03:06] < zzo38> Maybe do you have a better (shorter) one?

Note:
SQL preferably written as a oneliner
  
References:

http://en.wikipedia.org/wiki/Quine_(computing)
http://sqlite.org/lang_with.html

Enjoy!




SELECT REPLACE(q, 8-8, quote(q)) FROM (SELECT 'SELECT REPLACE(q, 8-8, 
quote(q)) FROM (SELECT 0 AS q);' AS q);

(The 8-8 evaluates to 0 and is coerced to the text '0', while quote(q)
renders q as an SQL string literal; substituting that literal for the 0
in the template reproduces the statement itself.)




___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Weird out of memory problem at prepare

2014-03-05 Thread Dan Kennedy

On 03/05/2014 03:05 AM, Eduardo Morras wrote:

On Tue, 4 Mar 2014 15:19:24 +
Simon Slavin <slav...@bigfraud.org> wrote:


On 4 Mar 2014, at 3:15pm, Simon Slavin <slav...@bigfraud.org> wrote:


On 4 Mar 2014, at 3:09pm, Eduardo Morras <emorr...@yahoo.es> wrote:


zSql= "SELECT r.name, s.content FROM resource AS r, static AS s
WHERE (r.ids = s.ids AND r.name = ?);";

[snip]

Error on query: out of memory

I think this might require comparing every row in resource with
every row in static.  Which is a lot of temporary data to hold in
memory.

You might try something like

SELECT r.name, s.content FROM resource AS r JOIN static AS s ON
s.ids = r.ids  WHERE r.name = ?

and, of course, an index

CREATE INDEX i1 on resource (name,ids)

will make it run extremely quickly.

I'll reply to both answers here, if you don't mind.

Thanks, Simon, for the answers.

The tables have 4 rows each; that's why I was surprised by the Out of 
Memory error. The biggest row is 12KB, and with the join I do it 
shouldn't use more than 200KB.

Changing the ',' to the join you propose gives Out of Memory too. It 
happens in the prepare phase, before binding the ? to my data. The query 
never reaches the step call.

Trying a simple "SELECT r.name FROM resource AS r WHERE r.name = ?" gets 
"Out of memory" too when calling sqlite3_prepare_v2().


Does the "out of memory" error come from sqlite3_prepare_v2()
directly or from sqlite3_errmsg()? Is the sqlite3* pointer
a NULL?
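
For what it's worth, a minimal sketch of the check being suggested (the
file name is illustrative):

  sqlite3 *db = 0;
  sqlite3_stmt *pStmt = 0;
  int rc = sqlite3_open("test.db", &db);

  /* If db is NULL here, sqlite3_errmsg() reports "out of memory"
  ** regardless of what actually went wrong. */
  if( rc!=SQLITE_OK || db==0 ){
    fprintf(stderr, "open: %s\n", db ? sqlite3_errmsg(db) : "NULL db handle");
  }else{
    rc = sqlite3_prepare_v2(db,
        "SELECT r.name FROM resource AS r WHERE r.name = ?", -1, &pStmt, 0);
    if( rc!=SQLITE_OK ){
      fprintf(stderr, "prepare: %s\n", sqlite3_errmsg(db));
    }
  }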

Dan.



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Unnecessary implicit conversion (may lead to a bug)

2014-02-24 Thread Dan Kennedy

On 02/24/2014 05:54 AM, skywind mailing lists wrote:

In

afpUnlock(sqlite3_file *, int)

the sharedLockByte is defined as an int (int sharedLockByte = 
SHARED_FIRST+pInode->sharedByte;) although all other related variables and the 
following function parameters are defined as unsigned long longs. At least 
theoretically this can lead to information loss and wrong sharedLockByte values. 
Whether this is also the case in practice, I do not know.


I think it's Ok. (pInode->sharedByte) is always between 0 and 
SHARED_SIZE, which is defined as 510. And SHARED_FIRST is (2^30)+2, so 
the sum stays well below the 2^31-1 limit of a signed 32-bit int. As you 
can't change either of these values without creating an incompatible 
file-format or a broken build, it's probably Ok. Does look a bit odd though.


Dan.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to Troubleshoot Disk I/O Error

2014-02-20 Thread Dan Kennedy

On 02/20/2014 09:29 PM, Akintoye Olorode (BLOOMBERG/ 731 LEXIN) wrote:

Our client got the following print-out. The first hint of failure is the line "TRUNCATE 
48 32768 failed". Any insights into what the root cause might be?


The operation that is failing is (probably) an attempt to
use SetEndOfFile() to extend the size of the *-shm file to
32KB (from 0KB). I have no idea why it is failing.

What version of SQLite is this? Newer versions provide more
detail for debugging, particularly if you also turn on
sqlite3_log() logging:

  http://sqlite.org/c3ref/log.html
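
A minimal sketch of turning the log on (the callback name is an
assumption; the sqlite3_config() call must be made before SQLite is
initialized):

  static void xLog(void *pArg, int iErrCode, const char *zMsg){
    fprintf(stderr, "SQLITE_LOG: (%d) %s\n", iErrCode, zMsg);
  }

  /* Before any other sqlite3_* calls: */
  sqlite3_config(SQLITE_CONFIG_LOG, xLog, (void*)0);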

Dan.




OPEN 40 D:\blp\wintrv\smartclient\applications\appinfo.db 0xc000 ok
READ 40 lock=0
LOCK 40 1 was 0(0)
READ 40 lock=1
OPEN 44 D:\blp\wintrv\smartclient\applications\appinfo.db-wal 0xc000 ok
OPEN 48 D:\blp\wintrv\smartclient\applications\appinfo.db-shm 0xc000 ok
SHM-LOCK 48 ok LockFileEx 0x
TRUNCATE 48 0
TRUNCATE 48 0 ok
SHM-LOCK 48 ok UnlockFileEx 0x
SHM-LOCK 48 ok LockFileEx 0x
SHM-LOCK 48 ok LockFileEx 0x
SHM-LOCK shmid-0, pid-7308 got 000,001 ok
TRUNCATE 48 32768
TRUNCATE 48 32768 failed
SHM-LOCK 48 ok UnlockFileEx 0x
SHM-LOCK shmid-0, pid-7308 got 000,000 ok
Error occurred-> SQLITE_IOERR[10]: disk I/O error
LOCK 40 4 was 1(0)
unreadlock = 1
TRUNCATE 48 32768
TRUNCATE 48 32768 failed
CLOSE 48
CLOSE 48 ok
CLOSE 44
CLOSE 44 ok
UNLOCK 40 to 0 was 4(0)
CLOSE 40
CLOSE 40 ok
OPEN 44 D:\blp\wintrv\smartclient\applications\appinfo.backup.db 0xc000 ok
READ 44 lock=0
LOCK 44 1 was 0(0)
DELETE "D:\blp\wintrv\smartclient\applications\appinfo.backup.db-wal" failed
UNLOCK 44 to 0 was 1(0)
LOCK 44 1 was 0(0)
DELETE "D:\blp\wintrv\smartclient\applications\appinfo.backup.db-wal" failed
UNLOCK 44 to 0 was 1(0)
UNLOCK 44 to 0 was 0(0)
CLOSE 44
CLOSE 44 ok

Thanks,

Akintoye


- Original Message -
From: Akintoye Olorode (BLOOMBERG/ 731 LEXIN)
To: sqlite-users@sqlite.org
At: Jan 30 2014 10:52:17


Thanks, will try this suggestions and post results.

- Akintoye

- Original Message -
From: sqlite-users@sqlite.org
To: sqlite-users@sqlite.org
At: Jan 30 2014 10:44:45


On 01/30/2014 10:19 PM, Akintoye Olorode (BLOOMBERG/ 731 LEXIN) wrote:

Hi,

We have one client that recently upgraded from WinXP SP3 to Win7 SP1 (Japanese 
language). With the new OS, attempts to read from our SQLite database fail 
with a disk I/O error.

Can anyone help with suggestions on how to troubleshoot this disk I/O error? 
Adding a call to sqlite3_config(SQLITE_CONFIG_LOG, ...) produces no additional 
information.

We have not been able to reproduce the error in-house. The user has an HP ENVY 
laptop with a hybrid drive. The user has tried:

1. Install latest file system drivers
2. reformat the hard-drive & reinstall Win7 OS


all to no avail. Client does not report problems with any other applications.

Immediately after the error occurs, what value is returned by
the sqlite3_extended_errcode() function? Often the extended error
code identifies the specific IO operation that failed, which can
shed light on the problem.

Another thing to do is to try compiling with the following defines:

-DSQLITE_DEBUG=1 -DSQLITE_DEBUG_OS_TRACE=1

This causes the Windows VFS module to print various messages to
standard output that should help to figure out what is happening.

Dan.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] help needed for major SQLite problem

2014-02-07 Thread Dan Kennedy

On 02/08/2014 03:00 AM, C M wrote:

This is a follow-up to a question I asked on this list on Sep 1st, 2013,
about an error that I was randomly getting with a disk-based SQLite database
in a Python desktop application.  I now have more info to provide about the
error...such as what was asked for at that time:

On Sun, Sep 1, 2013 at 6:12 PM, Richard Hipp <d...@sqlite.org> wrote:


Does Python have an interface to the error and warning log mechanism of
SQLite?  (http://www.sqlite.org/errlog.html)  Can you turn that on?  It
will probably give more details about what is happening.


I wasn't able to do this at first, but thanks to switching from the
standard sqlite3 module in Python (otherwise known as pysqlite) to Roger
Binns's APSW module, and then also using an experimental module,
apswdbapi2, from Edzard Pasma (thank you both), I was able to set things up
to return the warning log...I think.  Today, after not seeing the error in
a very long time, I hit the error, and this was printed to sys.stdout:

SQLITE_LOG: delayed 1375ms for lock/sharing conflict (10) SQLITE_IOERR

SQLITE_LOG: os_win.c:35129: (5) winAccess(C:\Documents and Settings\user\My
Documents\My Dropbox\myapp\gorp.db-journal) - Access is denied. (3338)
SQLITE_IOERR

SQLITE_LOG: statement aborts at 16: [SELECT resumes, start FROM Durations
WHERE start='2014-02-07 14:24:14.064000' AND value='activity'] disk I/O
error (3338) SQLITE_IOERR

Does that give anyone a better idea of what could be happening and how I
can fix this problem?


Looks like GetFileAttributesEx() might be failing with ERROR_ACCESS_DENIED.
Maybe a virus scanner or some other background process had temporarily
locked the database file.

Dan.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Virtual Table xRowid

2014-02-05 Thread Dan Kennedy

On 02/06/2014 01:22 AM, Kevin Martin wrote:

Hi,

My questions are basically:

1) What causes sqlite3 to call xRowid
2) If I don't want to update my virtual table, or do a select ROWID, can I just 
use an incremental counter, increased on every call to xNext (bearing in mind 
the order of my data is not guaranteed, so this won't necessarily return the 
same rowid for the same row on separate calls to xFilter), or even better just 
return SQLITE_ERROR?


One more case. If you have a query like this:

  SELECT ... FROM vtable WHERE vtable.a = ? OR vtable.b = ?;

and the implementation of vtable can handle queries for both
(vtable.a=?) and (vtable.b=?) efficiently, then SQLite might
make two separate sets of calls to xFilter/xNext - one to obtain
the set of rows for (a=?) and a second to obtain the set that
match (b=?). It then merges the two sets together using the
rowids as a unique key. If the rowids in the two sets are not
consistent, then you might end up returning duplicate rows to
the user or omitting rows from the result.

But is it the case that your virtual table doesn't handle
anything other than full scans of the entire table contents? If
so, it's probably not a problem if rowids are inconsistent.
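
If so, a minimal sketch of the counter approach (structure and field
names are assumptions):

  typedef struct my_cursor {
    sqlite3_vtab_cursor base;    /* base class - must be first */
    sqlite3_int64 iRow;          /* reset by xFilter, incremented by xNext */
  } my_cursor;

  static int myRowid(sqlite3_vtab_cursor *cur, sqlite3_int64 *pRowid){
    *pRowid = ((my_cursor*)cur)->iRow;
    return SQLITE_OK;
  }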

Dan.







The problem itself is detailed below.

Thanks,
Kevin

---

I am creating virtual table that unpacks the contents of another table/view.

As a simple example, consider the following:

create table t(week, performance_csv)

Where each row contains a csv file for the performance of the given week, and 
each csv file has two columns, person and percentage.

I want to create a virtual table as follows:

create virtual table t2 using unpack_performance(t)

This table will declare the table as follows

create table t2(week, person, percentage)

and it will iterate over each row of each csv in the table t.

Implementing xRowid is easy because I can base it off the ROWID of t and the 
number of lines in the csv file.

However, the problem comes when t is a view; I've only just discovered that 
views don't have ROWIDs!

I can quite happily just increment a simple counter, but the view isn't 
necessarily ordered, so the rowid won't match on multiple filters.

In my actual application the view is the union of a few selects, so actually 
including the rowid in the view is going to be a complete pain, and is not an 
option I want to consider.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Understanding transactions

2014-02-05 Thread Dan Kennedy

On 02/04/2014 10:12 PM, Igor Tandetnik wrote:

On 2/4/2014 5:23 AM, Yuriy Kaminskiy wrote:

How is sqlite supposed to behave when
*) there is a read-only transaction;
*) there is an update transaction on another connection;
*) cache space is exhausted by the update transaction;
*) sqlite was not able to upgrade a RESERVED lock to EXCLUSIVE due to the 
presence of a SHARED lock, even after waiting for busy_timeout?


SQLite should return SQLITE_BUSY error for the statement that 
triggered the cache spill. From the documentation of sqlite3_step:


SQLITE_BUSY means that the database engine was unable to acquire the 
database locks it needs to do its job. If the statement is a COMMIT or 
occurs outside of an explicit transaction, then you can retry the 
statement. If the statement is not a COMMIT and occurs within an 
explicit transaction then you should rollback the transaction before 
continuing.


This second part is precisely for the case where a statement other 
than a COMMIT wants to acquire EXCLUSIVE lock due to cache overflow, 
and fails. If I recall previous discussions of this topic correctly, 
SQLite cannot guarantee that the in-memory cache is in a good state at 
this point, and so the only safe course of action is to discard it by 
rolling back.


It can either fail the update statement (but I don't see that in the test 
below: it sleeps for busy_timeout, but *does not* return an error), or 
keep on storing data in memory [it *does not* write anything] (thus using 
more than the specified cache size? [and eventually overflowing virtual 
memory?]), or what?


It's supposed to return an error. There might be some flaw in your 
test; I myself don't speak Perl (and am too lazy to conduct a test of 
my own).


The docs are correct, but they describe a situation that can only occur 
with ancient versions of the library. So I guess they should be updated 
to be a little less alarming.


Long, long ago there existed versions of SQLite that might return 
SQLITE_BUSY in the middle of a transaction for exactly the reason Igor 
cites - SQLite would try to upgrade from a RESERVED to an EXCLUSIVE lock 
so that it could as to free up memory by writing dirty pages out to the 
database file. If it failed to obtain the lock it would return 
SQLITE_BUSY to the user. Sometimes leaving the pager cache in an 
inconsistent (corrupt) state. This is a bug in those early versions - if 
you're using one and you do get into this situation, the only safe thing 
to do is to roll back the transaction.


For version 3.4.0 and later, SQLite was changed so that it returns 
SQLITE_IOERR (extended error code SQLITE_IOERR_BLOCKED) in this 
situation. And an SQLITE_IOERR automatically rolls the transaction back. 
So, no chance of corruption.


  http://www.sqlite.org/src/info/ce2c9925d0

Then, for 3.6.2, this was changed again, so that if SQLite fails to 
obtain the lock, it just allocates more memory and continues the 
transaction. Eventually, when the memory allocation fails, SQLITE_NOMEM 
is returned to the caller. And since SQLITE_NOMEM also automatically 
rolls back the transaction, no chance of corruption in this case either.


  http://www.sqlite.org/src/info/cb494e10d7

Does that match everyone's results?

Dan.










___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] possible bug 3.8.1 /3.8.3

2014-02-03 Thread Dan Kennedy

On 02/04/2014 12:25 AM, Adam Devita wrote:

Good day,

I'm debugging some code that uses 3.8.1, and I've tried just upgrading to
3.8.3, which didn't help. The observation is as follows.

This query:
SELECT id, data_blob FROM data WHERE timestamp = (SELECT MIN(timestamp)
FROM data WHERE stream_num = ?) LIMIT 1

seems to occasionally produce a wrong result (the content of data_blob is
incorrect given the values of stream_num)

yet this query
SELECT id, data_blob FROM data WHERE stream_num = ? order by timestamp asc
LIMIT 1

seems just fine, insofar as the same tests on the same data have not hit
any of the error condition / contradiction.

in both cases sqlite3_bind_int(pStmt, 1, (int)nStream); is used for
parameter ?


We are using an in-memory database as a smarter queue where timestamped data
gets inserted, and if the db size is sufficient (30 to 40 records) the
above query lets us pop the earliest timestamp (which is stored as int64
via sqlite3_bind_int64).

Is this a possible bug or am I missing something?  Using the backup API to
look at it from a file:
sqlite>.schema
CREATE TABLE data ( id INTEGER PRIMARY KEY, timestamp BIGINT NOT NULL,
stream_num TINYINT, source_seq_num TINYINT,
event_seq_num INT, data_address BIGINT NOT NULL, data_blob BLOB NOT NULL);

sqlite> SELECT id, data_blob FROM data WHERE stream_num = 2 order by
timestamp asc  LIMIT 1;
4|☺
sqlite> SELECT id, data_blob FROM data WHERE timestamp = (SELECT
MIN(timestamp) FROM data WHERE stream_num = 2) LIMIT 1;
3|☻
sqlite>


More than one entry in the table matches the "timestamp = (SELECT 
MIN ...)" condition? Note that the outer query has no stream_num 
constraint, so a row from a different stream with an equal timestamp can 
satisfy it. Does the id=3 row show up in the output if you remove the 
LIMIT clause from the second query?






___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Fine tuning table indexes

2014-02-02 Thread Dan Kennedy

On 02/02/2014 11:16 PM, James K. Lowden wrote:

On Sat, 1 Feb 2014 11:21:45 -0500
Stephen Chrzanowski  wrote:


Would be interesting to see when and where that single index comes
into play when multiple indexes are defined.

create table T (t int primary key, a int , b int);
create index Ta on T(a);
create index Tb on T(b);

select * from T where a < 1 or b < 1

Using one index, pick your poison.  You could use Ta or Tb, but for the
other half of the OR you'll have to scan the table.  So you might as
well scan the table and forget about the index.

Using two indexes, you can find the matching values in Ta and Tb, and
produce results from the union of the two.

For 10 rows, the two-index algorithm is likely slower.  For a million
rows, depending on the cardinality of the data, it could be
significantly faster.



SQLite does that sort of thing already.

  sqlite> create table T (t int primary key, a int , b int);
  sqlite> create index Ta on T(a);
  sqlite> create index Tb on T(b);
  sqlite> EXPLAIN QUERY PLAN select * from T where a < 1 or b < 1;
  0|0|0|SEARCH TABLE T USING INDEX Ta (a<?)
  0|0|0|SEARCH TABLE T USING INDEX Tb (b<?)


  http://www.sqlite.org/optoverview.html#or_opt





___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] wal_autocheckpoint

2014-01-31 Thread Dan Kennedy

On 02/01/2014 12:13 AM, veeresh kumar wrote:

Hi All,
  
Below are my SQLite settings. The current database size is ~8GB.

PRAGMA journal_mode = WAL
PRAGMA synchronous = NORMAL
PRAGMA wal_autocheckpoint = 500000
  
With the above checkpoint setting, the time taken to commit a
transaction to the actual database file after reaching a checkpoint is
2.5 to 3 minutes for an 8GB database. Is that an acceptable time by
SQLite standards?


Are your pages 1024 bytes? If so, I guess the checkpoint is writing
500MB of (possibly) non-contiguous data to the database file. Somewhere
in the neighbourhood of 3MB/second. Seems quite plausible.

  
My concern is: during these 2.5-3 minutes, will the application be able
to perform any "new write operation"? Or does the application have to
wait until the merging of data from the WAL file into the actual
database file is complete?


New writes are possible while a checkpoint is ongoing.

Although, if you are using synchronous=FULL, they might be
really slow, depending on the file-system. synchronous=NORMAL
or OFF should be Ok. Something to test if it matters to
you.
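
If the pause itself is a problem, checkpoints can also be run manually
from a separate thread or connection. A minimal sketch, assuming an
open handle db and omitting error handling:

  int nLog = 0, nCkpt = 0;

  /* PASSIVE mode copies as many frames as it can without blocking
  ** either readers or writers. */
  sqlite3_wal_checkpoint_v2(db, "main", SQLITE_CHECKPOINT_PASSIVE,
                            &nLog, &nCkpt);
  printf("wal size: %d frames, checkpointed: %d\n", nLog, nCkpt);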

Dan.




___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] too many SQL variables

2014-01-30 Thread Dan Kennedy

On 01/31/2014 01:01 AM, E. Timothy Uy wrote:

Just for my edification, what is the limit on the number of SQL parameters?
Today I hit "too many SQL variables" with about 1400...


999.

  http://www.sqlite.org/limits.html#max_variable_number
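
A minimal sketch of reading the limit at runtime, assuming an open
sqlite3* handle named db (a negative second argument leaves the limit
unchanged and just returns it). The compile-time ceiling can be raised
with -DSQLITE_MAX_VARIABLE_NUMBER=<N>.

  int mx = sqlite3_limit(db, SQLITE_LIMIT_VARIABLE_NUMBER, -1);
  printf("max host parameters on this connection: %d\n", mx);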

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to Troubleshoot Disk I/O Error

2014-01-30 Thread Dan Kennedy

On 01/30/2014 10:19 PM, Akintoye Olorode (BLOOMBERG/ 731 LEXIN) wrote:

Hi,

We have one client that recently upgraded from WinXP SP3 to Win7 sp1 (Japanese 
Language). With new OS, attempts to access read from our sqlite database fails 
with disk I/O error.

Can anyone help with  suggestions on how to troubleshoot disk I/O error ? 
Adding call to sqlite3_config(SQLITE_CONFIG_LOG, ...) produces no additional 
information.

We have not been able to reproduce the error in-house.The user has HP ENVY 
laptop with hybrid drive. User has tried :

1. Install latest file system drivers
2. reformat the hard-drive & reinstall Win7 OS


all to no avail. Client does not report problems with any other applications.


Immediately after the error occurs, what value is returned by
the sqlite3_extended_errcode() function? Often the extended error
code identifies the specific IO operation that failed, which can
shed light on the problem.
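
A minimal sketch, assuming rc holds the SQLITE_IOERR result of the
failing call on connection db:

  if( rc==SQLITE_IOERR ){
    /* Extended codes such as SQLITE_IOERR_ACCESS or SQLITE_IOERR_FSYNC
    ** identify the specific OS-level operation that failed. */
    int xrc = sqlite3_extended_errcode(db);
    fprintf(stderr, "extended error %d: %s\n", xrc, sqlite3_errmsg(db));
  }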

Another thing to do is to try compiling with the following defines:

  -DSQLITE_DEBUG=1 -DSQLITE_DEBUG_OS_TRACE=1

This causes the Windows VFS module to print various messages to
standard output that should help to figure out what is happening.

Dan.



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] "Common Table Expression"

2014-01-24 Thread Dan Kennedy

On 01/25/2014 01:00 AM, big stone wrote:

AND NOT EXISTS (
  SELECT 1 FROM ok AS lp
  WHERE ind=lp.c and  z.z = substr(s, n, 1)
)


s/ok/goods/ and s/lp.c/lp.r/

Dan.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLITE 3.8.3 beta: SQLITE_OMIT_CTE build doesn't work

2014-01-20 Thread Dan Kennedy

On 01/20/2014 08:36 PM, Jan Nijtmans wrote:

$ gcc -c -Wall -o sqlite3.o -DSQLITE_OMIT_CTE sqlite3.c
sqlite3.c: In function ‘yy_reduce’:
sqlite3.c:117782:3: warning: implicit declaration of function
‘sqlite3WithAdd’ [-Wimplicit-function-declaration]
yygotominor.yy59 = sqlite3WithAdd(pParse, 0, &yymsp[-5].minor.yy0,
yymsp[-4].minor.yy14, yymsp[-1].minor.yy3);
^
sqlite3.c:117782:20: warning: assignment makes pointer from integer
without a cast [enabled by default]
yygotominor.yy59 = sqlite3WithAdd(pParse, 0, &yymsp[-5].minor.yy0,
yymsp[-4].minor.yy14, yymsp[-1].minor.yy3);
 ^
sqlite3.c:117787:20: warning: assignment makes pointer from integer
without a cast [enabled by default]
yygotominor.yy59 = sqlite3WithAdd(pParse, yymsp[-7].minor.yy59,
&yymsp[-5].minor.yy0, yymsp[-4].minor.yy14, yymsp[-1].minor.yy3);
 ^



Looks like SQLITE_OMIT_CTE only works with full source builds, not
the amalgamation. As it causes code to be omitted from parse.y.

There are a few others this is true of as well - OMIT_VIRTUALTABLE,
OMIT_TRIGGER etc.

Dan.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Solving the Sudoku with SQLite 3.8.3 trunk

2014-01-18 Thread Dan Kennedy


Let me try again. Some spaces were missing from the input
in the parent post:


/* The input sudoku. */
WITH RECURSIVE input(sud) AS (
  VALUES(
'53..7....6..195....98....6.8...6...34..8.3..17...2...6.6....28....419..5....8..79'
  )
),

/* A table filled with digits 1..9, inclusive. */
digits(z, lp) AS (
  VALUES('1', 1)
  UNION ALL SELECT
  CAST(lp+1 AS TEXT), lp+1 FROM digits WHERE lp<9
),

/* The tricky bit. */
x(s, ind) AS (
  SELECT sud, instr(sud, '.') FROM input
  UNION ALL
  SELECT
  substr(s, 1, ind-1) || z || substr(s, ind+1),
  instr( substr(s, 1, ind-1) || z || substr(s, ind+1), '.' )
  FROM x, digits AS z
  WHERE ind>0
  AND NOT EXISTS (
SELECT 1 FROM digits AS lp
WHERE z.z = substr(s, ((ind-1)/9)*9 + lp, 1)
OR z.z = substr(s, ((ind-1)%9) + (lp-1)*9 + 1, 1)
OR z.z = substr(s, (((ind-1)/3) % 3) * 3
  + ((ind-1)/27) * 27 + lp
  + ((lp-1) / 3) * 6
  , 1)
  )
)

SELECT s FROM x WHERE ind=0;


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Solving the Sudoku with SQLite 3.8.3 trunk

2014-01-18 Thread Dan Kennedy


Beaten by minutes!


/* The input sudoku. */
WITH RECURSIVE input(sud) AS (
  VALUES(
'53  7    6  195    98    6 8   6   34  8 3  17   2   6 6    28    419  5    8  79'

  )
),

/* A table filled with digits 1..9, inclusive. */
digits(z, lp) AS (
  VALUES('1', 1)
  UNION ALL SELECT
  CAST(lp+1 AS TEXT), lp+1 FROM digits WHERE lp<9
),

/* The tricky bit. */
x(s, ind) AS (
  SELECT sud, instr(sud, ' ') FROM input
  UNION ALL
  SELECT
substr(s, 1, ind-1) || z || substr(s, ind+1),
instr( substr(s, 1, ind-1) || z || substr(s, ind+1), ' ' )
  FROM x, digits AS z
  WHERE ind>0
  AND NOT EXISTS (
SELECT 1 FROM digits AS lp
  WHERE z.z = substr(s, ((ind-1)/9)*9 + lp, 1)
 OR z.z = substr(s, ((ind-1)%9) + (lp-1)*9 + 1, 1)
 OR z.z = substr(s, (((ind-1)/3) % 3) * 3
  + ((ind-1)/27) * 27 + lp
  + ((lp-1) / 3) * 6
, 1)
  )
)

SELECT s FROM x WHERE ind=0;



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

