Re: [sqlite] speeding up an FTS5 search with a JOIN

2020-03-09 Thread Dan Kennedy


On 9/3/63 01:44, Puneet Kishor wrote:

Update: so, after much hitting of my head against the wall of SQL, I came up 
with the following – as noted above, I really have two distinct sets of queries 
I can do separately, like so

Q1: (SELECT t1Id FROM t1 WHERE …) AS a

Q2: (SELECT t1Id FROM vt1 WHERE vt1 MATCH 'bar') AS b

Then, I can do the following -

SELECT Count(*) FROM a WHERE a.t1Id IN b

Of course, in reality, I don't do this separately but all in one go, which makes 
for really messy SQL but a really fast query: a couple of hundred ms as opposed 
to more than 25 s.
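
Roughly, the combined form presumably looks something like this sketch (the extra t1 constraint comment is a stand-in for whatever the user selected; only the structure matters here):

```
SELECT Count(*)
FROM (
  SELECT t1Id FROM t1
  WHERE deleted = 0 /* ... any other t1 constraints ... */
) AS a
WHERE a.t1Id IN (SELECT t1Id FROM vt1 WHERE vt1 MATCH 'bar');
```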



I don't see why that would be any different from your join query. What 
does EXPLAIN QUERY PLAN say for the version that takes a few hundred ms?


Dan.





You might notice that in my Q2 above I MATCHed for 'bar' instead of 'foo'. That is 
because 'bar' returns fewer rows than 'foo' does. The problem remains when there 
are too many matches in the FTS query, in which case the FTS query itself is slow; 
for example, with 'foo', which matches more than 80K rows.

Now, one interesting point of comparison – the same kind of query (from the 
user's point of view) against an ElasticSearch instance (that is, all the rows 
with 'foo' anywhere in the text) is very fast, on the order of sub-100 ms. 
I realize it might be unfair comparing SQLite with ElasticSearch, but still. 
(Or is it a fair comparison?)



On Mar 7, 2020, at 8:59 AM, P Kishor  wrote:

I asked this question on Stack Overflow without much success, and got a suggestion 
to ask it on the list. So here I am. I have two tables, t1(id, t1Id, … other 
cols …, fullText), and an FTS5 virtual table vt1(t1Id, fullText).
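
For reference, a hypothetical sketch of that schema (the column types, the ix_t1_t1Id definition and the UNINDEXED flag are assumptions, not taken from the original message):

```
CREATE TABLE t1 (
  id       INTEGER PRIMARY KEY,
  t1Id     TEXT UNIQUE,   -- gives sqlite_autoindex_t1_1, used in the join plans below
  deleted  INTEGER,
  -- ... other cols ...
  fullText TEXT
);
CREATE INDEX ix_t1_t1Id ON t1(deleted, t1Id);  -- assumed shape; the plans show a covering index on (deleted)
CREATE VIRTUAL TABLE vt1 USING fts5(t1Id UNINDEXED, fullText);
```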

```
sqlite> EXPLAIN QUERY PLAN
   ...> SELECT Count(*) as num FROM t1 WHERE deleted = 0;

QUERY PLAN
`--SEARCH TABLE t1 USING COVERING INDEX ix_t1_t1Id (deleted=?)

sqlite> SELECT Count(*) as num FROM t1 WHERE deleted = 0;
308498

Run Time: real 0.043 user 0.023668 sys 0.009005
```


As can be seen above, the actual query takes ~43 ms.

```
sqlite> EXPLAIN QUERY PLAN
   ...> SELECT Count(*) as num FROM vt1 WHERE vt1 MATCH 'foo';

QUERY PLAN
`--SCAN TABLE vt1 VIRTUAL TABLE INDEX 131073:

sqlite> SELECT Count(*) as num FROM vt1 WHERE vt1 MATCH 'foo';
80789

Run Time: real 0.047 user 0.008021 sys 0.009640
```

The actual query, in this case, takes ~47ms. So far so good. But the problem 
occurs when I join the two tables

```
sqlite> EXPLAIN QUERY PLAN
   ...> SELECT Count(*) as num
   ...> FROM t1 JOIN vt1 ON t1.t1Id = vt1.t1Id
   ...> WHERE t1.deleted = 0 AND vt1 MATCH 'foo';

QUERY PLAN
|--SCAN TABLE vt1 VIRTUAL TABLE INDEX 0:m
`--SEARCH TABLE t1 USING INDEX sqlite_autoindex_t1_1 (t1Id=?)

sqlite> SELECT Count(*) as num
   ...> FROM t1 JOIN vt1 ON t1.t1Id = vt1.t1Id
   ...> WHERE t1.deleted = 0 AND vt1 MATCH 'foo';
80789

Run Time: real 26.218 user 1.396376 sys 5.413630
```

The answer is correct but the query takes more than 26 seconds! Of course, I 
would like to speed up this query by several orders of magnitude, but I would 
also like to understand why this join is causing the slowdown.

Now, the reason I have constructed a query like this is that users can add 
further constraints on the table t1. For example,

```
sqlite> SELECT Count(*) as num
   ...> FROM t1 JOIN vt1 ON t1.t1Id = vt1.t1Id
   ...> WHERE t1.deleted = 0 AND
   ...> t1.frob = 'bar' AND
   ...> t1.nob = 'baz' AND
   ...> vt1 MATCH 'foo';
```

Also, in every operation, for every given constraint, two queries are 
performed, one that returns the count and the other that returns the actual 
columns. And, finally, only a subset of the results are returned using LIMIT 
and OFFSET but *after* a sort ORDER has been prescribed. So, in reality, the 
last constraint above would result in the following

```
sqlite> SELECT Count(*) as num
   ...> FROM t1 JOIN vt1 ON t1.t1Id = vt1.t1Id
   ...> WHERE t1.deleted = 0 AND
   ...> t1.frob = 'bar' AND
   ...> t1.nob = 'baz' AND
   ...> vt1 MATCH 'foo';
20367

sqlite> SELECT t1.id, t1.t1Id, … other cols …,
   ...> snippet(vt1, 1, "", "", "", 50) AS context
   ...> FROM t1 JOIN vt1 ON t1.t1Id = vt1.t1Id
   ...> WHERE t1.deleted = 0 AND
   ...> t1.frob = 'bar' AND
   ...> t1.nob = 'baz' AND
   ...> vt1 MATCH 'foo'
   ...> ORDER BY 
   ...> LIMIT 30 OFFSET ;
```

When no t1 columns are prescribed in the constraint, the default count (shown 
above) and default cols are returned with the FTS search

```
sqlite> SELECT Count(*) as num
   ...> FROM t1 JOIN vt1 ON t1.t1Id = vt1.t1Id
   ...> WHERE t1.deleted = 0 AND
   ...> vt1 MATCH 'foo';
20367

sqlite> SELECT t1.id, t1.t1Id, … other cols …,
   ...> snippet(vt1, 1, "", "", "", 50) AS context
   ...> FROM t1 JOIN vt1 ON t1.t1Id = vt1.t1Id
   ...> WHERE t1.deleted = 0 AND
   ...> vt1 MATCH 'foo'
   ...> ORDER BY 
   ...> LIMIT 30 OFFSET 0;
```

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] speeding up an FTS5 search with a JOIN

2020-03-07 Thread Dan Kennedy


On 7/3/63 14:58, P Kishor wrote:

I asked this question on Stack Overflow without much success, and got a suggestion 
to ask it on the list. So here I am. I have two tables, t1(id, t1Id, … other 
cols …, fullText), and an FTS5 virtual table vt1(t1Id, fullText).

```
sqlite> EXPLAIN QUERY PLAN
...> SELECT Count(*) as num FROM t1 WHERE deleted = 0;

QUERY PLAN
`--SEARCH TABLE t1 USING COVERING INDEX ix_t1_t1Id (deleted=?)

sqlite> SELECT Count(*) as num FROM t1 WHERE deleted = 0;
308498

Run Time: real 0.043 user 0.023668 sys 0.009005
```


As can be seen above, the actual query takes ~43 ms.

```
sqlite> EXPLAIN QUERY PLAN
...> SELECT Count(*) as num FROM vt1 WHERE vt1 MATCH 'foo';

QUERY PLAN
`--SCAN TABLE vt1 VIRTUAL TABLE INDEX 131073:

sqlite> SELECT Count(*) as num FROM vt1 WHERE vt1 MATCH 'foo';
80789

Run Time: real 0.047 user 0.008021 sys 0.009640
```

The actual query, in this case, takes ~47ms. So far so good. But the problem 
occurs when I join the two tables

```
sqlite> EXPLAIN QUERY PLAN
...> SELECT Count(*) as num
...> FROM t1 JOIN vt1 ON t1.t1Id = vt1.t1Id
...> WHERE t1.deleted = 0 AND vt1 MATCH 'foo';

QUERY PLAN
|--SCAN TABLE vt1 VIRTUAL TABLE INDEX 0:m
`--SEARCH TABLE t1 USING INDEX sqlite_autoindex_t1_1 (t1Id=?)

sqlite> SELECT Count(*) as num
...> FROM t1 JOIN vt1 ON t1.t1Id = vt1.t1Id
...> WHERE t1.deleted = 0 AND vt1 MATCH 'foo';
80789



That does seem slow. Are there many rows in table "t1" with t1.deleted 
set to something other than 0? What does:


  SELECT count(*) AS num FROM t1 JOIN vt1 ON t1.t1Id=vt1.t1Id WHERE vt1 MATCH 'foo';


return?

Dan.






Run Time: real 26.218 user 1.396376 sys 5.413630
```

The answer is correct but the query takes more than 26 seconds! Of course, I 
would like to speed up this query by several orders of magnitude, but I would 
also like to understand why this join is causing the slowdown.

Now, the reason I have constructed a query like this is that users can add 
further constraints on the table t1. For example,

```
sqlite> SELECT Count(*) as num
...> FROM t1 JOIN vt1 ON t1.t1Id = vt1.t1Id
...> WHERE t1.deleted = 0 AND
...> t1.frob = 'bar' AND
...> t1.nob = 'baz' AND
...> vt1 MATCH 'foo';
```

Also, in every operation, for every given constraint, two queries are 
performed, one that returns the count and the other that returns the actual 
columns. And, finally, only a subset of the results are returned using LIMIT 
and OFFSET but *after* a sort ORDER has been prescribed. So, in reality, the 
last constraint above would result in the following

```
sqlite> SELECT Count(*) as num
...> FROM t1 JOIN vt1 ON t1.t1Id = vt1.t1Id
...> WHERE t1.deleted = 0 AND
...> t1.frob = 'bar' AND
...> t1.nob = 'baz' AND
...> vt1 MATCH 'foo';
20367

sqlite> SELECT t1.id, t1.t1Id, … other cols …,
...> snippet(vt1, 1, "", "", "", 50) AS context
...> FROM t1 JOIN vt1 ON t1.t1Id = vt1.t1Id
...> WHERE t1.deleted = 0 AND
...> t1.frob = 'bar' AND
...> t1.nob = 'baz' AND
...> vt1 MATCH 'foo'
...> ORDER BY 
...> LIMIT 30 OFFSET ;
```

When no t1 columns are prescribed in the constraint, the default count (shown 
above) and default cols are returned with the FTS search

```
sqlite> SELECT Count(*) as num
...> FROM t1 JOIN vt1 ON t1.t1Id = vt1.t1Id
...> WHERE t1.deleted = 0 AND
...> vt1 MATCH 'foo';
20367

sqlite> SELECT t1.id, t1.t1Id, … other cols …,
...> snippet(vt1, 1, "", "", "", 50) AS context
...> FROM t1 JOIN vt1 ON t1.t1Id = vt1.t1Id
...> WHERE t1.deleted = 0 AND
...> vt1 MATCH 'foo'
...> ORDER BY 
...> LIMIT 30 OFFSET 0;
```
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] Status of LSM1 extension

2020-03-05 Thread Dan Kennedy


On 5/3/63 16:11, Dominique Devienne wrote:

Hi,

I'm interested in LSM1 [1] as an alternative to SQLite, since in a particular
use-case I'm using SQLite mostly as a key-value store, and write performance is
particularly important, in addition to MVCC. It sounds like it could be an
excellent fit here, and the fact it comes from the SQLite team is something I
value.

That said, the only online docs for LSM1 ([2] and [3]) are from the defunct
SQLite4 web-site, and the main blog post is starting to look dated [4]. I thus
wonder about the level of quality and support of LSM1, and about the lack of
documentation for it on the main SQLite web-site.

In terms of practicality, there's also no amalgamation for LSM1. And the virtual
table over LSM1 data-files [5], something I was looking for, does not appear to
be documented anywhere, notably whether using that vtable via the familiar
SQLite API is advisable instead of the different and unfamiliar LSM1-specific
API.

I'm just looking for clarity and advice around LSM1, as well as commitments
regarding its level of quality, testing, and support, and whether we can hope
for more documentation and amalgamation deliverables in the future. It sounds
like a really nice piece of code, but the fact there's very little noise and
advertisement about it is somewhat worrying.



It's very much a solution looking for a problem at this point. It's not, 
as far as I know, used in anything that is too widely deployed.


That said, I don't think it's too bad of an implementation. The 
automated tests are reasonably good, although of course not as good as 
SQLite's. And the docs are stored in kind of a ridiculous place 
at the moment, but I think they're quite complete.


Not planning to develop this any further unless a big user emerges, 
which is not impossible. I do intend to fix any reported bugs though.


Dan.






Thanks, --DD

[1] https://www2.sqlite.org/src/dir?name=ext/lsm1
[2] https://sqlite.org/src4/doc/trunk/www/lsmusr.wiki
[3] https://sqlite.org/src4/doc/trunk/www/lsmapi.wiki
[4] https://charlesleifer.com/blog/lsm-key-value-storage-in-sqlite3/
[5] https://www2.sqlite.org/src/finfo?name=ext/lsm1/lsm_vtab.c
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] Issues with sqlite3session_attach(conn, NULL) w/ duplicate table in temp

2020-02-27 Thread Dan Kennedy


On 27/2/63 05:04, Adam Levy wrote:

Hi all

When I have a database connection with a temp table of the same name
as a table in main, I am getting what I feel is unexpected behavior
from the session extension.

Most succinctly, I start a session on a connection on the main db with
a table with an INTEGER PRIMARY KEY, attach it to all tables by
passing NULL, make a change such as a single INSERT in aforementioned
table, and then record a changeset. Then I invert this changeset, and
apply that inverse to the same connection, to reverse the changes just
made.

Normally this works flawlessly. However, if a table by the same name
exists in the temp database on that connection, I run into odd
behavior.

If the temp table has different rows than the main table, a very odd
changeset conflict occurs. The conflict type is SQLITE_CHANGESET_DATA,
but when I examine the changeset iter, the column values match the
conflicting values. The inverse of an INSERT is a DELETE, so this
conflict type indicates that one of the non-primary-key column values
doesn't match, but in every examination of the database and the
changeset I have performed, they do match. The primary key exists and
matches as well.

If the temp table has the same rows as the main table (what the
changeset expects), the changeset_apply does not raise a conflict, but
it has no effect on the main table.


Thanks for reporting this. As far as I can figure, it's a bug in 
sqlite3changeset_apply(), not sqlite3session_attach(). Now fixed here:


  https://sqlite.org/src/info/f71a13d072398c9f

If you get the chance, please check if this fixes your test script and 
let us know if it does not.



This can all be avoided by being explicit about the database when
calling sqlite3session_attach. For example, passing "*" or "main.*" or "main." avoids the
issue.


I think that's just because sqlite3session_attach() doesn't understand 
any of those shorthands. If you pass "*", it searches for a table named 
"*", not a table that matches the glob pattern "*".


Thanks,

Dan.






But it occurs if NULL is passed. This is not obvious and
arguably not expected given that the documentation about the Session
extension makes it seem like it should be already scoped to a single
database on a connection. After all, sqlite3session_create accepts the
database as an argument. Also, sqlite3_changeset_apply appears to be
specific to the "main" database. So it shouldn't care about whats in
the temp database at all.

 From all of the docs I have read on the Session extension this seems
like misbehavior at the level of sqlite3session_attach being passed
NULL for the zTab argument. At the minimum perhaps a note in the
documentation of sqlite3session_attach would be appropriate.

Below is a very simple example that can reproduce this behavior. It
has a number of lines commented out that describe how the bug can be
avoided or caused.

It's written in Go, but the library it uses is just a very thin
wrapper around the C interface. It is using the latest 3.31.1
amalgamation. I help maintain the SQLite library that it uses and can
confirm that everything being called here is a very thin wrapper
around the C interface. I could write a C example but it would take me
more work.

The library it calls to print the SQL of the changeset is a utility I
wrote that is just using a thin wrapper around the changeset_iter to
parse the changeset into human readable SQL. It of course is not aware
of database names so it doesn't print "main" but this SQL is just for
displaying to the user. The values returned by
sqlite3changeset_conflict are formatted into the comments of that
printed SQL.

The code can be more easily downloaded here:
https://github.com/AdamSLevy/sqlite-session-bug

// main.go

package main

import (
 "bytes"
 "fmt"
 "io"
 "log"

 "crawshaw.io/sqlite"
 "crawshaw.io/sqlite/sqlitex"
 "github.com/AdamSLevy/sqlitechangeset"
)

func run() error {
 conn, err := sqlite.OpenConn(":memory:", 0)
 if err != nil {
 return fmt.Errorf("sqlite open: %w", err)
 }
 defer conn.Close()

 fmt.Println("Creating tables...")
 err = sqlitex.ExecScript(conn, `
CREATE TABLE main.t(id INTEGER PRIMARY KEY, a,b,c);
CREATE TABLE temp.t(id INTEGER PRIMARY KEY, a,b,c); -- remove this line or rename the table to avoid the issue
`)
 if err != nil {
 return err
 }
 fmt.Println("Starting session on main...")
 sess, err := conn.CreateSession("main")
 if err != nil {
 return err
 }
 defer sess.Delete()
 // An empty string will pass NULL to sqlite3session_attach and allow
 // the bug.
 var attach string
 // Any of these prevent the bug.
 //attach = "*"
 //attach = "main.*"
 //attach = "main.t"
 fmt.Printf("Attaching to %s ...\n", attach)
 if err := sess.Attach(attach); err != nil {
 return err
 }

 fmt.Println("Inserting into main.t ...")
 commit := 

Re: [sqlite] Trigger name missing

2020-02-26 Thread Dan Kennedy


On 26/2/63 16:31, Jean-Luc Hainaut wrote:

Hi all,

It seems that SQLite (version 3.31.1) accepts a trigger declaration in 
which the name is missing. When fired, this trigger doesn't crash 
but exhibits strange behaviour. In particular, while the expression 
"new." in an "insert" trigger returns the correct value, the 
equivalent expression "select  from T where Id = new.Id" 
always returns null (column "Id" is the PK of table "T"). Similarly, 
"update T set  =   where Id = new.Id" 
(silently) fails.



A statement like the following creates a "BEFORE" trigger named "AFTER". 
Does that explain things?


  CREATE TRIGGER AFTER INSERT ON t1 BEGIN
    ...
  END;

I find I fall into this trap about once every 18 months...
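
Giving the trigger an explicit name avoids the trap; a minimal sketch:

  CREATE TRIGGER t1_after_insert AFTER INSERT ON t1 BEGIN
    SELECT 1;   -- real trigger body goes here
  END;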

Dan.





Not critical but annoying if you are, like me, absent minded when 
typing code!


Regards

Jean-Luc Hainaut

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] wrong number of arguments to function rank()

2020-02-06 Thread Dan Kennedy


On 6/2/63 19:00, Andrea Spacca wrote:

Hello,

I experienced seeing no results for searches in matrix-synapse backed by 
sqlite3 storage. Tracking down their source code I identified the 
query: 
https://github.com/matrix-org/synapse/blob/v1.9.1/synapse/storage/data_stores/main/search.py#L424


I then ran the query directly on a copy of the database, getting the 
error in the subject. I ran an integrity check on the FTS table and no error 
was reported.


To be sure, I started from a new database, created the virtual table and 
inserted data to run the query on. Same result:
sqlite> CREATE VIRTUAL TABLE event_search_test USING fts4 ( event_id, 
room_id, sender, key, value );
sqlite> INSERT INTO event_search_test 
VALUES('test','test','','test','a matching string');

sqlite> SELECT * FROM event_search_test WHERE value MATCH "string";
test|test||test|a matching string
sqlite> SELECT quote(matchinfo(event_search_test)) as rank, room_id, 
event_id, value FROM event_search_test WHERE value MATCH "string";
X'01000500010001000100'|test|test|a matching string
sqlite> SELECT rank(matchinfo(event_search_test)) as rank, room_id, 
event_id, value FROM event_search_test WHERE value MATCH "string";

Error: wrong number of arguments to function rank()


Looks like there should be one argument after the "matchinfo" for each 
column in the table. So:


  SELECT rank(matchinfo(event_search_test), 1.0, 1.0) AS rank, room_id...
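
For the five-column "event_search_test" table above, that would presumably mean five weights, e.g.:

  SELECT rank(matchinfo(event_search_test), 1.0, 1.0, 1.0, 1.0, 1.0) AS rank,
         room_id, event_id, value
  FROM event_search_test WHERE value MATCH "string";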

Dan.






I'm running sqlite3 3.27.2 on a Debian 10 machine.
I looked at 
https://github.com/sqlite/sqlite/blob/version-3.27.2/src/test_func.c#L828 
and I can see that the error is somehow related to the mismatch 
between the second unsigned 32-bit integer in the returned matchinfo 
blob and the nVal in the rankfunc param (please, correct me if I'm 
wrong).


I don't know exactly which nVal value rankfunc receives; I can see 
that the sqlite3_create_function call for the function passes -1 for a 
variable number of arguments.


I see that the nCol in my query is 5, so I expect nVal to be 
something like the number of columns in the FTS table plus 
the hidden one: in that case the values would match, but this is 
clearly not the case in my test.


A similar test failed as well:
sqlite> CREATE VIRTUAL TABLE another_test USING fts4 (a_field, 
another_field);

sqlite> INSERT INTO another_test VALUES('test', 'test');
sqlite> SELECT * FROM another_test WHERE a_field MATCH "test";
test|test
sqlite> SELECT quote(matchinfo(another_test)) as rank, a_field FROM 
another_test WHERE a_field MATCH "test";

X'01000200010001000100'|test
sqlite> SELECT rank(matchinfo(another_test)) as rank, a_field FROM 
another_test WHERE a_field MATCH "test";

Error: wrong number of arguments to function rank()


I cannot find any existing bug related to this problem, so I guess it is somehow 
related to my env.


Thanks,
Andrea Spacca


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] Virtual table OR constraint bug

2020-01-29 Thread Dan Kennedy


On 29/1/63 20:09, Lalit Maganti wrote:

Hi folks,

Just wanted to check up on this to see if this issue is something which is
being tracked and if there was a potential fix in the works?


Thanks for the excellent bug report and minimal reproduction case. Now 
fixed here:


  https://www.sqlite.org/src/info/dcb4838757ca49cf

None of us saw your post last month. We think it must have been filtered 
as spam by gmail. Sorry about that.


Dan.







Thanks,
Lalit

On Thu, 2 Jan 2020 at 15:13, Lalit Maganti  wrote:


Hi all,

I believe that I have found a bug in the virtual table bytecode generation
when OR constraints are present and argvIndex is set in xBestIndex but the
application does not actually filter fully.

The problem seems to be in not setting/unsetting the non-null flag
correctly (SQLITE_JUMPIFNULL) on the instruction which does the comparison;
this leads to NULLs slipping through the filter.

To fully repro the problem, please find attached a small C file which
shows the unexpected behaviour. By adding an EXPLAIN in front of the SELECT
query, you should be able to see the incorrect bytecode on the NE opcode.

If the attachment doesn't come through for some reason, I've also put the
same file up as a GitHub gist.

Thank you!

Regards,
Lalit


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] Implementing xLock/xUnlock VFS methods...

2020-01-08 Thread Dan Kennedy


On 8/1/63 22:41, J Decker wrote:

On Wed, Jan 8, 2020 at 7:10 AM Dan Kennedy  wrote:


On 8/1/63 20:29, J Decker wrote:

The documentation isn't very clear on what an xUnlock(
SQLITE_LOCK_NONE ) is intended to do.  Is it to unlock everything? Is it the
same as removing a shared lock?

That's right. xUnlock(fd, SQLITE_LOCK_NONE) should completely unlock the
file. xUnlock(fd, SQLITE_LOCK_SHARED) should fall back to a SHARED lock
from RESERVED/PENDING/EXCLUSIVE.

Thank you... the second thing there confuses me...

xLock(SHARED)  xLock(SHARED) xLock(PENDING)
xUnlock( SHARED) - clear pending, and add a shared for a total of 3?  Or does
xUnlock always remove (this file handle's) locks?


You don't have to count SHARED locks. A connection is either in SHARED 
lock state or it is not. For your sequence:


  xLock(SHARED); // Take SHARED lock
  xLock(SHARED); // No-op, we already have SHARED
  xLock(PENDING); // Upgrade to PENDING lock
  xUnlock(SHARED); // Drop back down to SHARED lock

then

  xUnlock(NONE); // Completely unlock file

If you can't obtain a requested lock, return SQLITE_BUSY. SQLite may 
retry or may abandon the operation, depending on the configured 
busy-handler or busy-timeout.


Dan.







I haven't finished implementing the first part to get a 'real' sequence...
Also, I suppose I should return SQLITE_BUSY instead of waiting on the lock
myself?  Expected return values aren't covered well either.

Dan.




The first few operations are xLock( SQLITE_LOCK_SHARED ) followed by
xUnlock(SQLITE_LOCK_NONE)...

sqlite.h.in
https://github.com/mackyle/sqlite/blob/master/src/sqlite.h.in#L627
where the constants are defined... says 'one of these are used...'

and describing locking levels...
https://github.com/mackyle/sqlite/blob/master/src/sqlite.h.in#L708-L720


https://www.sqlite.org/c3ref/io_methods.html
xLock() increases the lock. xUnlock() decreases the lock. The
xCheckReservedLock() method checks whether any database connection,

either

in this process or in some other process, is holding a RESERVED, PENDING,
or EXCLUSIVE lock on the file. It returns true if such a lock exists and
false otherwise.

https://www.sqlite.org/lockingv3.html
has description of what locking states are intended to be... there's
'UNLOCKED' but not LOCK_NONE

I did find another page that had lock state transitions, and information
about when the busy callback would be called... but that also didn't say
anything about the transition to unlock states...

There was a note around the C code in the pager unlocking using LOCK_NONE,
but I can't find that; it just says, see note above... and that didn't
really say what the intent was.

I searched the nabble mailing list archive for 'SQLITE_LOCK_NONE' and found
0 hits.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] Implementing xLock/xUnlock VFS methods...

2020-01-08 Thread Dan Kennedy


On 8/1/63 20:29, J Decker wrote:

The documentation isn't very clear on what an xUnlock(
SQLITE_LOCK_NONE ) is intended to do.  Is it to unlock everything? Is it the
same as removing a shared lock?


That's right. xUnlock(fd, SQLITE_LOCK_NONE) should completely unlock the 
file. xUnlock(fd, SQLITE_LOCK_SHARED) should fall back to a SHARED lock 
from RESERVED/PENDING/EXCLUSIVE.


Dan.





The first few operations are xLock( SQLITE_LOCK_SHARED ) followed by
xUnlock(SQLITE_LOCK_NONE)...

sqlite.h.in
https://github.com/mackyle/sqlite/blob/master/src/sqlite.h.in#L627
where the constants are defined... says 'one of these are used...'

and describing locking levels...
https://github.com/mackyle/sqlite/blob/master/src/sqlite.h.in#L708-L720


https://www.sqlite.org/c3ref/io_methods.html
xLock() increases the lock. xUnlock() decreases the lock. The
xCheckReservedLock() method checks whether any database connection, either
in this process or in some other process, is holding a RESERVED, PENDING,
or EXCLUSIVE lock on the file. It returns true if such a lock exists and
false otherwise.

https://www.sqlite.org/lockingv3.html
has description of what locking states are intended to be... there's
'UNLOCKED' but not LOCK_NONE

I did find another page that had lock state transitions, and information
about when the busy callback would be called... but that also didn't say
anything about the transition to unlock states...

There was a note around the C code in the pager unlocking using LOCK_NONE,
but I can't find that; it just says, see note above... and that didn't
really say what the intent was.

I searched the nabble mailing list archive for 'SQLITE_LOCK_NONE' and found
0 hits.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] A hang in Sqlite

2020-01-05 Thread Dan Kennedy


On 6/1/63 13:44, Yongheng Chen wrote:

Hi,

We found a test case that hangs SQLite:
—
CREATE TEMPORARY TABLE v0 ( v1 INT UNIQUE ) ;
WITH RECURSIVE v0 ( v1 ) AS ( SELECT -128 UNION SELECT v1 + 33 FROM v0 ) SELECT 
'x' from v0;
—

This seems to trigger an infinite loop. However, since v0 is empty, we think it might 
not enter such a loop? We are not sure whether this is a bug.


Not a bug this time, I think.

The "v0" in the CTE refers to the CTE itself, not the empty temporary 
table. This makes it a recursive CTE, and a (very nearly) infinite loop:


  https://www.sqlite.org/lang_with.html
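
As a sketch, either renaming the CTE so it no longer shadows the temporary table, or bounding the recursion, makes the statement terminate (assumed rewrites, not from the original report):

  -- Renamed CTE: the inner SELECT now reads the (empty) temp table v0,
  -- so the result is a single row and the statement terminates.
  WITH RECURSIVE w ( v1 ) AS ( SELECT -128 UNION SELECT v1 + 33 FROM v0 )
  SELECT 'x' FROM w;

  -- Genuinely recursive, but bounded, so it also terminates.
  WITH RECURSIVE v0 ( v1 ) AS ( SELECT -128 UNION SELECT v1 + 33 FROM v0 WHERE v1 < 127 )
  SELECT 'x' FROM v0;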

Thanks for all the work you've been doing on SQLite!

Dan.






Yongheng & Rui
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] Heap Use After Free In sqlite.

2019-12-27 Thread Dan Kennedy


On 28/12/62 01:58, Yongheng Chen wrote:

Hi,

We found a heap UAF bug in sqlite. Here’s the PoC:

—
CREATE TABLE v0 ( v1 CHECK( CASE v1 WHEN '13' THEN 10 ELSE 10 END ) ) ; CREATE 
TRIGGER x INSERT ON v0 BEGIN INSERT INTO v0 ( v1 , v1 ) SELECT v1 , v1 FROM v0 
WHERE v1 < 10 ON CONFLICT DO NOTHING ; END ; INSERT INTO v0 SELECT * FROM v0 
WHERE v1 OR 0 ; CREATE VIEW v2 ( v3 ) AS WITH x1 AS ( SELECT * FROM v2 ) SELECT v3 
AS x , v3 AS y FROM v2 ; ALTER TABLE zipfile RENAME TO t3 ;
—

This bug exists in both the release code and the development code. It triggers a 
use-after-free under ASAN with the release code, and triggers an assert in the 
development code.


Thanks for this. Looks like the asan error was fixed here:

  https://www.sqlite.org/src/info/de6e6d6846d6a41c

The assert() failure is now fixed here:

  https://www.sqlite.org/src/info/d29edef93451cc67

Dan.




Yongheng & Rui
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] Sanitising user input for FTS5 MATCH parameter

2019-12-21 Thread Dan Kennedy


On 20/12/62 22:03, test user wrote:

Hello,

I have a search box on a website that uses FTS5/MATCH.

MATCH seems to take its own custom language for matching.

1. Is it safe to just pass the user's query to MATCH ? via the SQLite bind
FFI?


Users could specify a query that uses excessive resources. In 
particular, prefix searches for very common prefixes on large databases 
can use a lot of memory. I think it's otherwise safe though.



- This would give them full access to the FTS5 matching language.

2. If not, how should I be sanitising user input?

- E.g. How can I transform a string of words and text into a query? What
characters should I be removing or escaping? How can I prevent them using
the FTS5 keywords "AND" "OR" etc?


It really depends on what you want to allow, and how you want the query 
interpreted. If you want all input to be treated as a single phrase, 
enclose it in double-quotes, doubling any embedded " characters SQL 
style. Or, if you want the input treated as a list of terms separated 
by implicit AND, split the input on whitespace and then enclose each 
term in double-quotes. Details here:


  https://www.sqlite.org/fts5.html#full_text_query_syntax
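
For example, for the hypothetical user input  say "hello" world  ("docs" is a stand-in FTS5 table name; in practice the query text would be passed as a bound parameter rather than spliced into the SQL):

  -- treated as a single phrase: wrap in double-quotes, doubling embedded " characters
  SELECT * FROM docs WHERE docs MATCH '"say ""hello"" world"';

  -- treated as whitespace-separated terms joined by implicit AND
  SELECT * FROM docs WHERE docs MATCH '"say" "hello" "world"';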

Dan.






Thanks
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] What is the C language standard to which sqlite conforms ?

2019-11-24 Thread Dan Kennedy


On 24/11/62 06:18, Dennis Clarke wrote:

On 11/23/19 4:46 PM, Dan Kennedy wrote:




Some follow up and thank you all for looking at this.

Using this mornings trunk/current/head I do see the tests running well
 with these little exceptions :


boe13$ pwd
/opt/bw/build/sqlite_20191121213415_rhel_74_3.10.0-693.el7.x86_64.006

... build clean as usual :-)

tests run nicely now until ...


Can you run:

   ./testfixture test/journal3.test

and post the output?



It would be my pleasure to get some light tossed on this ... so here is
a very clean compile ( no -std in CFLAGS at all on gcc 9.2.0 ) and the
tests look like so :


This is a test script error. Should now be fixed here:

https://sqlite.org/src/info/b0b655625cf491c8

What version of Tcl are you using?

Thanks,

Dan.





.
.
.
Time: zipfile.test 442 ms
Time: zipfile2.test 45 ms
SQLite 2019-11-21 20:24:04 
ac080432b480062507452d3cdbe6c0f759e6f95b65d9862e0462017405ab2b8e
8 errors out of 250191 tests on boe13.genunix.com Linux 64-bit 
little-endian
!Failures on these tests: journal3-1.2.1.1 journal3-1.2.1.4 
journal3-1.2.2.1 journal3-1.2.2.4 journal3-1.2.3.1 journal3-1.2.3.4 
journal3-1.2.4.1 journal3-1.2.4.4

All memory allocations freed - no leaks
Maximum memory usage: 9267192 bytes
Current memory usage: 0 bytes
Number of malloc()  : -1 calls
gmake: *** [Makefile:1252: tcltest] Error 1
real 420.72
user 383.25
sys 23.47
boe13$

boe13$ ./testfixture test/journal3.test
journal3-1.1... Ok
journal3-1.2.1.1...
! journal3-1.2.1.1 expected: [0o644]
! journal3-1.2.1.1 got:  [00644]
journal3-1.2.1.2... Ok
journal3-1.2.1.3... Ok
journal3-1.2.1.4...
! journal3-1.2.1.4 expected: [0o644]
! journal3-1.2.1.4 got:  [00644]
journal3-1.2.1.5... Ok
journal3-1.2.2.1...
! journal3-1.2.2.1 expected: [0o666]
! journal3-1.2.2.1 got:  [00666]
journal3-1.2.2.2... Ok
journal3-1.2.2.3... Ok
journal3-1.2.2.4...
! journal3-1.2.2.4 expected: [0o666]
! journal3-1.2.2.4 got:  [00666]
journal3-1.2.2.5... Ok
journal3-1.2.3.1...
! journal3-1.2.3.1 expected: [0o600]
! journal3-1.2.3.1 got:  [00600]
journal3-1.2.3.2... Ok
journal3-1.2.3.3... Ok
journal3-1.2.3.4...
! journal3-1.2.3.4 expected: [0o600]
! journal3-1.2.3.4 got:  [00600]
journal3-1.2.3.5... Ok
journal3-1.2.4.1...
! journal3-1.2.4.1 expected: [0o755]
! journal3-1.2.4.1 got:  [00755]
journal3-1.2.4.2... Ok
journal3-1.2.4.3... Ok
journal3-1.2.4.4...
! journal3-1.2.4.4 expected: [0o755]
! journal3-1.2.4.4 got:  [00755]
journal3-1.2.4.5... Ok
SQLite 2019-11-21 20:24:04 
ac080432b480062507452d3cdbe6c0f759e6f95b65d9862e0462017405ab2b8e

8 errors out of 22 tests on boe13.genunix.com Linux 64-bit little-endian
!Failures on these tests: journal3-1.2.1.1 journal3-1.2.1.4 
journal3-1.2.2.1 journal3-1.2.2.4 journal3-1.2.3.1 journal3-1.2.3.4 
journal3-1.2.4.1 journal3-1.2.4.4

All memory allocations freed - no leaks
Memory used:  now  0  max 260520  max-size 12
Allocation count: now  0  max    167
Page-cache used:  now  0  max  0 max-size   1032
Page-cache overflow:  now  0  max   2064
Maximum memory usage: 260520 bytes
Current memory usage: 0 bytes
Number of malloc()  : -1 calls
boe13$

Let me know if there is anything else I can try here.


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Concurrency Question

2019-11-24 Thread Dan Kennedy


On 24/11/62 00:05, Jens Alfke wrote:

On Nov 23, 2019, at 7:17 AM, Dan Kennedy  wrote:

This should only happen if you are using shared-cache mode. Don't use 
shared-cache mode.

Shared-cache mode also breaks Isolation between connections — during a 
transaction, other connections will see the writer’s intermediate state. (IIRC. 
It’s been a few years.)



Only if you explicitly set "PRAGMA read_uncommitted" I think.
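
That is, isolation between shared-cache connections is only lost if the reading connection has done something like:

  PRAGMA read_uncommitted = 1;   -- per-connection; only meaningful in shared-cache mode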

Dan.




In my experience, it’s only useful if all connections are read-only, or if 
you’re willing to use your own mutexes to keep writers from screwing up readers 
(in which case you might as well just share a single connection, right?)

—Jens
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] What is the C language standard to which sqlite conforms ?

2019-11-23 Thread Dan Kennedy




Some follow up and thank you all for looking at this.

Using this morning's trunk/current/head I do see the tests running well
 with these little exceptions:



boe13$ pwd
/opt/bw/build/sqlite_20191121213415_rhel_74_3.10.0-693.el7.x86_64.006

... build clean as usual :-)

tests run nicely now until ...


Can you run:

  ./testfixture test/journal3.test

and post the output?

Thanks.





.
.
.
.
Time: zipfile2.test 44 ms
SQLite 2019-11-21 20:24:04 
ac080432b480062507452d3cdbe6c0f759e6f95b65d9862e0462017405ab2b8e
8 errors out of 250191 tests on boe13.genunix.com Linux 64-bit 
little-endian
!Failures on these tests: journal3-1.2.1.1 journal3-1.2.1.4 
journal3-1.2.2.1 journal3-1.2.2.4 journal3-1.2.3.1 journal3-1.2.3.4 
journal3-1.2.4.1 journal3-1.2.4.4

All memory allocations freed - no leaks
Maximum memory usage: 9267208 bytes
Current memory usage: 0 bytes
Number of malloc()  : -1 calls
gmake: *** [Makefile:1252: tcltest] Error 1
real 410.99
user 376.39
sys 20.77
boe13$

Should we be concerned or are these artifacts from trunk/head/current?



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Concurrency Question

2019-11-23 Thread Dan Kennedy


On 23/11/62 17:02, Mario M. Westphal wrote:

I have an issue with concurrency. This may be the intended behavior of
SQLite. Or I'm doing something wrong.

If somebody could shed some light, I would be thankful.

  


I compile and use SQLite on Windows with VStudio.

I compile it with SQLITE_THREADSAFE=1

At runtime, I use sqlite3_open_v2 () and set the flag SQLITE_OPEN_NOMUTEX.

I use SQLite in WAL mode.

My application uses several threads. Each thread opens its own database
connection.

  


Two threads run in parallel.

Thread A does a lengthy UPDATE to table_A (prepared statement). This update
takes, say, 5 seconds.

Thread B uses a prepared statement to SELECT data from an unrelated table_B.

Thread_B is blocked for seconds in sqlite3Step because lockBtreeMutex() blocks
on a mutex.

  


I did not expect this.

Why is thread_B blocked when doing a read just because SQLite is writing to
another table?

  


Is this the expected behavior, or am I doing something stupid in my code? And
if so, what should I check?


This should only happen if you are using shared-cache mode. Don't use 
shared-cache mode.


Dan.



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] What is the C language standard to which sqlite conforms ?

2019-11-21 Thread Dan Kennedy


On 22/11/62 00:06, Jens Alfke wrote:



On Nov 21, 2019, at 7:01 AM, Richard Hipp  wrote:

The memset() just forces the bug to the surface in builds where the
ckmalloc()/ckfree() routines of TCL are using caching that prevents
valgrind/ASAN from seeing the use-after-free.  The memset() is not
part of the bug fix itself, but is a preventative measure to try to
prevent similar bugs in the future.

This looks wrong to me:

   memset(p->pVfs, 0, sizeof(sqlite3_vfs));
   memset(p, 0, sizeof(Testvfs));
   ckfree((char *)p->pVfs);
   ckfree((char *)p);

The second line zeroes the Testvfs struct pointed to by p;
the third line reads the pVfs field of the struct, which is now NULL,
and then calls free() on that NULL pointer, which is a no-op.
The net result is to leak the heap block pointed to by p->pVfs.

Shouldn't the second and third lines be swapped?



They should indeed.

Thanks,

Dan.




—Jens
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] [Makefile:1256: tcltest] Segmentation fault (core dumped)

2019-11-19 Thread Dan Kennedy


On 20/11/62 11:47, Dennis Clarke wrote:



Still on Red Hat Enterprise Linux here and now using CFLAGS wherein
there is no "std" specified.



The crash below is in test code - it is almost certainly a problem with 
the test scripts. If you can send me the "test-out.txt" created by 
the [make tcltest] command, we should be able to figure out what is 
going on.


Thanks,

Dan.





I feel as if I am going in circles here; however, the codebase seems to
compile fine while the testsuite blows up in marvelous ways:

.
.
.
Time: walshared.test 26 ms
# WARNING: This next test takes around 12 seconds
gmake: *** [Makefile:1256: tcltest] Segmentation fault (core dumped)
real 368.93
user 327.66
sys 27.30
boe13$ find . -type f | grep -i 'core'
./testdir/core.8032
boe13$ file ./testdir/core.8032
./testdir/core.8032: ELF 64-bit LSB core file x86-64, version 1 
(SYSV), SVR4-style, from './testfixture 
/opt/bw/build/sqlite-src-3300100_rhel_74_3.10.0-693.el7.x86_64.00', 
real uid: 16411, effective uid: 16411, real gid: 20002, effective gid: 
20002, execfn: './testfixture', platform: 'x86_64'

boe13$
boe13$ find . -type f -name testfixture -ls
342992 5300 -rwxr-xr-x   1 dclarke  devl  5426184 Nov 20 04:35 
./testfixture

boe13$
boe13$ gdb ./testfixture ./testdir/core.8032
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-100.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 


This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show 
copying"

and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
...
Reading symbols from 
/opt/bw/build/sqlite-src-3300100_rhel_74_3.10.0-693.el7.x86_64.006/testfixture...done.

[New LWP 8032]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `./testfixture 
/opt/bw/build/sqlite-src-3300100_rhel_74_3.10.0-693.el7.x86_64.00'.

Program terminated with signal 11, Segmentation fault.
#0  0x0043c71b in tvfsFileControl (pFile=0x1509af0, op=20, 
pArg=0x7ffde61bc9f8)
    at 
/opt/bw/build/sqlite-src-3300100_rhel_74_3.10.0-693.el7.x86_64.006/src/test_vfs.c:549

549   if( p->pScript && (p->mask&TESTVFS_FCNTL_MASK) ){
Missing separate debuginfos, use: debuginfo-install 
glibc-2.17-196.el7.x86_64 libgcc-4.8.5-16.el7.x86_64

(gdb) where
#0  0x0043c71b in tvfsFileControl (pFile=0x1509af0, op=20, 
pArg=0x7ffde61bc9f8)
    at 
/opt/bw/build/sqlite-src-3300100_rhel_74_3.10.0-693.el7.x86_64.006/src/test_vfs.c:549
#1  0x0046c4e9 in sqlite3OsFileControl (id=0x1509af0, op=20, 
pArg=0x7ffde61bc9f8) at sqlite3.c:22475
#2  0x0048a47e in databaseIsUnmoved (pPager=0x1509970) at 
sqlite3.c:54928
#3  0x0048a59c in sqlite3PagerClose (pPager=0x1509970, 
db=0x14fbb70) at sqlite3.c:54969
#4  0x0049bf12 in sqlite3BtreeClose (p=0x159ea10) at 
sqlite3.c:66134
#5  0x0054ebac in sqlite3LeaveMutexAndCloseZombie 
(db=0x14fbb70) at sqlite3.c:157429
#6  0x0054eacc in sqlite3Close (db=0x14fbb70, forceZombie=0) 
at sqlite3.c:157372
#7  0x0054eaf0 in sqlite3_close (db=0x14fbb70) at 
sqlite3.c:157385

#8  0x004611cf in DbDeleteCmd (db=0x15bbab8)
    at 
/opt/bw/build/sqlite-src-3300100_rhel_74_3.10.0-693.el7.x86_64.006/src/tclsqlite.c:528
#9  0x7f9d19e18330 in Tcl_DeleteCommandFromToken 
(interp=0x1921c38, cmd=0x1350f48)

    at /opt/bw/build/nist/tcl8.7a1/generic/tclBasic.c:3184
#10 0x7f9d19e18179 in Tcl_DeleteCommand (interp=0x1921c38, 
cmdName=0x13acd98 "db")

    at /opt/bw/build/nist/tcl8.7a1/generic/tclBasic.c:3045
#11 0x00464d45 in DbObjCmd (cd=0x15bbab8, interp=0x1921c38, 
objc=2, objv=0x14ded88)
    at 
/opt/bw/build/sqlite-src-3300100_rhel_74_3.10.0-693.el7.x86_64.006/src/tclsqlite.c:2219
#12 0x7f9d19e19e2a in Dispatch (data=0x107d5e0, interp=0x1921c38, 
result=0)

    at /opt/bw/build/nist/tcl8.7a1/generic/tclBasic.c:4418
#13 0x7f9d19e19eb7 in TclNRRunCallbacks (interp=0x1921c38, 
result=0, rootPtr=0x0)

    at /opt/bw/build/nist/tcl8.7a1/generic/tclBasic.c:4451
#14 0x7f9d19e1c815 in TclEvalObjEx (interp=0x1921c38, 
objPtr=0x6161616161616161, flags=0, invoker=0xe87178, word=0)

    at /opt/bw/build/nist/tcl8.7a1/generic/tclBasic.c:6018
#15 0x7f9d19f24152 in SlaveEval (interp=0xe832f8, 
slaveInterp=0x1921c38, objc=1, objv=0xe871f0)

    at /opt/bw/build/nist/tcl8.7a1/generic/tclInterp.c:2826
#16 0x7f9d19f20bac in NRInterpCmd (clientData=0x0, 
interp=0xe832f8, objc=4, objv=0xe871d8)

    at /opt/bw/build/nist/tcl8.7a1/generic/tclInterp.c:885
#17 0x7f9d19e19e2a in Dispatch (data=0x17f8bb0, interp=0xe832f8, 
result=0)

    at /opt/bw/build/nist/tcl8.7a1/generic/tclBasic.c:4418
#18 0x7f9d19e19eb7 in TclNRRunCallbacks (interp=0xe832f8, 
result=0, 

Re: [sqlite] WAL2 mode

2019-11-12 Thread Dan Kennedy


On 12/11/62 19:00, Simon Slavin wrote:

On 12 Nov 2019, at 10:06am, Dan Kennedy  wrote:


This branch might interest you:

   https://www.sqlite.org/src/timeline?r=begin-concurrent-pnu-wal2

" In wal2 mode, the system uses two wal files instead of one. The files are named "-wal" 
and "-wal2""

Could this be changed to -wal1 and -wal2 ?  Or any other suffixes that aren't 
used by a different mode ?

This is to make crash/corruption diagnostics simpler.  At the moment, if the 
database file is so corrupt it can't be opened by SQLite, just by looking at 
the files in the directory I can tell a lot about what journal mode the 
database was in and what was being done, and what the user did to try to 
restore a backup.

But users do all sorts of weird things to try to recover from crashes, 
including restoring a database, sometimes in a different journal mode, but 
leaving journal files in place.  Seeing whether there's a -wal file and/or a 
-wal1 file, and comparing the changedates on the files, will give me better 
clues about what was done.  It means I can get further in figuring out what was 
going on before hexdumping the files concerned.


Fair point.

I think it reuses *-wal in order to avoid an extra call to access() when 
opening a read-transaction in rollback mode. There might be other 
reasons too. It's only a branch for now - but this is something to consider 
if it ever gets rolled into the main version.
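
On that branch, wal2 mode is presumably selected the same way as the other journal modes, e.g.:

  PRAGMA journal_mode = wal2;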


Dan.



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Custom window functions vs builtin

2019-11-12 Thread Dan Kennedy


On 12/11/62 18:50, Merijn Verstraaten wrote:

I already asked this question without getting an answer, but as it was hidden 
somewhere deeply nested in another thread I think it may have simply gone under 
the radar. Apologies for the duplication if you already saw it!

How is the behaviour of (custom) window functions defined? Specifically, in the 
presence of an ORDER BY on the window.

The functionality of row_number/rank/dense_rank seems to require that xStep and 
xInverse are called on rows in the order specified by ORDER BY. And, indeed, 
the implementation of row_number() in the sqlite source seems to rely on being 
called in the same order as ORDER BY, but at the same time the documentation 
states:

"the built-in window functions, however, require special-case handling in the query 
planner and hence new window functions that exhibit the exceptional properties found in 
the built-in window functions cannot be added by the application."

So does this indeed mean that these built-in ones are handled specially and 
other window functions have to accept/deal with having their window arguments 
added/removed in an arbitrary order?


No, they don't need to handle that. The rows will always be 
added/removed in ORDER BY order for all window functions (and will be 
removed in the same order in which they were added if the ORDER BY order 
is ambiguous).


There is some special handling for the various built-in window functions 
though. For example, most of them ignore the window type. You can't 
implement percent_rank(), cume_dist() or ntile() without knowing the 
number of rows in the partition before returning any values, so that 
requires special handling as well. There are probably other things too...


Dan.


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] proposal for write-lock on "commit" rather than "begin transaction"

2019-11-12 Thread Dan Kennedy


On 25/10/62 23:07, Brannon King wrote:

This is a request for a small change to the handling of multiple
connections. I think it would significantly enhance the usefulness there
via allowing multiple "views" of the data.

Consider that I have two simultaneous connections to one file, named Con1
and Con2. They could be in one process or one thread -- that's irrelevant.
Either one may write to the DB; we don't know yet. For starters, assume
that their journal mode is MEMORY.

Both connections begin with "begin transaction". Already I'm dead in the
water; one of those will fail presently with "database is locked". But it
doesn't need to be that way! Each connection can have its own journal file,
especially if it's in memory. Once one connection commits, the other
connection will no longer be allowed to commit. It will be forced to
rollback (or perhaps rebase if there are no conflicts).

Multiple WAL files could be supported in a similar fashion; they just need
some kind of unique naming scheme. For recovery, the user would be prompted
to select one or none. It doesn't seem that far from Sqlite's current
behavior. Thoughts?


This branch might interest you:

  https://www.sqlite.org/src/timeline?r=begin-concurrent-pnu-wal2

The "BEGIN CONCURRENT" idea is that two connections may concurrently 
have independent write transactions based on optimistic read/write page 
locking.
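
Roughly, the usage pattern on that branch looks like the sketch below (table name hypothetical; if the pages touched by two concurrent transactions conflict, the later COMMIT fails and that transaction has to be retried or rolled back):

  BEGIN CONCURRENT;
  INSERT INTO t1 VALUES(1, 'x');
  -- ... more reads and writes ...
  COMMIT;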


Dan.





~Brannon
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] sqlite-src-3300100 on RHEL 7.4 toss mad errors about 'asm'

2019-11-11 Thread Dan Kennedy


On 8/11/62 00:15, Dennis Clarke wrote:

On 2019-11-07 11:44, Shawn Wagner wrote:
... Just don't use strict c99 mode when compiling with gcc? Drop the -std
argument from your CFLAGS to use the default (gnu11 since gcc 5) or
explicitly use gnu99, which gives you that version of the C standard + gcc
extensions.

(Not that they have anything to do with the problem, but compiling with -O0
and -fno-builtin are strange unless you're planning on spending some
quality time in a debugger stepping through code, and -malign-double is
already the default on x86-64 so kind of pointless)



Debugger .. yes. That will happen and I build on a multitude of
platforms.

Okay, so the code fails on Solaris sparc with c99, whereas in the recent
past it all built fine:

libtool: compile:  /opt/developerstudio12.6/bin/c99 
-I/usr/local/include -D_TS_ERRNO -D_POSIX_PTHREAD_SEMANTICS 
-D_LARGEFILE64_SOURCE -Xc -m64 -xarch=sparc -g -errfmt=error 
-errshort=full -xstrconst -xildoff -xmemalign=8s -xnolibmil 
-xcode=pic32 -xregs=no%appl -xlibmieee -mc -ftrap=%none 
-xbuiltin=%none -xunroll=1 -xs -xdebugformat=dwarf -errtags=yes 
-errwarn=%none -erroff=%none -DSQLITE_OS_UNIX=1 -I. 
-I/usr/local/build/sqlite-src-3300100_Oracle_sparc64vii+.001/src 
-I/usr/local/build/sqlite-src-3300100_Oracle_sparc64vii+.001/ext/rtree 
-I/usr/local/build/sqlite-src-3300100_Oracle_sparc64vii+.001/ext/icu 
-I/usr/local/build/sqlite-src-3300100_Oracle_sparc64vii+.001/ext/fts3 
-I/usr/local/build/sqlite-src-3300100_Oracle_sparc64vii+.001/ext/async 
-I/usr/local/build/sqlite-src-3300100_Oracle_sparc64vii+.001/ext/session 
-I/usr/local/build/sqlite-src-3300100_Oracle_sparc64vii+.001/ext/userauth 
-D_HAVE_SQLITE_CONFIG_H -DBUILD_sqlite -DNDEBUG -I/usr/local/include 
-DSQLITE_THREADSAFE=1 -DSQLITE_HAVE_ZLIB=1 -DUSE_TCL_STUBS=1 -c 
/usr/local/build/sqlite-src-3300100_Oracle_sparc64vii+.001/src/tclsqlite.c 
 -KPIC -DPIC -o .libs/tclsqlite.o
"/usr/local/build/sqlite-src-3300100_Oracle_sparc64vii+.001/src/tclsqlite.c", 
line 2346: error: undefined symbol: SQLITE_DBCONFIG_ENABLE_VIEW
"/usr/local/build/sqlite-src-3300100_Oracle_sparc64vii+.001/src/tclsqlite.c", 
line 2346: error: non-constant initializer: op "NAME"
"/usr/local/build/sqlite-src-3300100_Oracle_sparc64vii+.001/src/tclsqlite.c", 
line 2351: error: undefined symbol: SQLITE_DBCONFIG_TRIGGER_EQP
"/usr/local/build/sqlite-src-3300100_Oracle_sparc64vii+.001/src/tclsqlite.c", 
line 2351: error: non-constant initializer: op "NAME"
"/usr/local/build/sqlite-src-3300100_Oracle_sparc64vii+.001/src/tclsqlite.c", 
line 2352: error: undefined symbol: SQLITE_DBCONFIG_RESET_DATABASE
"/usr/local/build/sqlite-src-3300100_Oracle_sparc64vii+.001/src/tclsqlite.c", 
line 2352: error: non-constant initializer: op "NAME"
"/usr/local/build/sqlite-src-3300100_Oracle_sparc64vii+.001/src/tclsqlite.c", 
line 2353: error: undefined symbol: SQLITE_DBCONFIG_DEFENSIVE
"/usr/local/build/sqlite-src-3300100_Oracle_sparc64vii+.001/src/tclsqlite.c", 
line 2353: error: non-constant initializer: op "NAME"
"/usr/local/build/sqlite-src-3300100_Oracle_sparc64vii+.001/src/tclsqlite.c", 
line 2354: error: undefined symbol: SQLITE_DBCONFIG_WRITABLE_SCHEMA
"/usr/local/build/sqlite-src-3300100_Oracle_sparc64vii+.001/src/tclsqlite.c", 
line 2354: error: non-constant initializer: op "NAME"
"/usr/local/build/sqlite-src-3300100_Oracle_sparc64vii+.001/src/tclsqlite.c", 
line 2355: error: undefined symbol: SQLITE_DBCONFIG_LEGACY_ALTER_TABLE
"/usr/local/build/sqlite-src-3300100_Oracle_sparc64vii+.001/src/tclsqlite.c", 
line 2355: error: non-constant initializer: op "NAME"
"/usr/local/build/sqlite-src-3300100_Oracle_sparc64vii+.001/src/tclsqlite.c", 
line 2356: error: undefined symbol: SQLITE_DBCONFIG_DQS_DML
"/usr/local/build/sqlite-src-3300100_Oracle_sparc64vii+.001/src/tclsqlite.c", 
line 2356: error: non-constant initializer: op "NAME"
"/usr/local/build/sqlite-src-3300100_Oracle_sparc64vii+.001/src/tclsqlite.c", 
line 2357: error: undefined symbol: SQLITE_DBCONFIG_DQS_DDL
"/usr/local/build/sqlite-src-3300100_Oracle_sparc64vii+.001/src/tclsqlite.c", 
line 2357: error: non-constant initializer: op "NAME"
"/usr/local/build/sqlite-src-3300100_Oracle_sparc64vii+.001/src/tclsqlite.c", 
line 2855: error: undefined symbol: SQLITE_DIRECTONLY
c99: acomp failed for 
/usr/local/build/sqlite-src-3300100_Oracle_sparc64vii+.001/src/tclsqlite.c

gmake: *** [Makefile:1029: tclsqlite.lo] Error 1



On Red Hat Enterprise Linux 7.4 the code actually does compile and then
core dumps with a segfault from within that same source file:

Time: walshared.test 24 ms
# WARNING: This next test takes around 12 seconds
gmake: *** [Makefile:1256: tcltest] Segmentation fault (core dumped)



This is almost certainly an issue with the test scripts, not the 
library. Can you post the last 100 lines or so of the file 
"test-out.txt" that was created in the cwd by the [make quicktest] or 
whatever you ran to get this?


Thanks,

Dan.



___

Re: [sqlite] Prepared statements in FTS MATCH queries

2019-11-01 Thread Dan Kennedy


On 1/11/62 19:15, mailing lists wrote:

Thanks.

Is there a difference between these statements with respect to results, 
performance etc., or do both statements describe the same thing?



They're the same in all important respects.

Dan.





a) SELECT * FROM NamesFTS WHERE FTS MATCH 'LastName:alpha FirstNames:beta';
b) SELECT * FROM NamesFTS WHERE LastName MATCH 'alpha' AND FirstNames MATCH 
'beta';

Regards,
Hartwig


On 2019-11-01 at 07:55, Dan Kennedy  wrote:


On 1/11/62 03:03, mailing lists wrote:

Hi Dan,

I did not know that. What was the reason that it did not work before 3.30?


The implementation of the xBestIndex method of fts3/4, and fts5 prior to 
3.30.0, only allowed a single MATCH constraint to be processed and passed 
through to xFilter. Fts5 now uses a more complicated method to pass constraints 
between those two methods, which allows the details of multiple MATCH 
constraints to be passed through.

Dan.





Regards,
Hartwig


On 2019-10-31 at 19:16, Dan Kennedy  wrote:


On 1/11/62 00:32, mailing lists wrote:

For normal tables I can use something like:

SELECT * FROM Names WHERE FirstNames=? AND/OR LastName=?;

For FTS tables I can only use

SELECT * FROM FTSNames WHERE FirstNames MATCH ? OR LastName MATCH ?; an AND 
condition is not supported (I still do not know why).

Is there any possibility to use prepared statements for FTS tables with an AND 
condition? I would like to prevent code injection.

As of 3.30.0, should work with FTS5.

Dan.



Regards,
Hartwig

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Prepared statements in FTS MATCH queries

2019-11-01 Thread Dan Kennedy


On 1/11/62 03:03, mailing lists wrote:

Hi Dan,

I did not know that. What was the reason that it did not work before 3.30?



The implementation of the xBestIndex method of fts3/4, and fts5 prior to 
3.30.0, only allowed a single MATCH constraint to be processed and 
passed through to xFilter. Fts5 now uses a more complicated method to 
pass constraints between those two methods, which allows the details of 
multiple MATCH constraints to be passed through.
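
For application code this means each MATCH term can now be bound as an
ordinary parameter. A minimal, untested C sketch (it assumes SQLite
3.30.0+ and the "FTSNames" table from the example quoted below):

```
#include <sqlite3.h>

/* Count rows matching both terms. zFirst and zLast are user input and
** are bound as parameters, so no user text is pasted into the SQL. */
int count_matches(sqlite3 *db, const char *zFirst, const char *zLast, int *pnRow){
  const char *zSql =
      "SELECT count(*) FROM FTSNames "
      "WHERE FirstNames MATCH ?1 AND LastName MATCH ?2";
  sqlite3_stmt *pStmt = 0;
  int rc = sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0);
  if( rc==SQLITE_OK ){
    sqlite3_bind_text(pStmt, 1, zFirst, -1, SQLITE_TRANSIENT);
    sqlite3_bind_text(pStmt, 2, zLast, -1, SQLITE_TRANSIENT);
    *pnRow = 0;
    if( sqlite3_step(pStmt)==SQLITE_ROW ){
      *pnRow = sqlite3_column_int(pStmt, 0);
    }
    rc = sqlite3_finalize(pStmt);
  }
  return rc;
}
```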


Dan.






Regards,
Hartwig


On 2019-10-31 at 19:16, Dan Kennedy wrote:


On 1/11/62 00:32, mailing lists wrote:

For normal tables I can use something like:

SELECT * FROM Names WHERE FirstNames=? AND/OR LastName=?;

For FTS tables I can only use

SELECT * FROM FTSNames WHERE FirstNames MATCH ? OR LastName MATCH ?; an AND 
condition is not supported (I still do not know why).

Is there any possibility to use prepared statements for FTS tables with an AND 
condition? I would like to prevent code injection.

As of 3.30.0, should work with FTS5.

Dan.



Regards,
Hartwig

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Prepared statements in FTS MATCH queries

2019-10-31 Thread Dan Kennedy


On 1/11/62 00:32, mailing lists wrote:

For normal tables I can use something like:

SELECT * FROM Names WHERE FirstNames=? AND/OR LastName=?;

For FTS tables I can only use

SELECT * FROM FTSNames WHERE FirstNames MATCH ? OR LastName MATCH ?; an AND 
condition is not supported (I still do not know why).

Is there any possibility to use prepared statements for FTS tables with an AND 
condition? I would like to prevent code injection.


As of 3.30.0, should work with FTS5.

Dan.




Regards,
Hartwig

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Roadmap?

2019-10-20 Thread Dan Kennedy


On 20/10/62 14:53, Thomas Kurz wrote:

I'd kindly ask whether there is some sort of roadmap for SQLite development?

Someone recently pointed out how much he loves the "lite" and well-thought-out features. I cannot see 
that: I observe that many "playground" gadgets keep being implemented (like virtual columns, 
virtual tables, FTS3/4/5, ...), where one might wonder about their relationship to "Liteness",
whereas other features, essential basics of the SQL standards, are still 
missing and there is no indication they are to be added.


Feel free to make suggestions. Which missing feature or features causes 
you the most bother?


Dan.


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Count Rows Inserted into FTS5 Index w/ External Content Table

2019-10-20 Thread Dan Kennedy


On 19/10/62 06:31, Ben Asher wrote:

Hello! I'm trying to write some code to keep an external content table in
sync with the index. To do this, I need to be able to get some state about
the index: either how many rows have been inserted so far or the max rowid
that has been inserted into the index. However, if I try to run queries
(where "fts" is the name of the FTS5 virtual table) like "SELECT COUNT(*)
FROM fts" or "SELECT MAX(rowid) FROM fts", the result always ends up being
an answer as if I had run those queries on the external content table. Is
there some other way I can query the state in this case?


You can, but it's not quite as easy as it could be. Assuming you're not 
using the "columnsize=0" option, the xRowCount() API, which is only 
available from within an auxiliary function, returns the value you want:


https://sqlite.org/fts5.html#xRowCount

(If you are using "columnsize=0", the xRowCount() API will just fall back 
to the "SELECT count(*)..." method, which will report the number of rows 
in the external content table, not the number of rows that have been added 
to the index.)


So if you're prepared to write an auxiliary fts5 function in C that 
invokes the xRowCount() API and returns its value, you could do 
something like:


  SELECT row_count(text) FROM text LIMIT 1;

Assuming your fts5 table is named "text". Or, if you include the 
"matchinfo" demo code in your app from here:


https://sqlite.org/src/artifact/08c11ec968148d4c

in your build:

  SELECT matchinfo(text, 'n') FROM text LIMIT 1;

Both of these will only work if there is at least one row in the 
external content table (as otherwise the query will return zero rows). 
If your external content table is sometimes empty, you'll have to figure 
out a full-text query that always returns at least one row to use.
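
For reference, such an auxiliary function is only a few lines of C. A rough,
untested sketch - it fetches the fts5_api pointer with sqlite3_bind_pointer(),
so it assumes SQLite 3.20.0 or newer and the "fts5.h" header from the fts5 build:

```
#include <sqlite3.h>
#include "fts5.h"

/* fts5 auxiliary function: row_count(tbl) - reports xRowCount() */
static void rowCountFunc(
  const Fts5ExtensionApi *pApi,    /* fts5 extension API */
  Fts5Context *pFts,               /* First argument for pApi calls */
  sqlite3_context *pCtx,           /* Used to return the result */
  int nVal, sqlite3_value **apVal  /* Trailing arguments (unused) */
){
  sqlite3_int64 nRow = 0;
  int rc;
  (void)nVal; (void)apVal;
  rc = pApi->xRowCount(pFts, &nRow);
  if( rc==SQLITE_OK ){
    sqlite3_result_int64(pCtx, nRow);
  }else{
    sqlite3_result_error_code(pCtx, rc);
  }
}

/* Register row_count() with database handle db */
int register_row_count(sqlite3 *db){
  fts5_api *pApi = 0;
  sqlite3_stmt *pStmt = 0;
  int rc = sqlite3_prepare_v2(db, "SELECT fts5(?1)", -1, &pStmt, 0);
  if( rc==SQLITE_OK ){
    sqlite3_bind_pointer(pStmt, 1, (void*)&pApi, "fts5_api_ptr", 0);
    sqlite3_step(pStmt);
    rc = sqlite3_finalize(pStmt);
  }
  if( rc==SQLITE_OK && pApi ){
    rc = pApi->xCreateFunction(pApi, "row_count", 0, rowCountFunc, 0);
  }
  return rc;
}
```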


Another way to go, if you're a risk-taking sort, is to read the same 
value directly from the fts5 shadow tables. The query:


  SELECT block FROM test_data WHERE id=1;

returns a blob that consists of a series of SQLite varints. The first of 
those varints is the number of rows in the index. Again, assuming your 
fts5 table is named "test".


This isn't actually a public interface, so you might get burned if it 
changes at some point. I think that's pretty unlikely, but no promises!
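
Decoding that first varint takes only a few lines too. Another untested
sketch, assuming the fts5 table really is named "test" so that the shadow
table is "test_data":

```
#include <sqlite3.h>

/* Decode the first (big-endian, 7-bits-per-byte) SQLite varint in a[] */
static sqlite3_int64 firstVarint(const unsigned char *a, int n){
  sqlite3_int64 v = 0;
  int i;
  for(i=0; i<n && i<9; i++){
    v = (v<<7) + (a[i] & 0x7f);
    if( (a[i] & 0x80)==0 ) break;
  }
  return v;  /* good enough for realistic row counts */
}

/* Read the number of rows in the fts5 index of table "test" */
int fts5_index_row_count(sqlite3 *db, sqlite3_int64 *pnRow){
  sqlite3_stmt *pStmt = 0;
  int rc = sqlite3_prepare_v2(db,
      "SELECT block FROM test_data WHERE id=1", -1, &pStmt, 0);
  *pnRow = 0;
  if( rc==SQLITE_OK ){
    if( sqlite3_step(pStmt)==SQLITE_ROW ){
      const unsigned char *a =
          (const unsigned char*)sqlite3_column_blob(pStmt, 0);
      int n = sqlite3_column_bytes(pStmt, 0);
      if( a && n>0 ) *pnRow = firstVarint(a, n);
    }
    rc = sqlite3_finalize(pStmt);
  }
  return rc;
}
```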


On consideration, I'd probably go for the direct query on the test_data 
table. Not least because if you use the xRowCount() API from a 
non-full-text query you will need this fix, which won't be released 
until 3.31.0:


  https://sqlite.org/src/info/b528bdcd45db1b78

But have a test in place to ensure it doesn't break when you upgrade 
SQLite. And if you can, build SQLite directly into the app (almost 
always a good idea), don't use the system version.


Good luck,

Dan.







Thanks!

Ben
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] An inconsistency between sqlite implementation and documentation of zlib API deflate

2019-10-11 Thread Dan Kennedy


On 11/10/62 06:18, 吕涛 wrote:

Hi, there is an inconsistency between sqlite implementation and documentation 
of zlib API deflate in the file /ext/misc/zipfile.c:1003.




1000 deflateInit2(&str, 9, Z_DEFLATED, -15, 8, Z_DEFAULT_STRATEGY);
1001 res = deflate(&str, Z_FINISH);
1002
1003 if( res==Z_STREAM_END ){
1004   *ppOut = aOut;
1005   *pnOut = (int)str.total_out;
1006 }else{
1007   sqlite3_free(aOut);
1008   *pzErr = sqlite3_mprintf("zipfile: deflate() error");
1009   rc = SQLITE_ERROR;
1010 }
1011 deflateEnd(&str);




According to the description of zlib API deflate, when calling deflate with 
parameter Z_FINISH, it must be called again with much more output space if 
returning Z_OK or Z_BUF_ERROR. However, in the implementation of sqlite, it 
didn't.


Thanks for looking into this. I think it's Ok because the output buffer 
is always at least compressBound() bytes in size.


Technically the maximum size of an output buffer populated by deflate() 
is deflateBound(). But looking at the implementations, compressBound() 
is always larger than the equivalent deflateBound() value would be for 
the way we're configuring deflate(). Still, to be correct, it's now 
changed to use deflateBound() to size the buffer here:


  https://sqlite.org/src/info/f5ee30426e8876e70304
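
The pattern is roughly the following - a simplified sketch only, not the
actual zipfile.c code, with most error checking trimmed:

```
#include <zlib.h>
#include <stdlib.h>
#include <string.h>

/* Raw-deflate aIn/nIn into a malloc'd buffer sized with deflateBound(),
** so a single deflate(Z_FINISH) call is guaranteed enough output space. */
int compress_raw(const unsigned char *aIn, int nIn,
                 unsigned char **paOut, int *pnOut){
  z_stream str;
  unsigned char *aOut;
  uLong nBound;
  int res;

  memset(&str, 0, sizeof(str));
  deflateInit2(&str, 9, Z_DEFLATED, -15, 8, Z_DEFAULT_STRATEGY);
  nBound = deflateBound(&str, (uLong)nIn);   /* worst-case output size */
  aOut = (unsigned char*)malloc(nBound);

  str.next_in = (Bytef*)aIn;
  str.avail_in = (uInt)nIn;
  str.next_out = aOut;
  str.avail_out = (uInt)nBound;

  res = deflate(&str, Z_FINISH);             /* must return Z_STREAM_END */
  deflateEnd(&str);

  if( res!=Z_STREAM_END ){
    free(aOut);
    return -1;
  }
  *paOut = aOut;
  *pnOut = (int)str.total_out;
  return 0;
}
```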

Dan.








The description of zlib API deflate is shown as bellow:

If the parameter flush is set to Z_FINISH, pending input is processed, pending 
output is flushed and deflate returns with Z_STREAM_END if there was enough 
output space. If deflate returns with Z_OK or Z_BUF_ERROR, this function must 
be called again with Z_FINISH and more output space (updated avail_out) but no 
more input data, until it returns with Z_STREAM_END or an error. After deflate 
has returned Z_STREAM_END, the only possible operations on the stream are 
deflateReset or deflateEnd.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [EXTERNAL] Group-by and order-by-desc does not work as expected

2019-09-21 Thread Dan Kennedy


On 22/9/62 02:25, Fredrik Larsen wrote:

Interesting, very similar change but not fully identical. In my patch, I
created a sqlite3ExprListCompareIgnoreButUpdateSort, and used this function
from line 6239. This function ignores the sort part when comparing
expressions, but will update the GroupBy sortOrder field if expressions are
found equal. I see dan modifies the sortFlag.


That sounds equivalent to me. The sortOrder/sortFlag thing is probably 
just because you patched the last release (3.29.0) or earlier. The field 
has changed names since then.


Dan.




Would be interesting to know
what effect this difference has. My change works in my test, but not sure
if it really works.

Anyway, super-nice that this issue is fixed officially. Now I don't have to
wonder if my fix is really correct, or will suddenly corrupt something :)

Fredrik

On Sat, Sep 21, 2019 at 8:49 PM Keith Medcalf  wrote:


See Dan's checkin on trunk for this issue.

https://www.sqlite.org/src/info/20f7951bb238ddc0


-Original Message-
From: sqlite-users  On
Behalf Of Fredrik Larsen
Sent: Saturday, 21 September, 2019 08:12
To: SQLite mailing list 
Subject: Re: [sqlite] [EXTERNAL] Group-by and order-by-desc does not work
as expected

Your last sentence got me thinking. So I downloaded the source, modified
the ordering of the GROUP-BY expression to match ORDER-BY and it works!
This will of course only work if the GROUP-BY and ORDER-BY match except
for the direction. This fix only improves performance for relevant cases
and keeps other cases unaffected. Not sure if I introduced some subtle
bugs with this modification, but my test cases run fine.

Fredrik

On Fri, Sep 20, 2019 at 6:57 PM Keith Medcalf 
wrote:


We can observe GROUP BY works ASCending only as of now. Why it can't
work DESCending to avoid ordering, that's a different question.

From https://www.sqlite.org/lang_select.html we can observe that
GROUP BY takes an expr on the RHS, while ORDER BY takes an expr
followed by optional COLLATE and ASC/DESC terms.

The GROUP BY clause does not imply ordering.  The fact that the output
is ordered is an implementation detail -- the grouping could be
implemented by a hash table, in which case the output would be ordered
by hash value, for instance.  All that the expression in a GROUP BY does
is determine the groupings, and therefore the expression is limited to a
comparison compatible expression.  For example, you can GROUP BY x
COLLATE NOCASE which implies that the groups are formed using case
insensitive comparisons of x.  The ORDER BY clause determines the output
ordering.

You will note that if you do the following:

create table x(x,y);
create index ix on x(x desc, y);
select x, someaggregate(y) from x group by x order by x desc;

then ix will be used as a covering index (which is good) however the
group by x is treated as an ordering expression, not as simply a grouping
expression.

In fact the code that implements the group by does indeed (perhaps
erroneously) treat the group by expression as implying order, since it
will traverse the covering index in reverse order so that the output from
GROUP BY is in ascending order, and add an extra sort to do the ORDER BY.

That means the GROUP BY code generator is already capable of traversing
the selected index in reverse order when necessary.  It appears that
the optimizer however does not recognize that the "desc" attribute from
the order by can be "pushed down" into the GROUP BY (which really is
ordering as an implementation detail) thus eliminating the ORDER BY
processing entirely.

Note that you cannot specify that the GROUP BY is ordering -- it will
not accept the ASC or DESC keywords (which is correct), and this should
not be changed, however, treating it as being ordering when it is not
might perhaps be a defect ...



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Undo sqlite3_snapshot_open?

2019-09-19 Thread Dan Kennedy


On 19/9/62 18:13, Gwendal Roué wrote:

Hello,

I am looking at the snapshot experimental APIs, and it looks like once a
connection has been sent to an "historical snapshot" with
sqlite3_snapshot_open (https://www.sqlite.org/c3ref/snapshot_open.html),
the connection can never be restored back to regular operations.

Is it correct?


I don't think so.

If you end the transaction opened with sqlite3_snapshot_open() (by 
executing a "COMMIT" or "ROLLBACK" and calling sqlite3_reset() or 
sqlite3_finalize() on all active SELECT statements) then open a new 
transaction, the new transaction accesses the latest database snapshot - 
just as if you had never used sqlite3_snapshot_open() with the 
connection at all.


Dan.





Thanks is advance,
Gwendal Roué
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Bug: WINDOW clause within a CTE

2019-09-15 Thread Dan Kennedy


On 15/9/62 11:57, Jake Thaw wrote:

The following examples demonstrate a possible bug when using a WINDOW
clause within a CTE:

SQLite version 3.30.0 2019-09-14 16:44:51
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> CREATE TABLE x AS SELECT 1 a UNION SELECT 2;
sqlite>
sqlite> -- Unexpected result - expect 1,1
sqlite> WITH y AS (
...> SELECT Row_Number() OVER (win) FROM x WINDOW win AS (PARTITION BY a))
...> SELECT * FROM y;
1
2
sqlite> -- Unexpected result - expected "Error: no such column: fake_column"
sqlite> WITH y AS (
...> SELECT Row_Number() OVER (win) FROM x WINDOW win AS (PARTITION
BY fake_column))
...> SELECT * FROM y;
1
2



Thanks for tracking down and reporting these. Now fixed here:

  https://sqlite.org/src/info/ca564d4b5b19fe56


sqlite> -- Possible unexpected result - expected "Error: no such
column: fake_column"
sqlite> SELECT 1 WINDOW win AS (PARTITION BY fake_column);
1


I think we'll leave this one as is. SQLite only resolves the references 
in the WINDOW clause if it is used, so this doesn't produce an error. 
There are few other scenarios SQLite does this too. The statement 
"SELECT (0 AND fake_column);", for example.


Dan.



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Window functions

2019-09-04 Thread Dan Kennedy


On 5/9/62 00:13, David Raymond wrote:

Kind of annoying that when the author shows a screenshot of the sample data he's using 
for his queries, he doesn't include 2 of the fields that are in the queries. Makes it 
harder to "play along at home".

For their ntile example (on page 2) I don't think I've seen a window function used with a 
"group by". Does the ntile un-group the groups? Something just looks wrong 
there between the query and the results shown below it. But like you I don't know enough 
to say if that's right or if it's on crack.


You can run window functions on aggregate queries. The windowing step 
occurs logically after the aggregation.


It still looks right to me. Each output row contains a unique 
combination of territoryid/customerid, so no need for any "un-grouping". 
Of course, the input data doesn't feature any rows with duplicate 
territoryid/customerid values, so running the query without the GROUP BY 
and replacing "sum(subtotal)" with "subtotal" would produce the same 
results.


Dan






-Original Message-
From: sqlite-users  On Behalf Of 
Simon Slavin
Sent: Wednesday, September 04, 2019 12:15 PM
To: SQLite mailing list 
Subject: [sqlite] Window functions

I ran into this two-part article, probably on Hacker News:



I tried comparing it with



but I don't know enough to be able to tell whether the language used in the 
article is compatible with the way window functions are implemented in SQLite.  
Could someone who knows more than I do take a look and post a summary ?
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Window functions

2019-09-04 Thread Dan Kennedy


On 4/9/62 23:14, Simon Slavin wrote:

I ran into this two-part article, probably on Hacker News:



I tried comparing it with



but I don't know enough to be able to tell whether the language used in the 
article is compatible with the way window functions are implemented in SQLite.  
Could someone who knows more than I do take a look and post a summary ?


I only skimmed it, but I think everything there is applicable to SQLite.

Although I think she's using "window frame" differently to the way we 
do. Not that it matters too much, as the term only occurs once in each 
of the two blog entries anyway.


Dan.


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Query planner regression with FTS4: slower path is chosen

2019-09-04 Thread Dan Kennedy


On 2/9/62 16:57, Paul wrote:

I has been a while without response, so I just bumping this message.


19 July 2019, 14:21:27, by "Paul" :


I have a test case when the regression can be observed in queries that
use JOINs with FTS4 tables, somewhere in between 3.22.0 and 3.23.0.
For some reason the planner decides to search non-FTS table first then
scan the whole FTS table. Version 3.22.0 is the last unaffected, while
issue is still present in HEAD.

Probably it has something to do with a fact that, according to EXPLAIN,
new version of planner ignores LEFT join and considers it just a JOIN.



Suspect that that is the change. The LEFT JOIN is equivalent to a 
regular join in this case due to the "bar = 1" term in the WHERE clause.


Running ANALYZE after the index is created in the example script causes 
SQLite to pick a better plan.


Or, changing the LEFT JOIN to CROSS JOIN works to force SQLite to pick 
the plan you want.


FTS5 does a little better with the query, but only because it runs 
faster - it still picks the slow plan. There might be room to improve 
this in FTS5, but probably not for FTS3/4, which are now focused very 
much on backwards compatibility.


Dan.





At least it feels that way, anyway.

Test case:


CREATE VIRTUAL TABLE search USING FTS4(text);

WITH RECURSIVE
  cnt(x) AS (
 SELECT 1
 UNION ALL
 SELECT x+1 FROM cnt
  LIMIT 2000
  )
INSERT INTO search(docid, text) SELECT x, 'test' || x FROM cnt;

CREATE TABLE foo(s_docid integer primary key, bar integer);

WITH RECURSIVE
  cnt(x) AS (
 SELECT 1
 UNION ALL
 SELECT x+1 FROM cnt
  LIMIT 2000
  )
INSERT INTO foo(s_docid, bar) SELECT x, 1 FROM cnt;

.timer on

-- Fast
SELECT COUNT() FROM search LEFT JOIN foo
  ON s_docid = docid
  WHERE bar = 1 AND search MATCH 'test*';

-- Fast
SELECT COUNT() FROM foo
  WHERE bar = 1
  AND s_docid IN (
SELECT docid FROM search WHERE search MATCH 'test*'
  );

-- Create index, as some real-life queries use searches by `bar`
CREATE INDEX foo_bar_idx ON foo (bar);

-- Slow
SELECT COUNT() FROM search LEFT JOIN foo
  ON s_docid = docid
  WHERE bar = 1 AND search MATCH 'test*';

-- As fast as before (current workaround)
SELECT COUNT() FROM foo
  WHERE bar = 1
  AND s_docid IN (
SELECT docid FROM search WHERE search MATCH 'test*'
  );


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Documentation misunderstanding or bug (FTS3 and "^" character)?

2019-08-30 Thread Dan Kennedy


On 30/8/62 17:39, mailing lists wrote:

Hi,

but there is no token '^beta'! Even if in FTS3 '^' is not regarded as a special 
character. Is the '^' removed by the tokenizer?


Right. It's removed by the tokenizer.

FTS3/4 is focused on backwards compatibility. New work should really use 
FTS5 - it's faster, has more features, is better tested and the query 
syntax is properly defined.


https://sqlite.org/fts5.html

Dan.




Because it is regarded as a diacritical character?

PS: I have to admit that I have overlooked the comment that '^' works only for 
FTS4 tables.

Regards,
Hardy


On 2019-08-30 at 12:24, Dan Kennedy wrote:



The fts3/4 documentation says:

"If the FTS table is an FTS4 table (not FTS3), a token may also be prefixed with a 
"^" character. In this case, in order to match the token must appear as the very first 
token in any column of the matching row."

So change "FTS3" to "FTS4" and it will likely work.

Dan.


On 30/8/62 16:31, mailing lists wrote:

Hi,

I could not find an example showing a result in the documentation, therefore I 
created one by myself:

CREATE VIRTUAL TABLE myData USING FTS3(content);
INSERT INTO myData VALUES('alpha beta');

1)
SELECT * FROM myData WHERE myData MATCH 'beta';

Result:

content
alpha beta

This is what I expected.

2)
SELECT * FROM myData WHERE myData MATCH '^beta';

Result:

content
alpha beta

This is what I did not expect. According to the documentation I expected to get 
no rows because the column begins with alpha and not with beta.

What's the issue here? I tested these examples using version 3.24.0.

Regards,
Hardy

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Documentation misunderstanding or bug (FTS3 and "^" character)?

2019-08-30 Thread Dan Kennedy



The fts3/4 documentation says:

"If the FTS table is an FTS4 table (not FTS3), a token may also be 
prefixed with a "^" character. In this case, in order to match the token 
must appear as the very first token in any column of the matching row."


So change "FTS3" to "FTS4" and it will likely work.

Dan.


On 30/8/62 16:31, mailing lists wrote:

Hi,

I could not find an example showing a result in the documentation, therefore I 
created one by myself:

CREATE VIRTUAL TABLE myData USING FTS3(content);
INSERT INTO myData VALUES('alpha beta');

1)
SELECT * FROM myData WHERE myData MATCH 'beta';

Result:

content
alpha beta

This is what I expected.

2)
SELECT * FROM myData WHERE myData MATCH '^beta';

Result:

content
alpha beta

This is what I did not expect. According to the documentation I expected to get 
no rows because the column begins with alpha and not with beta.

What's the issue here? I tested these examples using version 3.24.0.

Regards,
Hardy

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Inverted changesets and UNIQUE constraints

2019-08-26 Thread Dan Kennedy


On 26/8/62 15:12, Daniel Kraft wrote:

Hi!

I'm using the SQLite session extension to create changesets, invert them
and apply them to undo previous changes in the database.  (Essentially
what I need to do is persistent savepoints.)

This works well so far, but I recently wondered about the interaction
with UNIQUE constraints.  In particular, let's say that I record a
changeset of the following modification:  I remove some rows from a
table, and *then* I insert new rows that have the same values in UNIQUE
columns as the previously removed ones.  That obviously works fine
because I delete first and *then* insert.

However, my understanding of how the session extension works (according
to its docs) is that when I invert and apply the changeset, it will
*first* insert the deleted rows, and *then* delete the inserted ones.
(Because it inverts each operation but not the order of them.)

I created a simplified example program that performs exactly this on an
in-memory database.  The code (plus the shell script to build it with
the flags I used) is below and can also be found on Github:

   https://gist.github.com/domob1812/a3b78739772a1ff0c001be6cbc600f17

(Perhaps that is easier to view than in the email.)

When I run that on my system (with SQLite 3.16.2 from Debian Stretch), I
get errors printed from ErrorLogger like this:

E0826 10:08:16.840441 21876 test.cpp:16] SQLite error (code 2067): abort
at 16 in [INSERT INTO main."mytable" VALUES(?, ?)]: UNIQUE constraint
failed: mytable.value
E0826 10:08:16.840520 21876 test.cpp:16] SQLite error (code 2067): abort
at 16 in [INSERT INTO main."mytable" VALUES(?, ?)]: UNIQUE constraint
failed: mytable.value

This matches what I expect.  However, it seems that applying the
changeset still works fine, and I get the correct "old" state restored.

Is this just "by chance", or can I safely ignore these errors in this
context and rely on the ability to apply inverted changesets even if
they (intermittently) violate UNIQUE constraints?


They can be ignored I think.

When sqlite3changeset_apply() hits a UNIQUE constraint, it puts the 
change into a "retry buffer". Then, once it has attempted all changes in 
the changeset, it goes back and retries those in the retry buffer. It 
keeps retrying like this until no further progress can be made. So in a 
case like yours - where there does exist an order in which the changes 
can be successfully applied without hitting constraints - it eventually 
succeeds in applying the entire changeset. But, as the sessions module 
works through the normal SQL interface, each time it hits an 
intermittent constraint, an error message is emitted on the log.


Dan.






Thanks a lot for any insights!

Yours,
Daniel

=
Build script:

#!/bin/sh -e

PKGS="sqlite3 libglog"
CFLAGS="`pkg-config --cflags ${PKGS}` -std=c++14 -Wall -Werror -pedantic"
CFLAGS="${CFLAGS} -std=c++14 -Wall -Werror -pedantic"
CFLAGS="${CFLAGS} -DSQLITE_ENABLE_SESSION -DSQLITE_ENABLE_PREUPDATE_HOOK"
LIBS=`pkg-config --libs ${PKGS}`

g++ ${CFLAGS} ${LIBS} test.cpp -o test

=
test.cpp:

/* Test code for UNIQUE keys and inverting SQLite changesets.  */

#include <sqlite3.h>

#include <glog/logging.h>

#include <string>

namespace
{

void
ErrorLogger (void* arg, const int errCode, const char* msg)
{
   LOG (ERROR) << "SQLite error (code " << errCode << "): " << msg;
}

void
Execute (sqlite3* db, const std::string& sql)
{
   VLOG (1) << "Executing SQL:\n" << sql;
   char* err;
   const int rc = sqlite3_exec (db, sql.c_str (), nullptr, nullptr, &err);
   if (rc != SQLITE_OK)
 LOG (FATAL) << "SQL error: " << err;
   sqlite3_free (err);
}

void
Print (sqlite3* db)
{
   const std::string sql = R"(
 SELECT `id`, `value`
   FROM `mytable`
   ORDER BY `id` ASC
   )";

   sqlite3_stmt* stmt;
   CHECK_EQ (sqlite3_prepare_v2 (db, sql.c_str (), sql.size (), &stmt,
nullptr),
 SQLITE_OK);

   while (true)
 {
   const int rc = sqlite3_step (stmt);
   if (rc == SQLITE_DONE)
 break;
   CHECK_EQ (rc, SQLITE_ROW);

   LOG (INFO)
   << "  Row: (" << sqlite3_column_int (stmt, 0)
   << ", " << sqlite3_column_int (stmt, 1) << ")";
 }

   CHECK_EQ (sqlite3_finalize (stmt), SQLITE_OK);
}

int
AbortOnConflict (void* ctx, const int conflict, sqlite3_changeset_iter* it)
{
   LOG (ERROR) << "Changeset application has conflict of type " << conflict;
   return SQLITE_CHANGESET_ABORT;
}

} // anonymous namespace

int
main ()
{
   LOG (INFO)
   << "Using SQLite version " << SQLITE_VERSION
   << " (library version: " << sqlite3_libversion () << ")";

   CHECK_EQ (sqlite3_config (SQLITE_CONFIG_LOG, &ErrorLogger, nullptr),
 SQLITE_OK);

   sqlite3* db;
   CHECK_EQ (sqlite3_open (":memory:", &db), SQLITE_OK);
   LOG (INFO) << "Opened in-memory database";

   Execute (db, R"(
 CREATE TABLE `mytable`
   (`id` INTEGER PRIMARY KEY,
`value` INTEGER,
UNIQUE (`value`));
 INSERT INTO 

Re: [sqlite] [FTS5] Potential table name escape issue with ORDER BY rank

2019-08-20 Thread Dan Kennedy


On 21/8/62 01:06, Matt Haynie wrote:

Hello sqlite-users,

Apologies if this isn’t formatted correctly, I’m not used to submitting bug 
reports via mailing lists.

Although I’m sure some people will be shaking their head, I chose to use 
periods between words in table names. I’ve been careful to escape table names 
everywhere, so this has worked out fine for the most part. However, there is an 
issue when attempting to search an FTS5 table with ORDER BY rank. From the 
below example:

SELECT * FROM "My.Table" WHERE Text MATCH 'table' ORDER BY rank;  -- BUG: near 
"Table": syntax error



Thanks for the thorough bug report. Now fixed here:

  https://sqlite.org/src/info/00e9a8f2730eb723

Dan.




As the comment indicates, this produces the message “near ‘Table’: syntax 
error”. My armchair debugging skills are telling me that it seems to be an 
issue with the table name not being properly escaped. It’s my understanding 
that using ORDER BY bm25(`My.Table`) should be functionally identical to ORDER 
BY rank:

SELECT * FROM "My.Table" WHERE Text MATCH 'table' ORDER BY bm25(`My.Table`);  
-- Works fine



Full example below (Ctrl+F “BUG” for the line that produces the error)

-- Create table
CREATE VIRTUAL TABLE "My.Table" USING fts5(Text);

-- Insert some data
INSERT INTO "My.Table" VALUES ('hello this is a test');
INSERT INTO "My.Table" VALUES ('of trying to order by');
INSERT INTO "My.Table" VALUES ('rank on an fts5 table');
INSERT INTO "My.Table" VALUES ('that have periods in');
INSERT INTO "My.Table" VALUES ('the table names.');
INSERT INTO "My.Table" VALUES ('table table table');

-- Search FTS table - works fine
SELECT * FROM "My.Table" WHERE Text MATCH 'table';

-- Search FTS table with ordering
SELECT * FROM "My.Table" WHERE Text MATCH 'table' ORDER BY bm25(`My.Table`);  
-- Works fine
SELECT * FROM "My.Table" WHERE Text MATCH 'table' ORDER BY rank;  -- BUG: near 
"Table": syntax error

-- Change the table name to remove the period
ALTER TABLE "My.Table" RENAME TO "My_Table";

-- Search FTS table - all of these work perfectly now
SELECT * FROM "My_Table" WHERE Text MATCH 'table';
SELECT * FROM "My_Table" WHERE Text MATCH 'table' ORDER BY rank;
SELECT * FROM "My_Table" WHERE Text MATCH 'table' ORDER BY bm25(`My_Table`);

Thanks,
Matt

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] escaping search terms in queries with bind params

2019-08-07 Thread Dan Kennedy


On 7/8/62 13:50, P Kishor wrote:

Using FTS5 (sqlite3 3.29.x), the following works


SELECT Count(id) AS c FROM t JOIN v ON t.id = v.id WHERE v MATCH 'Trematoda 
awaiting allocation';

but the following fails


SELECT Count(id) AS c FROM t JOIN v ON t.id = v.id WHERE v MATCH 'Trematoda 
(awaiting allocation)';

Error: fts5: syntax error near "("

Since I am doing these queries in a program, and I can’t predict what 
characters might be present in my search term, how can I properly escape the 
query so the following works (showing JavaScript syntax below)

function res(q) {
    const s = 'SELECT Count(id) AS c FROM t JOIN v ON t.id = v.id WHERE v MATCH ?';
    return db.prepare(s).get(q);
}

res('Trematoda (awaiting allocation)');


Define "works". What do you want it to do for this input?

One approach would be to strip out all characters that may not be part 
of fts5 barewords before passing the query to fts5:


https://sqlite.org/fts5.html#fts5_strings

Or you could try the query as input first, then strip out the special 
characters and try again only if the first attempt failed - so that 
users could use advanced syntax if they get it right.
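
If you go the stripping route, the idea is just to replace every character
that cannot appear in an fts5 bareword with a space before binding the
string. A rough C sketch of that (assumes UTF-8 input; a regular-expression
replace would do the same job in JavaScript):

```
#include <ctype.h>

/* Replace, in place, every ASCII character that cannot appear in an
** fts5 bareword with a space. Bytes >= 0x80 (non-ASCII) are left alone. */
void fts5_sanitize(char *z){
  int i;
  for(i=0; z[i]; i++){
    unsigned char c = (unsigned char)z[i];
    if( c<0x80 && !isalnum(c) && c!='_' ){
      z[i] = ' ';
    }
  }
}
```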


Dan.









--
Puneet Kishor
Just Another Creative Commoner
http://punkish.org/About

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] 3.29.0 .recover command

2019-08-06 Thread Dan Kennedy


On 6/8/62 17:26, Olivier Mascia wrote:

On one database instance, a .dump command gives me (among many other lines) 
things like:

 INSERT INTO STATISTICS VALUES(11237795927160,11868);

while the output of .recover command gives me things this way:

 INSERT INTO 'STATISTICS'('_rowid_', STATDATE, DISKUSED) VALUES( 1, 
11237795927160, 11868 );

I'm wondering why there are these differences in the way the instructions to 
rebuild a sound database instance are constructed. What are the (probably rightful) motivations?

1) Why 'STATISTICS' (and not STATISTICS as in the .dump command)? If escaping is 
wanted, why not double quotes instead of single quotes?
2) Why does the insert statement name and repeat, ad nauseam, the column 
names on each insert when, apparently, the shortcut syntax capitalizing on the 
known column order in the schema would be much less verbose?

On the real DB I quickly tested .recover on (with no reason, I have nothing to 
recover, just testing the feature) I had an issue while rebuilding a new DB 
from the script made by .recover. I got foreign key constraint failures (which 
I have not yet traced exactly).

sqlite> .once system.sql
sqlite> .recover

sqlite3 recover.db
sqlite> .read system.sql
Error: near line 14658: FOREIGN KEY constraint failed
Error: near line 14659: FOREIGN KEY constraint failed
Error: near line 14660: FOREIGN KEY constraint failed


Thanks for the report. Looks like the .dump command adds "PRAGMA 
foreign_keys=OFF;" to the output to avoid this. .recover now does this too:


https://sqlite.org/src/info/bfc29e62eff0ed00

Dan.




sqlite> .q

While doing the same kind of work around .dump worked nicely:

sqlite> .once systemd.sql
sqlite> .dump

sqlite3 dump.db
sqlite> .read systemd.sql
sqlite> .q

The source test db passes successfully those tests:

sqlite> pragma integrity_check;
integrity_check
ok
sqlite> pragma foreign_key_check;
sqlite> .dbconfig
enable_fkey on
 enable_trigger on
 fts3_tokenizer off
 load_extension on
   no_ckpt_on_close off
enable_qpsg off
trigger_eqp off
 reset_database off
  defensive off
writable_schema off
legacy_alter_table off
dqs_dml off
dqs_ddl off

Again, I have no recovery to attempt for now. I was just exercising the 
.recover feature for learning, using a db I'm not suspecting of anything.

—
Best Regards, Meilleures salutations, Met vriendelijke groeten, Mit besten 
Grüßen,
Olivier Mascia


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] mode insert dumps

2019-08-02 Thread Dan Kennedy


On 2/8/62 15:04, Luca Ferrari wrote:

Hi,
I've got a lot of database files that I would like to dump using
INSERT statements.
unluckily, .mode insert does not work for me, or I'm not able to
understand how it works.

sqlite> select mude_anno, mude_numero from catdaemo;
INSERT INTO table VALUES(2019,1161);

My questions are:
1) why is the insert operating against "table" instead of the real
table I was selecting from (catdaemo)?


You can add a table name to the ".mode insert":

  sqlite3> .mode insert catdaemo


2) is there a way to force the INSERT to also list the columns I
selected? I would like something like:
INSERT INTO catdaemo( mude_anno, mude_numero ) VALUES(2019,1161);


Try doing:

  sqlite3> .headers on

along with the ".mode insert" command.

Dan.


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Large database backup

2019-08-01 Thread Dan Kennedy


On 1/8/62 19:49, Tammisalo Toni wrote:

Hi!

I have an application which is using sqlite database in WAL mode. There is a 
need for
periodic backups to a remote site without obstructing the normal operation. 
Both read
and write access is required during the backup. At the moment I have a system 
which first blocks checkpoints, as otherwise the backup was restarted too often. 
Backup is done to a remote database implemented with the sqlite vfs layer.

What happens in my tests is that all sqlite3_backup_step()'s complete without 
actually
writing anything to remote database. Only during last step all pages are 
written. This would
be annoying from progress monitoring point of view. However, as database mutex 
is held
during this time it actually blocks all access to the database for a long 
period of time.
Changing the backup step size does not help as all this happens at last step 
regardless.

So, is this a bug? Am I doing something wrong? What I was hoping was that 
backup would
not hold database mutex while it is writing to the other database or at least 
allow splitting the
work with sqlite3_backup_step() so that at least some work could be done while 
backup is
in progress. I actually have strong impression that this worked better with 
some older sqlite
version. Currently using 3.27.2.

Relevant part of the stack trace:

#8  0x005dc870 in sqlite3OsWrite (id=0x7fc1a4120f98, pBuf=0x7fc1a47b0e88, 
amt=, offset=)
 at sqlite3.c:9
#9  pager_write_pagelist (pPager=0x7fc1a41216f8, pList=0x7fc1a47c0ec0) at 
sqlite3.c:54971
#10 0x005bb1a5 in sqlite3PagerCommitPhaseOne (pPager=0x7fc1a41216f8, 
zMaster=0x0, noSync=0)
 at sqlite3.c:57050
#11 0x005b968f in sqlite3_backup_step (p=0x7fc1a4056658, nPage=) at sqlite3.c:74033

Seems that all database pages are written out in pager_write_pagelist() in 
single loop.



The backup process writes through the cache of the destination database. 
So data is only written to disk when either (a) the cache is full or (b) 
the transaction is committed by the last sqlite3_backup_step() call. If 
you reduce the size of the cache used by the destination db writing 
should begin in an earlier sqlite3_backup_step() call.
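
For example, something along these lines - an untested sketch, where the
cache-size and step values are arbitrary:

```
#include <sqlite3.h>
#include <stdio.h>

/* Back up "main" of pSrc into pDst. A small destination cache means
** pages spill to the (remote) VFS during earlier step() calls instead
** of all at once in the final one, which also helps progress reporting. */
int do_backup(sqlite3 *pSrc, sqlite3 *pDst){
  sqlite3_backup *pBackup;
  int rc;

  sqlite3_exec(pDst, "PRAGMA cache_size=64", 0, 0, 0);

  pBackup = sqlite3_backup_init(pDst, "main", pSrc, "main");
  if( pBackup==0 ) return sqlite3_errcode(pDst);

  do{
    rc = sqlite3_backup_step(pBackup, 100);      /* 100 pages at a time */
    printf("remaining %d of %d pages\n",
           sqlite3_backup_remaining(pBackup),
           sqlite3_backup_pagecount(pBackup));
    if( rc==SQLITE_BUSY || rc==SQLITE_LOCKED ) sqlite3_sleep(100);
  }while( rc==SQLITE_OK || rc==SQLITE_BUSY || rc==SQLITE_LOCKED );

  return sqlite3_backup_finish(pBackup);
}
```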


Dan.




Also, I'm open to other suggestions. I was contemplating to just copy the 
database file
directly while WAL checkpoints are not done, but I read some comments that 
suggested that it would not be safe to do that.

Any help or suggestions would be appreciated!

   Toni Tammisalo
   ttamm...@iki.fi
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Bug report: crash when close blob handle after close_v2 db

2019-07-15 Thread Dan Kennedy


On 14/7/62 17:18, Chaoji Li wrote:

This problem is only present for 3.28+. A sample test case is attached.


Thanks for reporting this. We think it's fixed here:

  https://sqlite.org/src/info/52f463d29407fad6

The mailing list stripped off your test case, so if you could either run 
it with the latest SQLite from fossil or else post it inline here so 
that we can run it, that would be very helpful.


Thanks,

Dan.





Basically, the flow is:

1. Open  in-memory db A (we don't do anything about it).
2. Open db B  from file test.db
3. Create a blob handle from B
4. close_v2 A
5. close_v2 B
6. close blob handle -> Segmentation fault

The problem seems to go away if A is not created.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Safe saving of in-memory db to disk file

2019-07-15 Thread Dan Kennedy


On 15/7/62 00:05, ardi wrote:

Hi!

I'm going to use sqlite as the means of file format I/O for
applications. One of the critical points in file I/O is saving the
file in a safe way, so that data loss cannot happen (or at least the
risk of happening is minimized as much as possible). Traditionally,
some applications save their files with a temporary name, so that in
the case of system failure you don't lose the old file and the new
file at the same time, and then, when the file saving is finished, the
old file is deleted, and the temporary one is renamed to replace it.

I have read the backup API page (https://www.sqlite.org/backup.html)
that shows how to read a sqlite db from disk to memory, and how to
save it back to disk, but it doesn't talk about the topic of
performing the save in a safe way.


It's safe by default.

When you use the online backup API, the destination is written using an 
SQLite transaction. So if your app or the system crashes before the 
backup is complete, the transaction is rolled back following recovery.
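
A minimal sketch of saving an in-memory database to a file this way
(error handling trimmed):

```
#include <sqlite3.h>

/* Copy in-memory database pMem into the file zFilename. The copy is
** written inside a single transaction on the destination, so a crash
** part-way through is rolled back following recovery. */
int save_to_disk(sqlite3 *pMem, const char *zFilename){
  sqlite3 *pFile = 0;
  int rc = sqlite3_open(zFilename, &pFile);
  if( rc==SQLITE_OK ){
    sqlite3_backup *p = sqlite3_backup_init(pFile, "main", pMem, "main");
    if( p ){
      sqlite3_backup_step(p, -1);        /* copy everything in one step */
      rc = sqlite3_backup_finish(p);
    }else{
      rc = sqlite3_errcode(pFile);
    }
  }
  sqlite3_close(pFile);
  return rc;
}
```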


Dan.




Do you have any recommendation for saving the inmemory db in a safe
way? (by "safe" I mean I don't want the to lose both the old db file
and the inmemory one --however losing the inmemory db would be
reasonable, as it's what obviously happens in a power outage if you
didn't save before).

Would you do it with the sqlite API, or with the OS system calls?

Another scenario of interest would be if the db is really huge and you
consider the possibility of not overwriting the whole old file, but
just committing the changes, in order to save unnecessary disk writes.
The FAQ explains about atomic sqlite writes into the db that also
prevent data loss... but... can you do that with an inmemory db? how?

Thanks in advance!!

ardi
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Link errors with SQLITE_OMIT_VIRTUALTABLE

2019-07-15 Thread Dan Kennedy


On 14/7/62 15:59, Orgad Shaneh wrote:

Hi,

In reply to 
https://www.mail-archive.com/sqlite-users@mailinglists.sqlite.org/msg113512.html.

Can you please accept this patch?


The patch doesn't seem all that intrusive, but is there a reason you 
can't build from canonical sources instead of sqlite3.c?


If you build from the full source package with SQLITE_OMIT_VIRTUALTABLE 
defined things work as is. And you will save a bit more code space than 
just building sqlite3.c with the same option.


  https://www.sqlite.org/compile.html#_options_to_omit_features

Dan.





Thanks,
- Orgad
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite3_close() drops wal and shm files despite of other processes holding DB open

2019-07-12 Thread Dan Kennedy


On 11/7/62 23:07, Andreas Kretzer wrote:

I'm using SQLITE3 (V3.29.0) on an arm embedded linux (2.6.39) on an ext3
filesystem.

Several processes hold the DB open and the "-wal" and "-shm" files exist.
if I use 'lsof | fgrep ' I can see all processes having all
three
files open. At least one of the processes uses threads, but every process
has just one single DB connection active which is shared among all threads.

The compilation of sqlite3 is done with multithreading in mind:

 sqlite> pragma compile_options;
 COMPILER=gcc-6.2.0
 ENABLE_DBSTAT_VTAB
 ENABLE_FTS4
 ENABLE_JSON1
 ENABLE_RTREE
 ENABLE_STAT3
 ENABLE_STMTVTAB
 ENABLE_UNKNOWN_SQL_FUNCTION
 ENABLE_UPDATE_DELETE_LIMIT
 HAVE_ISNAN
 THREADSAFE=1

I can check, that the database is threadsafe (mode == 1) and is switched
to WAL-mode.

So far I never noticed any problems dealing with concurrent updates or so.
The only thing (tested in depth with V3.15.2 and V3.29.0) is when one
process stops and closes the database using sqlite3_close(). This may even
be the sqlite3 CLI. That process closes DB (lsof shows that this process has
closed its filedescriptors and is not in the listing anymore). Right at the
next write access to the DB in the still running process (at least I
think that
this is exactly the point) the "-wal" and "-shm" files are removed.
The sqlite3_exec() function still returns SQLITE3_OK on all following
actions,
but 'lsof' reports, that this process has opened the "-wal" and "-shm"
files,
but marked as "deleted". And they are really deleted and none of the
upcoming
DB changes will ever reach the real DB.

What is wrong? I already checked, that my kernel supports POSIX file locking
(CONFIG_FILE_LOCKING=yes). What else can I check? Two or more sqlite3 CLI
processes started in parallel don't exhibit this behavior.



Does lsof show that your app has a read-lock on the database file (not 
the *-wal or *-shm files) just before this happens?


Are you executing any PRAGMA statements in the app? "PRAGMA 
locking_mode=none" for example?


Or are you opening/closing the database file directly at all (separate 
from SQLite), causing SQLite's locks to be dropped by this POSIX quirk?


https://sqlite.org/howtocorrupt.html#_posix_advisory_locks_canceled_by_a_separate_thread_doing_close_

Dan.


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Incorrect results after upgrading to 3.28.0

2019-06-26 Thread Dan Kennedy
Thanks for reporting this one. It is a bug. Now fixed here:

  https://sqlite.org/src/info/5fd20e09a522b62a

Ticket:

  https://sqlite.org/src/info/9cdc5c46

Dan.


> We recently updated from version 3.26.0 to version 3.28.0. Now we're seeing
> different (incorrect) results for the following query. Our database is
> available here:
> https://github.com/aspnet/EntityFrameworkCore/raw/f386095005e46ea3aa4d677e4439cdac113dbfb1/test/EFCore.Sqlite.FunctionalTests/northwind.db

> SELECT (
>SELECT "t"."CustomerID"
>FROM (
>SELECT "o"."CustomerID", "o"."OrderDate"
>FROM "Orders" AS "o"
>WHERE ("c"."CustomerID" = "o"."CustomerID") AND "o"."CustomerID" IS NOT
> NULL
>   ORDER BY "o"."CustomerID", "o"."OrderDate" DESC
>LIMIT 2
>) AS "t"
>ORDER BY "t"."CustomerID", "t"."OrderDate" DESC
>LIMIT 1)
> FROM "Customers" AS "c";
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] CVE-2019-5018 patch

2019-06-18 Thread Dan Kennedy


On 18/6/62 19:48, Riccardo Schirone wrote:

Hi,

What is exactly the patch for CVE-2019-5018? I could not see it anywhere in
the release changelog nor in the Cisco Talos report. It would be useful for
distributions that need to backport the fix.


It was this one:

https://sqlite.org/src/info/1e16d3e8fc60d39c

Dan.

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] json path escaping with double quote

2019-06-17 Thread Dan Kennedy


On 13/6/62 23:45, gwenn wrote:

Hello,
With the json1 extension, we can escape special characters like '['
from being interpreted as an array index by wrapping the path in
double quotes. But sometimes, it does not work:


Thanks for reporting this. Now fixed here:

  https://sqlite.org/src/info/45bfcb88e71451a6

Dan.





sqlite> CREATE TABLE test (data TEXT);
sqlite> INSERT INTO test (data) VALUES ('{}');
sqlite> UPDATE test SET data = json_set(data, '$."equity_spot[at_andr]"', json(32));
sqlite> SELECT json_each.key, json_each.value FROM test, json_each(test.data);
"equity_spot[at_andr]"|32
sqlite> -- KO: expected equity_spot[at_andr]|32 but got
"equity_spot[at_andr]"|32
sqlite> DELETE FROM test;
sqlite> INSERT INTO test (data) VALUES ('{"equity_spot[at_andr]":34.3}');
sqlite> UPDATE test SET data = json_set(data, '$."equity_spot[at_andr]"', json(32));
sqlite> SELECT json_each.key, json_each.value FROM test, json_each(test.data);
equity_spot[at_andr]|32
sqlite> -- OK: no double quote

I use json_patch as a workaround.
Is this the expected behaviour ?
Thanks.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] demoRead() function in test_demovfs.c not consistent with documentation

2019-06-15 Thread Dan Kennedy


On 15/6/62 20:08, Dominique Pellé wrote:

Hi

SQLite page https://sqlite.org/c3ref/io_methods.html says
at the bottom:

=== BEGIN QUOTE ===
If xRead() returns SQLITE_IOERR_SHORT_READ it must
also fill in the unread portions of the buffer with zeros. A VFS
that fails to zero-fill short reads might seem to work.
However, failure to zero-fill short reads will eventually lead
to database corruption.
=== END QUOTE ===

Yet, I see that function demoRead() in the demo
VFS example at https://www.sqlite.org/src/doc/trunk/src/test_demovfs.c
does not zero-fill the buffer in case of SQLITE_IOERR_SHORT_READ.

It looks like a bug in the demo code, or is the
SQLite documentation incorrect?



Hi Dominique

I think it's a bug in the demo code. Fixed, along with a problem in the 
xFileControl method, here:


  https://sqlite.org/src/info/ca4ddfefc1be1afb

I'm not sure that breaking this rule can actually lead to database 
corruption, except in the trivial case where the database is 0 bytes in 
size at the start of the transaction. But it's hard to be 100% sure of 
that, and quite impossible to be sure that SQLite won't change to take 
advantage of this part of the specification in the future. So I guess 
new VFS implementations should zero memory when returning 
SQLITE_IOERR_SHORT_READ.


Regards,

Dan.




___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] configure failure for aarch64

2019-06-07 Thread Dan Kennedy


On 7/6/62 20:17, Beat Küng wrote:

Hi

I'm trying to build sqlite (the latest release 3.28.0) for an aarch64
build target and get the following error:

checking build system type...
./config.guess: unable to guess system type
This script, last modified 2007-07-22, has failed to recognize
the operating system you are using. It is advised that you
download the most up to date version of the config scripts from
  
http://savannah.gnu.org/cgi-bin/viewcvs/*checkout*/config/config/config.guess

and
  
http://savannah.gnu.org/cgi-bin/viewcvs/*checkout*/config/config/config.sub

If the version you run (./config.guess) is already up to date, please
send the following data and any information you think might be
pertinent to  in order to provide the needed
information to handle your system.
config.guess timestamp = 2007-07-22
uname -m = aarch64
uname -r = 4.15.0-45-generic
uname -s = Linux
uname -v = #48~16.04.1 SMP Fri Feb 8 10:38:11 UTC 2019
/usr/bin/uname -p =
/bin/uname -X =
hostinfo   =
/bin/universe  =
/usr/bin/arch -k   =
/bin/arch  =
/usr/bin/oslevel   =
/usr/convex/getsysinfo =
UNAME_MACHINE = aarch64
UNAME_RELEASE = 4.15.0-45-generic
UNAME_SYSTEM  = Linux
UNAME_VERSION = #48~16.04.1 SMP Fri Feb 8 10:38:11 UTC 2019


Just downloading the latest config.guess via
wget
'http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.guess;hb=HEAD'
-O config.guess
solves the issue.

Can you update that file in the source tree?


Now updated here:

  https://sqlite.org/src/info/be8438133f6cd87e


Dan.


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Compile time error when compiling extension shathree.c together with sqlite3.c for Android

2019-06-07 Thread Dan Kennedy


On 7/6/62 20:23, Ulrich Telle wrote:

When I append the source code of the SHA3 extension (shathree.c from
ext/misc directory) to the SQLite3 amalgamation (sqlite3.c), and then try to
compile it for Android, I get the following error message:

shathree.c:83:7: error: expected identifier or '('
   u64 B0, B1, B2, B3, B4;
   ^
../toolchains/llvm/prebuilt/windows/sysroot/usr/include\asm-generic/termbits.h
:118:12: note:
   expanded from macro 'B0'
#define B0 000


Are you using up to date source code? It looks like this was fixed back 
in 2017:


  https://sqlite.org/src/info/3ec7371161

Dan.




^

As far as I can tell header file termbits.h is pulled in as a side effect of
including  (which is one of the "standard include files" used in
sqlite3.c).

As a workaround I could add a preprocessor check

#ifdef B0
   #undef B0
#endif

in front of the source code in shathree.c. However, I would prefer a general
solution.

Would it be possible for the SQLite developers to adjust the variable names
used in the extension shathree.c to avoid this name clash with the macro in
termbits.h? Or what else would be the recommended method to handle this
issue?

Regards,

Ulrich
--
E-Mail privat:  ulrich.te...@gmx.de
World Wide Web: http://www.telle-online.de


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Bug in sqlite3.c

2019-06-04 Thread Dan Kennedy


On 4/6/62 12:11, bhandari_nikhil wrote:

Thanks Dan. I had checked the database integrity using the following command:

sqlite3 myfile.db "PRAGMA integrity_check;"



Try "INSERT INTO ft(ft) VALUES('integrity-check')", where "ft" is the 
name of the fts5 table.



And it had reported ok. I will see if I can share the database file here.
Can you let me know how to check the db file (in case I am not able to share
the db file here) ? And how the fts5 can get corrupted ?


The easiest explanation is that the fts5 tables were modified directly, 
bypassing fts5. Or there could be a bug in fts5 - a bug that may or may not 
still be present; there have been fixes since 3.14. A memory related bug in the 
application could also cause this.
 


To run the rebuild command, the ft refers to the db name ?


The fts5 table name.

Dan.
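
A minimal C sketch of the two special commands mentioned in this thread, for readers following along in the archive. The table name "ft", the use of sqlite3_exec(), and the rebuild-on-corruption policy are assumptions, not part of the original exchange:

```
#include <stdio.h>
#include "sqlite3.h"

/* Run the FTS5 'integrity-check' command against an fts5 table named
** "ft"; if the index is reported as corrupt, rebuild it with the
** 'rebuild' command. */
int check_and_repair_fts5(sqlite3 *db){
  int rc = sqlite3_exec(db,
      "INSERT INTO ft(ft) VALUES('integrity-check');", 0, 0, 0);
  if( rc==SQLITE_CORRUPT || rc==SQLITE_CORRUPT_VTAB ){
    fprintf(stderr, "fts5 index is corrupt - rebuilding\n");
    rc = sqlite3_exec(db, "INSERT INTO ft(ft) VALUES('rebuild');", 0, 0, 0);
  }
  return rc;
}
```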



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Bug in sqlite3.c

2019-06-03 Thread Dan Kennedy


On 3/6/62 12:51, bhandari_nikhil wrote:

  I am facing a crash in sqlite3fts5BufferAppendBlob. Following is the
backtrace:

#0  sqlite3Fts5BufferAppendBlob (pRc=pRc@entry=0xf54139c8,
pBuf=pBuf@entry=0xf5afeb90, nData=4294967295, pData=0xf49fff76 "90246ture")
at sqlite3.c:180474

#1  0xf717b6f8 in fts5WriteAppendTerm (p=p@entry=0xf54139a8,
pWriter=pWriter@entry=0xf5afeb84, nTerm=5, pTerm=0xf49fff70
"06280290246ture") at sqlite3.c:188868

#2  0xf717bf29 in fts5IndexMergeLevel (p=p@entry=0xf54139a8,
ppStruct=ppStruct@entry=0xf5afec3c, iLvl=3, pnRem=0xf5afec38) at
sqlite3.c:189176

There is an apparent bug in sqlite3Fts5BufferAppendBlob: it asserts a check
on nData < 0, but nData is actually declared u32. The nData parameter should
be int, not u32. I am using version 3.14.0.100 but the bug is present in the
latest version as well.

Also, if you notice in frame #0, the nData passed is 0xffffffff, which is -1.
It was calculated to be -1 in frame #1 where it did nTerm - nPrefix. The
nPrefix value came out to be 6 and nTerm was 5. I want to know when this
nPrefix becomes > nTerm ?


Thanks for reporting this.

I think that can only happen if the FTS5 records stored in the database 
are corrupt. If you are able to share the database I can check for you. 
You can probably repair the index using the following:


  https://sqlite.org/fts5.html#the_rebuild_command

Also, I would have thought this crash would have been fixed by this 
change, which is in 3.28.0:


  https://sqlite.org/src/info/673a7dd698

Have you demonstrated the crash with the latest version, or just 
eyeballed the code?


Cheers then,

Dan.







Regards
Nikhil Bhandari



--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Session extension questions

2019-06-03 Thread Dan Kennedy


On 31/5/62 04:54, Sebastien Boisvert wrote:

I've got a couple of general questions about the session extension, 
specifically about applying changesets/patchsets:

- Assuming little to no conflicts, what's the performance of applying a 
changeset to a database; let's say for 1 (or multiples thereof) changes 
across multiple tables. Would it be significantly faster than a best-case 
INSERT/UPDATES of the same amount of changes?



It uses prepared statements for all operations, so it will be faster 
than running an SQL script for that reason, but no faster than any other 
app that uses prepared statements.




- While a changeset is being applied, is the whole database locked? Only some 
tables?


Whole database, same as for any other SQLite transaction.


- Does applying a changeset trigger any triggers a table might have, and if so 
when does that happen (during/after)?



As each row is inserted/updated/deleted. Before or after the operation 
depending on whether the trigger is declared as BEFORE or AFTER.


Dan.
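
As a point of reference, a hedged sketch of what applying a changeset looks like from C. The always-OMIT conflict policy and the absence of a table filter are illustrative assumptions only, and the build must have the session extension enabled:

```
#include "sqlite3.h"

/* Conflict handler: skip any change that conflicts with existing data. */
static int xConflict(void *pCtx, int eConflict, sqlite3_changeset_iter *pIter){
  (void)pCtx; (void)eConflict; (void)pIter;
  return SQLITE_CHANGESET_OMIT;
}

/* Apply a changeset previously recorded with sqlite3session_changeset().
** The apply runs under the normal SQLite locking rules discussed above. */
int apply_changeset(sqlite3 *db, int nChangeset, void *pChangeset){
  return sqlite3changeset_apply(db, nChangeset, pChangeset,
                                0,            /* xFilter: apply to all tables */
                                xConflict, 0);
}
```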




___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] LSM Extension Performance (from sqlite3 tree)

2019-05-16 Thread Dan Kennedy


On 16/5/62 18:47, Amirouche Boubekki wrote:

Hello,


I am considering using the lsm extension for a project. I did a few benchmarks
with the following configuration:

LSM_CONFIG_AUTOFLUSH 1048576
LSM_CONFIG_BLOCK_SIZE 65536
LSM_CONFIG_AUTOWORK 1
LSM_CONFIG_MMAP 0
LSM_CONFIG_MULTIPLE_PROCESS 0

I am trying to load 4.5GB of data. The database fails with a BUSY error at
some point.

Also, the memory consumption grows without bound even though I disabled MMAP.

Here are some stats:

$ ls -larh wt/foob.ar.lsm.sqlite*

-rw-r--r-- 1 amirouche amirouche 618M May 16 13:39 wt/foob.ar.lsm.sqlite-log
-rw-r--r-- 1 amirouche amirouche 468M May 16 13:39 wt/foob.ar.lsm.sqlite

And I attached a memory plot.

Opening and closing the database at each transaction helps, but the memory
still grows.

What am I doing wrong?


The mailing list has discarded your attachment. But what we need most to 
try to diagnose the memory leak is the code you are using. If we can 
actually run it and reproduce the memory leak, so much the better.


Dan.




___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Partial NOT NULL index malfunctions with IS NOT/!=

2019-05-11 Thread Dan Kennedy


On 11/5/62 16:54, Manuel Rigger wrote:

Hi everyone,

I found another test case that demonstrates a malfunctioning index:

CREATE TABLE IF NOT EXISTS t0 (c0);
CREATE INDEX IF NOT EXISTS i0 ON t0(1) WHERE c0 NOT NULL;
INSERT INTO t0(c0) VALUES(NULL);
SELECT * FROM t0 WHERE t0.c0 IS NOT 1; -- returns no row



Thanks for this. Should be fixed now. Ticket here:

  https://sqlite.org/src/tktview/80256748471a01

Dan.




If the index is created, no rows are fetched. Without the index, the NULL
row is returned.

I think that this looks like a rather general pattern that could be used in
practice.

Best,
Manuel
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] ALTER TABLE fails when renaming an INTEGER PRIMARY KEY column in a WITHOUT ROWID table

2019-05-06 Thread Dan Kennedy


On 6/5/62 16:42, Manuel Rigger wrote:

Hi everyone,

the following example fails with an error "no such column: c0":

CREATE TABLE t0 (c0 INTEGER, PRIMARY KEY (c0)) WITHOUT ROWID;
ALTER TABLE t0 RENAME COLUMN c0 TO c1;


Thanks again for the bug reports. This one is now fixed here:

https://sqlite.org/src/info/91f701d39852ef1ddb29

Dan.





However, specifying c0 as the PRIMARY KEY in the column definition rather
than in a table constraint seems to work:

CREATE TABLE t0 (c0 INTEGER PRIMARY KEY) WITHOUT ROWID;
ALTER TABLE t0 RENAME COLUMN c0 TO c1;

Best,
Manuel
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] .expert output help

2019-04-26 Thread Dan Kennedy


On 26/4/62 23:56, Jose Isaias Cabrera wrote:

this query. If you try ".expert -verbose", it will tell you the other

This is the output from --verbose

sqlite> .expert --verbose
sqlite>  SELECT a.*,sum(b.AnnualDossier) as Dossier FROM Project_List AS a
...>  LEFT JOIN Project_ABT_Budget AS b ON a.ProjID = b.ProjID
...>  AND
...>  b.InsertDate =
...>  (SELECT MAX(InsertDate) FROM Project_ABT_Budget WHERE b.ProjID = 
ProjID)
...>  WHERE a.ProjID IN
...>  (
...>  SELECT a.ProjID FROM Project_List WHERE 1=1
...>  AND lower(a.Manager) LIKE '%diggs%'
...>  ) AND a.InsertDate =
...> (SELECT MAX(InsertDate) FROM Project_List WHERE ProjID = a.ProjID)
...>
...>  GROUP BY a.ProjID;
-- Candidates -
(null)
...

Maybe Candidates should say something else other than (null).


That means it couldn't even come up with anything to try - you already 
have indexes for all WHERE constraints and ORDER/GROUP BY terms in the 
query. It should probably say "(no candidates found)", or something 
along those lines.


Dan.



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] .expert output help

2019-04-26 Thread Dan Kennedy


On 26/4/62 21:30, Jose Isaias Cabrera wrote:

Greetings.

sqlite> .expert
sqlite>  SELECT a.*,sum(b.AnnualDossier) as Dossier FROM Project_List AS a
...>  LEFT JOIN Project_ABT_Budget AS b ON a.ProjID = b.ProjID
...>  AND
...>  b.InsertDate =
...>  (SELECT MAX(InsertDate) FROM Project_ABT_Budget WHERE b.ProjID = 
ProjID)
...>  WHERE a.ProjID IN
...>  (
...>  SELECT a.ProjID FROM Project_List WHERE 1=1
...>  AND lower(a.Manager) LIKE '%jic%'
...>  ) AND a.InsertDate =
...> (SELECT MAX(InsertDate) FROM Project_List WHERE ProjID = a.ProjID)
...>
...>  GROUP BY a.ProjID;
(no new indexes)

SCAN TABLE Project_List AS a USING INDEX ProjID_InsertDate
CORRELATED LIST SUBQUERY 2
SCAN TABLE Project_List USING COVERING INDEX Project_Name
CORRELATED SCALAR SUBQUERY 3
SEARCH TABLE Project_List USING COVERING INDEX ProjID_InsertDate (ProjID=?)
SEARCH TABLE Project_ABT_Budget AS b USING COVERING INDEX 
sqlite_autoindex_Project_ABT_Budget_1 (ProjID=?)
CORRELATED SCALAR SUBQUERY 1
SEARCH TABLE Project_ABT_Budget USING COVERING INDEX 
PAB_ProjIDInsertDateProjIDInsertDate (ProjID=?)

I have a few questions for the .experts :-),



Hey! Somebody tried it out! Thanks! :)


-- On line 5 of the resulted output ,

SCAN TABLE Project_List USING COVERING INDEX Project_Name

Why is it scanning the table using that INDEX if there is no "Project_Name" 
referenced in the query? That is one of the fields of Project_List, but it is not being 
used now.


There's either an error or a strange construction created by a program 
in this part of the query:


 WHERE a.ProjID IN
   ...>  (
   ...>  SELECT a.ProjID FROM Project_List WHERE 1=1
   ...>  AND lower(a.Manager) LIKE '%jic%'
   ...>  )


The sub-query scans table "Project_List", but doesn't use any columns 
from it (both a.ProjID and a.Manager are from the outer query). So, to 
do the scan, SQLite is choosing the index it thinks will require the 
minimum IO. i.e. one on very few fields.



-- How can I create the INDEX on line 8 that sqlite_autoindex created?


You have already done so. Index "sqlite_autoindex_Project_ABT_Budget_1" 
is actually a PRIMARY KEY or UNIQUE constraint within the definition of 
table "Project_ABT_Budget".


In this case, the key piece of output is "(no new indexes)". This means 
that ".expert" thinks you have already created the optimal indexes for 
this query. If you try ".expert -verbose", it will tell you the other 
indexes it considered. Or, if you run it on a version of your db that 
has no indexes at all, you can see that it will (hopefully!) recommend 
indexes equivalent to those you already have.


For posts like these, unless you're constrained by company rules or some 
other similar consideration, it's good to include the entire database 
schema (use the shell tool's ".schema" command). It makes it easier to 
answer questions regarding specific queries and allows people to 
recreate your experiment themselves if they wish to investigate further.


Dan.




Thanks for your knowledge sharing.

josé

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Will ROWID be required to be unique in contentless Full text search

2019-04-25 Thread Dan Kennedy


On 25/4/62 07:58, paul tracy wrote:

My question is whether or not I can rely on my use of non-unique rowids in FTS5 
contentless tables to remain unenforced into the future of SQLite or am I 
exploiting an oversight that may be resolved in the future?
Details ...
I have a key-value pair SQLite table (called props) used for storing properties 
of objects. It has the following columns: objID, propCode, propVal
objID + propCode is my primary key.

I have created a contentless FTS5 virtual table as follows: create virtual 
table fti using FTS5(ftiCode, ftiVal, content='');
for every record in props table a corresponding insert is made to the fti 
table, I insert the props.objID into fti.rowid.
Since any given objID in my props table can have multiple props I wind up 
making multiple inserts into the fti table with a non-unique rowid (since I'm 
using rowid to store the objid). This works perfectly fine in all use cases so 
my app runs perfectly.
All of the docs on rowid say that it is unique but in the FTS5 contentless 
table this unique requirement is not enforced. Since I have to provide all 3 of 
the initial fields whenever a delete is required then this also works to remove 
only the exact record from FTI. I view the rowid in the FTS contentless table 
as simply holding whatever value I send to it and it will return that value in 
a field named rowid but that field isn't really a rowid in the true sense. If 
this interpretation will remain valid into the future then my implementation is 
perfect.

For the curious: Why am I doing this instead of something more traditional? 
Mostly it has to do with elimination of duplicate returns from the FTS5 search. 
But also contentless to save space. When I query the FT index I am interested 
in which objects contain given property values. My table has millions of rows 
so I'm sensitive to space and speed.
If I insert the following rows into fti rowid, ftiCode, ftiValue:
3, 7, A
3, 8, B
and then I ... SELECT rowid FROM FTI where FTI MATCH A OR B … I only get one 
row returned with rowid of 3. This was a surprise but was exactly what I 
wanted. A more traditional approach to using FTI would have resulted in two 
rows returned both with the same ID field (object ID in my case). By exploiting 
the fact that the FTS5 does not enforce unique rowids I save myself the expense 
of using DISTINCT.


Well, I can tell you that a single query on an FTS5 will never return 
two rows with the same rowid. Or, more accurately, if it does it 
indicates a bug that would affect all users, not just folks doing 
unusual things. That's just the nature of the data structure.


Other things:

I think you should add the "columnsize=0" option to the CREATE VIRTUAL 
TABLE statement. Without this option FTS5 creates a table that maps from 
rowid to the size in tokens of each column in the associated row (in 
your case, it will be named "fti_docsize"). This isn't going to work for 
you - as the second time you insert a row with rowid=3 the size values 
will clobber the values set when you inserted the first row with 
rowid=3. It's only apps using specific ranking functions that benefit 
from this extra data structure, so you can probably live without it:


https://sqlite.org/fts5.html#the_columnsize_option

You might also add the "detail=none" option, so that entire CREATE 
VIRTUAL TABLE statement is:


  CREATE VIRTUAL TABLE fti USING fts5(ftiCode, ftiVal, content='', 
columnsize=0, detail=none);


This option means that instead of storing offset information, FTS5 just 
stores a list of rowids for each token. This might be better for you as 
the offset information isn't going to work quite right either. Say you do:


  INSERT INTO fti VALUES(3, 'B', 'A B');
  INSERT INTO fti VALUES(3, 3, 'B');

The second insert will clobber the offset information for "rowid=3, 
token=B". So FTS5 will think that the only instance of "B" in the 
rowid=3 document is token 0 of column "ftiVal". It will forget all about 
the instances associated with the first insert of a rowid=3 row. But it 
will remember that the rowid=3 document contains an 'A' as the first 
token of column 'ftiVal'. If you set detail=none, then FTS5 won't store 
any offset information and you won't have a problem. Like columnsize=0, 
this option saves disk space too.


Good luck then,

Dan.


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Query Regression IN and Virtual Tables - followup

2019-04-05 Thread Dan Kennedy


On 5/4/62 16:44, Hick Gunter wrote:

I patched my SQLite 3.24 code to include the fix from the ticket

<   if( rc==SQLITE_OK && (mBest = (pNew->prereq & ~mPrereq))!=0 ){
---

// from SQLite bugfix
  if( rc==SQLITE_OK && ((mBest = (pNew->prereq & ~mPrereq))!=0 || bIn) ){

and changed the xBestIndex return value to be lower if the equality constraint 
from IN is not usable

The generated code as reported is invalid (instruction 16 with the init of R6 
is not shown)



So, after applying the patch to 3.24 you executed the EXPLAIN statement 
shown below in the shell tool and it mysteriously omitted instruction 16 
from the output?


Are there any other problems? Does the SQL statement return the correct 
results if you execute it without the EXPLAIN?


Dan.






explain select lsn from atx_txlog where period_no between 7300 and 7313 and 
event_Type in (140001,180001);

addr  opcode p1p2p3p4 p5  comment
  -        -  --  -
0 Init   0 16000  Start at 16
1 VOpen  0 0 0 vtab:B90B5000
2 Explain2 0 0 SCAN TABLE atx_txlog VIRTUAL TABLE INDEX 
1:
   00
3 Integer7300  3 000  r[3]=7300
4 Integer7313  4 000  r[4]=7313
5 Integer1 1 000  r[1]=1
6 Integer2 2 000  r[2]=2
7 VFilter0 151
   00  iplan=r[1] zplan='
'
8   Noop   0 0 000  begin IN expr
9   VColumn0 15500  r[5]=vcolumn(15); 
atx_txlog.event_type
10  Eq 5 126 (BINARY)   43  if r[6]==r[5] goto 
12
11  Ne 5 147 (BINARY)   53  if r[7]!=r[5] goto 
14; end IN expr
12  VColumn0 21800  r[8]=vcolumn(21); 
atx_txlog.lsn
13  ResultRow  8 1 000  output=r[8]
14VNext  0 8 000
15Halt   0 0 000
17Integer180001  7 000  r[7]=180001
18Goto   0 1 000

-Ursprüngliche Nachricht-
Von: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] Im 
Auftrag von Dan Kennedy
Gesendet: Freitag, 29. März 2019 14:30
An: sqlite-users@mailinglists.sqlite.org
Betreff: [EXTERNAL] Re: [sqlite] Query Regression IN and Virtual Tables


On 29/3/62 14:32, Hick Gunter wrote:

When upgrading from 3.7.14.1 to 3.24 I noticed the following problem

Given a virtual table like

CREATE VIRTUAL TABLE vt ( key1 INTEGER, key2 INTEGER, key3 INTEGER,
attr1 INTEGER,...);

whose xBestIndex function simulates (in unsupported syntax)

CREATE VIRTUAL INDEX vt_key ON vt (key1, key2, key3);

but also handles simple comparisons internally, the query

SELECT * FROM vt WHERE key1 =  AND key2 BETWEEN  AND
 AND attr1 IN ();


Thanks for reporting this.

What is supposed to happen in this scenario is that xBestIndex() is invoked 
once with all 4 constraints marked as usable. The IN(...) is represented as an 
SQLITE_INDEX_CONSTRAINT_EQ constraint. If the
xBestIndex() implementation elects to use the IN(...) operator, then
xBestIndex() is invoked a second time, this time with the IN(...) marked as not 
usable. SQLite evaluates both plans, considering the cost estimates provided by 
the virtual table implementation and its own estimate of the cardinality of the 
IN(...) operator. And chooses the most efficient plan overall.

There was a bug preventing the second call to xBestIndex() from being made in 
some circumstances - including for your query. Now fixed here:

https://sqlite.org/src/info/f5752517f590b37b

So if you upgrade to trunk, or else apply the patch linked above to 3.27.2, and 
the virtual table implementation provides relatively accurate cost estimates, 
SQLite should make an intelligent decision about which plan to use.

Dan.



SQLite 3.7.14.1 asks for (key1,'='), (key2,'>='), (key2,'<=') and
xBestIndex accepts all 3 constraints yielding query plan

- materialize IN  as anonymous ephemeral table
- search vt using index 1 (key1 = ? AND key2 >= ? AND key2 <= ?)
- retrieve column attr1
- search anonymous ephemeral table

i.e. perform a single partial table scan on vt and check attr1


SQLite3.24 asks for ...,(attr1,'=') and xBestIndex accepts all 4
constraints yielding

- materialize IN () as anonymous ephemeral table
- scan anonymous ephemeral table
- search vt using index 1 (key1 = ? AND key2 >= ? AND key2 <= ? AND
attr1 = ?)

i.e. perform a partial table scan of vt FOR EACH attr1, which is
slower by the cardinality of the IN list

Fortunately, CTEs come to the rescue:

WITH attrs (attr1) AS (VALUES <(l

Re: [sqlite] Row locking sqlite3

2019-03-29 Thread Dan Kennedy


On 28/3/62 01:04, Thomas Kurz wrote:

I wonder whether SQLite is treating each DELETE as a single transaction.  Could 
you try wrapping the main delete in BEGIN ... END and see whether that speeds 
up the cascaded DELETE ?  Would you be able to find timings (either in your 
code or in the command-line tool) and tell us whether it's the DELETE or the 
END which takes the time ?

Ok, well very interesting and I'd never have had this idea, but indeed it 
works: within a transaction, it takes only a few seconds. This is very 
surprising as to me, a single DELETE statement is nothing more than that: a 
single atomic operation which should automatically be treated as a transaction 
(auto-commit-mode).

*confused*



Me too. For the BEGIN/COMMIT version, you're counting the time spent in 
COMMIT as well, correct?


If this is repeatable, we'd be very interested in figuring out what is 
going on.


Dan.
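
A rough, hedged sketch of the measurement being asked for here, timing the DELETE and the COMMIT separately. It assumes a POSIX clock_gettime(); the table name and WHERE clause are placeholders, not from the thread:

```
#include <stdio.h>
#include <time.h>
#include "sqlite3.h"

/* Execute one statement and return the elapsed wall-clock time in seconds. */
static double timed_exec(sqlite3 *db, const char *zSql){
  struct timespec t0, t1;
  clock_gettime(CLOCK_MONOTONIC, &t0);
  sqlite3_exec(db, zSql, 0, 0, 0);
  clock_gettime(CLOCK_MONOTONIC, &t1);
  return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

/* Time the cascading DELETE and the COMMIT independently. */
void time_cascaded_delete(sqlite3 *db){
  printf("BEGIN:  %.3fs\n", timed_exec(db, "BEGIN;"));
  printf("DELETE: %.3fs\n", timed_exec(db, "DELETE FROM parent WHERE id=1;"));
  printf("COMMIT: %.3fs\n", timed_exec(db, "COMMIT;"));
}
```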



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] is this possible

2019-03-29 Thread Dan Kennedy


On 29/3/62 03:00, Igor Tandetnik wrote:

On 3/28/2019 3:21 PM, Mark Wagner wrote:

Imagine I have these two tables and one view defining a join.

CREATE TABLE t (foo);
CREATE TABLE s (bar);
CREATE VIEW v as select * from t join s on (foo = q);


Surprisingly, this last statement succeeds. But if you then close the 
database and try to open it again, it'll fail with "no such column: 
q". So, don't do this - you are creating an unusable database file 
with corrupted schema.



The error doesn't occur unless you actually query the view though, correct?

Here, I get:

  $ ./sqlite3 x.db
  SQLite version 3.27.2 2019-02-25 16:06:06
  Enter ".help" for usage hints.
  sqlite> CREATE TABLE t (foo);
  sqlite> CREATE TABLE s (bar);
  sqlite> CREATE VIEW v as select * from t join s on (foo = q);
  sqlite>
  $ ./sqlite3 x.db
  SQLite version 3.27.2 2019-02-25 16:06:06
  Enter ".help" for usage hints.
  sqlite> SELECT * FROM t;
  sqlite> SELECT * FROM s;
  sqlite> SELECT * FROM v;
  Error: no such column: q


Dan.



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Query Regression IN and Virtual Tables

2019-03-29 Thread Dan Kennedy


On 29/3/62 14:32, Hick Gunter wrote:

When upgrading from 3.7.14.1 to 3.24 I noticed the following problem

Given a virtual table like

CREATE VIRTUAL TABLE vt ( key1 INTEGER, key2 INTEGER, key3 INTEGER, attr1 
INTEGER,...);

whose xBestIndex function simulates (in unsupported syntax)

CREATE VIRTUAL INDEX vt_key ON vt (key1, key2, key3);

but also handles simple comparisons internally, the query

SELECT * FROM vt WHERE key1 =  AND key2 BETWEEN  AND  AND attr1 IN 
();



Thanks for reporting this.

What is supposed to happen in this scenario is that xBestIndex() is 
invoked once with all 4 constraints marked as usable. The IN(...) is 
represented as an SQLITE_INDEX_CONSTRAINT_EQ constraint. If the 
xBestIndex() implementation elects to use the IN(...) operator, then 
xBestIndex() is invoked a second time, this time with the IN(...) marked 
as not usable. SQLite evaluates both plans, considering the cost 
estimates provided by the virtual table implementation and its own 
estimate of the cardinality of the IN(...) operator. And chooses the 
most efficient plan overall.


There was a bug preventing the second call to xBestIndex() from being 
made in some circumstances - including for your query. Now fixed here:


  https://sqlite.org/src/info/f5752517f590b37b

So if you upgrade to trunk, or else apply the patch linked above to 
3.27.2, and the virtual table implementation provides relatively 
accurate cost estimates, SQLite should make an intelligent decision 
about which plan to use.


Dan.
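
To illustrate the two-pass behaviour described above, here is a hedged sketch of an xBestIndex() that only claims the attr1 equality (the one derived from IN(...)) when it is marked usable, and prices the two plans accordingly. The cost values are illustrative assumptions, attr1 is assumed to be column 3 as in the CREATE VIRTUAL TABLE quoted below, and the BETWEEN (GE/LE) constraints are ignored for brevity:

```
#include "sqlite3.h"

static int vtabBestIndex(sqlite3_vtab *pVTab, sqlite3_index_info *pInfo){
  int i, nArg = 0, bUsesAttr1 = 0;
  (void)pVTab;
  for(i=0; i<pInfo->nConstraint; i++){
    const struct sqlite3_index_constraint *p = &pInfo->aConstraint[i];
    if( !p->usable ) continue;     /* cleared for the IN(...) EQ on the 2nd pass */
    if( p->op!=SQLITE_INDEX_CONSTRAINT_EQ ) continue;
    pInfo->aConstraintUsage[i].argvIndex = ++nArg;
    pInfo->aConstraintUsage[i].omit = 1;
    if( p->iColumn==3 ) bUsesAttr1 = 1;   /* attr1 is column 3 in this sketch */
  }
  /* Report a cheaper plan when the attr1 equality is usable. SQLite weighs
  ** this against the other xBestIndex() call and its own estimate of the
  ** cardinality of the IN(...) list. */
  pInfo->estimatedCost = bUsesAttr1 ? 10.0 : 1000.0;
  return SQLITE_OK;
}
```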




SQLite 3.7.14.1 asks for (key1,'='), (key2,'>='), (key2,'<=') and xBestIndex 
accepts all 3 constraints yielding query plan

- materialize IN  as anonymous ephemeral table
- search vt using index 1 (key1 = ? AND key2 >= ? AND key2 <= ?)
   - retrieve column attr1
   - search anonymous ephemeral table

i.e. perform a single partial table scan on vt and check attr1


SQLite3.24 asks for ...,(attr1,'=') and xBestIndex accepts all 4 constraints 
yielding

- materialize IN () as anonymous ephemeral table
- scan anonymous ephemeral table
   - search vt using index 1 (key1 = ? AND key2 >= ? AND key2 <= ? AND attr1 = 
?)

i.e. perform a partial table scan of vt FOR EACH attr1, which is slower by the 
cardinality of the IN list

Fortunately, CTEs come to the rescue:

WITH attrs (attr1) AS (VALUES <(list)>) SELECT * FROM vt CROSS JOIN attrs a ON (a.attr1 = 
vt.attr1) WHERE key1 =  AND key2 BETWEEN  AND 

This prevents SQLite 3.24 from adding the last constraint, yielding

- materialize IN (<(list)>) as epehemeral table attrs
- search vt using index 1 (key1 = ? AND key2 >= ? AND key2 <= ?)
   - retrieve column attr1
   - scan ephemeral table attrs

The only issue is that the previously generated ephemeral table was implemented 
as a covering index (allowing the sequence IfNull, Affinity, NotFound) whereas 
the named ephemeral table is implemented as a table (requiring a full table 
scan of the ephemeral table, even though at most 1 row can match)

Optimisation opportunity:

32  Rewind 1 40000
33Column 1 0 10   00  
r[10]=events.event_type
34VColumn0 1511   00  
r[11]=vcolumn(15); atx_txlog.event_type
35Ne 113810(BINARY)   53  if r[10]!=r[11] 
goto 38
36VColumn0 6 12   00  r[12]=vcolumn(6); 
atx_txlog.sync_offset
37ResultRow  121 000  output=r[12]
38  Next   1 33001

Could IMHO be rewritten as

32  VColumn0 1511   00  r[11]=vcolumn(15); 
atx_txlog.event_type
33  Rewind 1 40000
34Column 1 0 10   00  
r[10]=events.event_type
35Ne 113810(BINARY)   53  if r[10]!=r[11] 
goto 38
36VColumn0 6 12   00  r[12]=vcolumn(6); 
atx_txlog.sync_offset
37ResultRow  121 000  output=r[12]
38  Next   1 33001


___
  Gunter Hick | Software Engineer | Scientific Games International GmbH | 
Klitschgasse 2-4, A-1130 Vienna | FN 157284 a, HG Wien, DVR: 0430013 | (O) +43 
1 80100 - 0

May be privileged. May be confidential. Please delete if not the addressee.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] WAL grows without bounds, short concurrent writes & reads

2019-03-29 Thread Dan Kennedy


There's code here, if you want to experiment with it:

  https://sqlite.org/src/timeline?r=wal2

Docs:

  https://sqlite.org/src/artifact/a807405a05e19a49

Dan.



On 29/3/62 01:33, Florian Uekermann wrote:

Hi,

A very simple reproducer bash script using the sqlite3 CLI is appended 
at the end.


I am using WAL mode in a setting with sequential writes and many 
concurrent reads. Due to WAL mode the readers don't get blocked, which 
is great and since writes are sequential, they never get blocked either.
However, I am seeing the WAL grow on every write without ever 
restarting at the beginning of the file if there is a constant influx 
of new reads (with limited lifetime).
This causes the WAL file to grow to many GB within minutes, even if 
the database state fits into a few MB or even kB after closing all 
connections.


The output of "PRAGMA wal_checkpoint(PASSIVE);" usually looks like 
this: "0|123|123", which I interpret as the checkpointer being caught 
up on the current state. I believe the reason that new writes are 
appended at the end, instead of restarting the WAL, is that while 
reads are short-lived, there is always at least one going on, so the 
log of the last write has to be kept, which in turn prevents a reset 
of the WAL.


An example read (r) write (w) pattern could look like this (b: begin, 
e: end):

r1b; w1; r2b; r1e; w2; r2b; w3; r3b; r2e ...

A solution could be to start a second WAL when the first one exceeds 
some size threshold, which would allow resetting the first one after 
all readers finish that started before the wal_checkpoint finished, 
even if there are new writes in the second WAL. Then the roles/order 
of the two WALs flipped, allowing the second WAL to be reset 
regardless of read/write frequency.
I believe that would limit the total WAL size to about 2 times the 
size of the writes happening within the timespan of a single read.


This solution has been suggested previously on this list by Mark 
Hamburg, but the threads lack a simple reproducer and detailed problem 
description.


Best regards,
Florian

Test script:

rm -f test.sqlite
./sqlite3 test.sqlite <<< "
PRAGMA journal_mode=WAL;
CREATE TABLE t (value INTEGER);
REPLACE into t (rowid, value) VALUES (1,0);
"

for i in {1..1000}
do

./sqlite3 test.sqlite <<< "
BEGIN;
SELECT value FROM t WHERE rowid=1;
.system sleep 0.2
SELECT value FROM t WHERE rowid=1;
COMMIT;
" &

sleep 0.1
./sqlite3 test.sqlite <<< "
BEGIN;
REPLACE into t (rowid, value) VALUES (1,$i);
.print inc
COMMIT;
"
wc -c test.sqlite-wal
done

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] FTS5 Transaction Leads to OOB Read

2019-03-19 Thread Dan Kennedy


That's the error I got too. Now fixed here:

  https://sqlite.org/src/info/b3fa58dd7403dbd4

Dan.



On 19/3/62 04:22, Dominique Pellé wrote:

Chu  wrote:


The code:

```
CREATE VIRTUAL TABLE t1 USING fts5(content);

BEGIN;
 INSERT INTO t1 (content) VALUES('');
 SELECT * FROM t1('*');
END;
```

As you can see, it creates a virtual table with fts5, and run a transaction on 
it, this will leads to a OOB READ. The ASAN report:

```
➜  sqlite-crashes ../sqlite-autoconf-3270200/sqlite3 < 2-oob-read.sql
=
==21007==ERROR: AddressSanitizer: heap-buffer-overflow on address 
0x60d02898 at pc 0x7f0cad16e6a3 bp 0x7ffdc88ddc80 sp 0x7ffdc88dd430
READ of size 81 at 0x60d02898 thread T0
 #0 0x7f0cad16e6a2  (/lib/x86_64-linux-gnu/libasan.so.5+0xb86a2)
 #1 0x563324ca4013 in fts5HashEntrySort 
/root/Documents/sqlite-autoconf-3270200/sqlite3.c:207762
 #2 0x563324e685f9 in sqlite3Fts5HashScanInit 
/root/Documents/sqlite-autoconf-3270200/sqlite3.c:207820
 #3 0x563324e685f9 in fts5SegIterHashInit 
/root/Documents/sqlite-autoconf-3270200/sqlite3.c:210321

...snip..

Just to confirm that there is a bug when running your
queries in the SQLite-3.27.2 shell with valgrind.

I did not get a heap overflow, but valgrind complains
about uninitialized memory in the same fts5HashEntrySort
function as in your stack:

$ valgrind --track-origins=yes --num-callers=50 sqlite3_shell
==10856== Memcheck, a memory error detector
==10856== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==10856== Using Valgrind-3.14.0 and LibVEX; rerun with -h for copyright info
==10856== Command: ./Output/Binary/x86_64-Linux-clang/Debug/bin/sqlite3_shell
==10856==
SQLite version 3.27.2 2019-02-25 16:06:06
NDSeV devkit 3.27.2.1 2019-02-26 16:04:39 990c4f90c3340db5
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> CREATE VIRTUAL TABLE t1 USING fts5(content);
sqlite> BEGIN;
sqlite> INSERT INTO t1 (content) VALUES('');
sqlite> SELECT * FROM
t1('*');
==10856== Conditional jump or move depends on uninitialised value(s)
==10856==at 0x4C362B2: __memcmp_sse4_1 (vg_replace_strmem.c:)
==10856==by 0x4ECC86: fts5HashEntrySort (nds_sqlite3.c:207811)
==10856==by 0x4EC55D: sqlite3Fts5HashScanInit (nds_sqlite3.c:207869)
==10856==by 0x4EBB45: fts5SegIterHashInit (nds_sqlite3.c:210370)
==10856==by 0x4EB1BE: fts5MultiIterNew (nds_sqlite3.c:211319)
==10856==by 0x4EB5A8: fts5SetupPrefixIter (nds_sqlite3.c:212995)
==10856==by 0x4EAE4A: sqlite3Fts5IndexQuery (nds_sqlite3.c:213324)
==10856==by 0x4EAA61: fts5ExprNearInitAll (nds_sqlite3.c:205310)
==10856==by 0x4EA6F3: fts5ExprNodeFirst (nds_sqlite3.c:205827)
==10856==by 0x4EA5B4: sqlite3Fts5ExprFirst (nds_sqlite3.c:205885)
==10856==by 0x4E9D1A: fts5CursorFirst (nds_sqlite3.c:215420)
==10856==by 0x4E2DD3: fts5FilterMethod (nds_sqlite3.c:215702)
==10856==by 0x4672A7: sqlite3VdbeExec (nds_sqlite3.c:90382)
==10856==by 0x42F876: sqlite3Step (nds_sqlite3.c:81765)
==10856==by 0x42F51C: sqlite3_step (nds_sqlite3.c:81830)
==10856==by 0x4236B8: exec_prepared_stmt (shell.c:10469)
==10856==by 0x4104EB: shell_exec (shell.c:10776)
==10856==by 0x42414E: runOneSqlLine (shell.c:16136)
==10856==by 0x410C5A: process_input (shell.c:16236)
==10856==by 0x40752D: main (shell.c:16995)
==10856==  Uninitialised value was created by a heap allocation
==10856==at 0x4C2FE56: malloc (vg_replace_malloc.c:299)
==10856==by 0x4E0098: sqlite3MemMalloc (nds_sqlite3.c:22886)
==10856==by 0x441ED0: mallocWithAlarm (nds_sqlite3.c:26718)
==10856==by 0x425E0B: sqlite3Malloc (nds_sqlite3.c:26748)
==10856==by 0x425E7F: sqlite3_malloc64 (nds_sqlite3.c:26772)
==10856==by 0x4FE464: sqlite3Fts5HashWrite (nds_sqlite3.c:207636)
==10856==by 0x4FE1F0: sqlite3Fts5IndexWrite (nds_sqlite3.c:213247)
==10856==by 0x4FACB9: fts5StorageInsertCallback (nds_sqlite3.c:217629)
==10856==by 0x505B04: fts5UnicodeTokenize (nds_sqlite3.c:218923)
==10856==by 0x4F623D: sqlite3Fts5Tokenize (nds_sqlite3.c:204268)
==10856==by 0x4FA0DD: sqlite3Fts5StorageIndexInsert (nds_sqlite3.c:217984)
==10856==by 0x4F9E44: fts5StorageInsert (nds_sqlite3.c:215929)
==10856==by 0x4E359D: fts5UpdateMethod (nds_sqlite3.c:216036)
==10856==by 0x4677E6: sqlite3VdbeExec (nds_sqlite3.c:90593)
==10856==by 0x42F876: sqlite3Step (nds_sqlite3.c:81765)
==10856==by 0x42F51C: sqlite3_step (nds_sqlite3.c:81830)
==10856==by 0x4236B8: exec_prepared_stmt (shell.c:10469)
==10856==by 0x4104EB: shell_exec (shell.c:10776)
==10856==by 0x42414E: runOneSqlLine (shell.c:16136)
==10856==by 0x410C5A: 

Re: [sqlite] FTS5 Transaction Leads to NULL Pointer

2019-03-19 Thread Dan Kennedy


On 18/3/62 15:48, Chu wrote:

The code:

```
CREATE VIRTUAL TABLE t1 USING fts5(content);

INSERT INTO t1 VALUES('');

BEGIN ;
DELETE FROM t1 WHERE rowid = 1;
SELECT * FROM t1 WHERE content MATCH '';
INSERT INTO t1 VALUES('');
SELECT * FROM t1 WHERE content MATCH '';
END;
```



Thanks very much for isolating and reporting this problem, and the other 
one. Now fixed here:


  https://sqlite.org/src/info/45c73deb440496e8

Dan.




As you can see, it creates a virtual table with fts5, and run a transaction on 
it, this will leads to a crash because of null pointer. The ASAN report:

```
➜  sqlite-crashes ../sqlite-autoconf-3270200/sqlite3 < 1-null-pointer.sql
AddressSanitizer:DEADLYSIGNAL
=
==20822==ERROR: AddressSanitizer: SEGV on unknown address 0x (pc 
0x55df5393c60a bp 0x0001 sp 0x706021b0 T0)
==20822==The signal is caused by a READ memory access.
==20822==Hint: address points to the zero page.
 #0 0x55df5393c609 in fts5ChunkIterate 
/root/Documents/sqlite-autoconf-3270200/sqlite3.c:210934
 #1 0x55df5393ca5e in fts5SegiterPoslist 
/root/Documents/sqlite-autoconf-3270200/sqlite3.c:210970
 #2 0x55df5393d65d in fts5IterSetOutputs_Full 
/root/Documents/sqlite-autoconf-3270200/sqlite3.c:211177
 #3 0x55df5393f17e in fts5MultiIterNext 
/root/Documents/sqlite-autoconf-3270200/sqlite3.c:210732
 #4 0x55df539444e9 in fts5MultiIterNew 
/root/Documents/sqlite-autoconf-3270200/sqlite3.c:211309
 #5 0x55df5394702f in sqlite3Fts5IndexQuery 
/root/Documents/sqlite-autoconf-3270200/sqlite3.c:213266
 #6 0x55df5398a566 in fts5ExprNearInitAll 
/root/Documents/sqlite-autoconf-3270200/sqlite3.c:205261
 #7 0x55df5398a566 in fts5ExprNodeFirst 
/root/Documents/sqlite-autoconf-3270200/sqlite3.c:205778
 #8 0x55df5398ad3d in sqlite3Fts5ExprFirst 
/root/Documents/sqlite-autoconf-3270200/sqlite3.c:205836
 #9 0x55df5398af0d in fts5CursorFirst 
/root/Documents/sqlite-autoconf-3270200/sqlite3.c:215371
 #10 0x55df5398cc9d in fts5FilterMethod 
/root/Documents/sqlite-autoconf-3270200/sqlite3.c:215653
 #11 0x55df538a973a in sqlite3VdbeExec 
/root/Documents/sqlite-autoconf-3270200/sqlite3.c:90333
 #12 0x55df538c5439 in sqlite3Step 
/root/Documents/sqlite-autoconf-3270200/sqlite3.c:81716
 #13 0x55df538c5439 in sqlite3_step 
/root/Documents/sqlite-autoconf-3270200/sqlite3.c:81781
 #14 0x55df536f9662 in exec_prepared_stmt 
/root/Documents/sqlite-autoconf-3270200/shell.c:10445
 #15 0x55df536f9662 in shell_exec 
/root/Documents/sqlite-autoconf-3270200/shell.c:10752
 #16 0x55df536fbdf3 in runOneSqlLine 
/root/Documents/sqlite-autoconf-3270200/shell.c:16106
 #17 0x55df5370b466 in process_input 
/root/Documents/sqlite-autoconf-3270200/shell.c:16206
 #18 0x55df536d6c98 in main 
/root/Documents/sqlite-autoconf-3270200/shell.c:16967
 #19 0x7f5c4f52809a in __libc_start_main ../csu/libc-start.c:308
 #20 0x55df536d8599 in _start 
(/root/Documents/sqlite-autoconf-3270200/sqlite3+0x46599)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV 
/root/Documents/sqlite-autoconf-3270200/sqlite3.c:210934 in fts5ChunkIterate
==20822==ABORTING
```

View detail In gdb:

```
(gdb) r < 1-null-pointer.sql
The program being debugged has been started already.
Start it from the beginning? (y or n) Y
Starting program: /root/Documents/sqlite-autoconf-3270200/sqlite3 < 
1-null-pointer.sql
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".

Breakpoint 1, 0x557fe60a in fts5ChunkIterate (p=p@entry=0x60d00ad8, 
pSeg=pSeg@entry=0x61300b28, pCtx=0x7fffac00,
 xChunk=xChunk@entry=0x55622dc0 ) at 
sqlite3.c:210934
210934pData = fts5LeafRead(p, FTS5_SEGMENT_ROWID(pSeg->pSeg->iSegid, 
pgno));
(gdb) bt
#0  0x557fe60a in fts5ChunkIterate (p=p@entry=0x60d00ad8, 
pSeg=pSeg@entry=0x61300b28, pCtx=0x7fffac00,
 xChunk=xChunk@entry=0x55622dc0 ) at 
sqlite3.c:210934
#1  0x557fea5f in fts5SegiterPoslist (p=0x60d00ad8, 
pSeg=0x61300b28, pColset=pColset@entry=0x602014b8, 
pBuf=pBuf@entry=0x61300ae8)
 at sqlite3.c:210970
#2  0x557ff65e in fts5IterSetOutputs_Full (pIter=0x61300ac8, 
pSeg=) at sqlite3.c:211177
#3  0x5580117f in fts5MultiIterNext (p=p@entry=0x60d00ad8, 
pIter=pIter@entry=0x61300ac8, bFrom=bFrom@entry=0, iFrom=iFrom@entry=0)
 at sqlite3.c:210732
#4  0x558064ea in fts5MultiIterNew (p=p@entry=0x60d00ad8, 
pStruct=pStruct@entry=0x60402458, flags=flags@entry=16,
 pColset=pColset@entry=0x602014b8, pTerm=, nTerm=nTerm@entry=5, 
iLevel=, nSegment=, ppOut=)
 at sqlite3.c:211309
#5  0x55809030 in sqlite3Fts5IndexQuery (p=0x60d00ad8, 
pToken=pToken@entry=0x60201498 "", nToken=4, 

Re: [sqlite] picking random subset of rows

2019-03-18 Thread Dan Kennedy


On 18/3/62 17:36, Kevin Martin wrote:

Hi,

I am trying to use a correlated subquery with an 'order by random() limit 2' to 
pick upto two random rows for each value in the outer query. I am not sure if I 
am doing this correctly, but the number of rows I am getting seems to vary 
randomly which doesn't make sense to me. If i replace the order by random() 
with order by product I always get the expected number of rows. I have tried to 
create a simplified version of the code below to replicate the issue.
  
with

   dareas as (select distinct
 area
   from
 test_productarea)
   select
 da.area,
 pa.product
   from
 dareas as da
 left join test_productarea as pa
   --if I don't order by random, but instead by product, I always get the 
correct number of rows
   on pa.product in (select product from test_productarea where 
da.area=area order by random() limit 2)


In SQLite, a correlated sub-query on the RHS of an IN(...) operator may 
be rerun every time the IN(...) test is required. And if that sub-query 
contains "random()" it might return a different result every time.


Your words suggest that you are hoping it will be run once for each 
different value of "da.area", with different results each time. But it 
will not.


Dan.





-- In the real query, this order by is required to see a different number of 
rows to the number in the
-- limit. However, it seems it is not required in the small test dataset.
--  order by
--da.area,
--pa.product
;
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Segmentation fault running a query

2019-03-06 Thread Dan Kennedy


On 6/3/62 16:37, Peter Hardman wrote:

So, I forgot I had emails from the list turned off

Integrity check shows no results.

The query fails from the CLI

The schema (of a very much cut down database) is attached.



Can you post the results of running the ".fullschema" command in the 
shell tool?


Thanks,

Dan.







And if it does still crash, please provide the database schema to help 
with debugging.



-Original Message-
From: sqlite-users [mailto:sqlite-users-bounces at 
mailinglists.sqlite.org] On Behalf Of Simon Slavin

Sent: Tuesday, March 05, 2019 3:14 PM
To: SQLite mailing list
Subject: Re: [sqlite] Segmentation fault running a query

On 5 Mar 2019, at 8:06pm, Peter Hardman somborneshetlands.co.uk> wrote:


> If i run the following query from Python 3.7 using the sqlite3 DBAPI 
adapter or from sqlitebrowser I get a segmentation fault.


Please find the SQLite command-line tool on your computer.  If you 
don't already have one you can download one for your platform in the 
section "Precompiled Binaries" on




Using that program, please run

PRAGMA integrity_check;

If that reports no problems (should return no results) then please run 
your SELECT query in that program just to verify that it gets the same 
result your own code does.


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Vtab scalar function overloading in aggregate queries

2019-02-27 Thread Dan Kennedy


On 27/2/62 05:47, Jake Thaw wrote:

This may not strictly be a bug, but currently (3.27.2) a vtab cannot
overload scalar functions in aggregate queries.

Adding a check for TK_AGG_COLUMN in sqlite3VtabOverloadFunction makes
my use case function as expected.


I think it has always been this way.

The tricky case is when there is a GROUP BY with an expression list that 
the virtual table cannot optimize as an ORDER BY. In that case the 
virtual table cursor will be closed before the overloaded function is 
invoked, which breaks the implementation of most overloaded functions 
(including the built-in ones).


Dan.



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Disk I/O errors

2019-02-23 Thread Dan Kennedy


On 24/2/62 00:55, Tim Streater wrote:

(sorry for the duplicate - vibrating finger).

I have a hosted web site using the SQLite functions from PHP. The page where 
PHP is used was failing, and on investigation this is because an SQLite 
function called from within PHP is now returning:

Code: 10 (SQLITE_IOERR)
Msg:  disk I/O error

I will be working with my hosting provider but, is there a way to get more 
specific information about this?


sqlite3_extended_errcode() will tell you a little more:

  https://www.sqlite.org/c3ref/errcode.html
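
At the C level (the original question is from PHP, where the wrapper would need to surface this), a minimal sketch of reading the extended result code after a failed call:

```
#include <stdio.h>
#include "sqlite3.h"

/* Report both the primary and extended result codes for the most recent
** failed call on this connection. */
void report_error(sqlite3 *db){
  int rc  = sqlite3_errcode(db);           /* e.g. SQLITE_IOERR (10)        */
  int xrc = sqlite3_extended_errcode(db);  /* e.g. SQLITE_IOERR_FSYNC, etc. */
  fprintf(stderr, "error %d (extended %d): %s\n", rc, xrc, sqlite3_errmsg(db));
}
```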




SQLite version:  3.7.7.1
PHP version: 5.3.28

Thanks.



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Cannot Modify Table if Part of a View

2019-02-22 Thread Dan Kennedy


On 22/2/62 17:17, Chris Locke wrote:

This issue was found via DB Browser for SQLite, but relates to SQLite, not
DB Browser for SQLite, so please bear with me

If a table is part of a view, then SQLite complains that "Error: error in
view view1: no such table: main.table11"


ALTER TABLE ... RENAME TO ... commands fail in new versions of SQLite if 
the schema contains views or triggers that refer to tables or columns 
that do not exist. That's what is happening here. To restore the legacy 
behaviour, run:


  PRAGMA legacy_alter_table = 1;

  https://sqlite.org/pragma.html#pragma_legacy_alter_table

Dan.





The link to the full issue is here:
https://github.com/sqlitebrowser/sqlitebrowser/issues/1686

Martin has followed the instructions here (
https://www.sqlite.org/lang_altertable.html#otheralter) which raises the
above error.
(link to comment:
https://github.com/sqlitebrowser/sqlitebrowser/issues/1686#issuecomment-464136036
)
We are using SQLite 3.27.1 and 3.26 (in two different applications) and (as
per the above GitHub thread) 3.27.1 mentions fixing an ALTER TABLE bug
(item 11) with views.  However, there is no WITH clause and no redundant
UNIQUE clauses involved in our case.

Its worked fine in earlier versions of SQLite, and note a comment about the
improved ALTER TABLE functionality:
"In version 3.25.0, the ALTER TABLE statement has been enhanced to run a
test-parse of the whole schema after it alters the schema, to make sure
that the edits it performed on the schema didn't break anything."

it sounds like this extra sanity check is what is causing the problem
described here which also explains why it worked for older versions of
SQLite.

In this issue (https://www.sqlite.org/src/tktview?name=31c6e64ff9) drh
describes a very similar issue to the one here. He is using a trigger
instead of a view, but besides that it is pretty much the same problem.
Again some more explanations from the ticket:

"The DROP TABLE removes the table t1, which leaves a dangling reference to
t1 in the trigger. Then during the ALTER TABLE, the trigger is reparsed,
but the reparse fails due to the dangling reference."

Again, it sounds exactly like the issue we are having here. The second
ticket is still open, so no fix there yet. Last modification was 2018-10-03.

Is there any confirmation or further details on this issue at all?


Thanks,
Chris
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Typo in FTS5 documentation

2019-02-18 Thread Dan Kennedy


On 17/2/62 02:21, J. King wrote:

 states:


the following three queries all specify the same phrase:

Four queries are then listed. Presumably it should read:


Thanks for pointing this out. Will be fixed next time the website is 
built and uploaded.


Dan.


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Was there an announcement of 3.27?

2019-02-11 Thread Dan Kennedy


On 9/2/62 03:31, Dominique Pellé wrote:

David Raymond wrote:


SQLite version 3.27.1 is now available on the SQLite website:

   https://sqlite.org/
   https://sqlite.org/download.html
   https://sqlite.org/releaselog/3_27_1.html

Release notes https://sqlite.org/releaselog/3_27_1.html say:

=== BEGIN QUOTE ===
Added the remove_diacritics=2 option to FTS3 and FTS5.
=== END QUOTE ===

I wonder that this does.  FTS3 or FTS5 doc were not updated,
since they only document remove_diacritics=0 and 1.


Thanks for reporting this. And the typos. I just added the following to 
the docs:


remove_diacritics:

  This option should be set to "0", "1" or "2". The default value is 
"1". If it is set to "1" or "2", then diacritics are removed from Latin 
script characters as described above. However, if it is set to "1", then 
diacritics are not removed in the fairly uncommon case where a single 
unicode codepoint is used to represent a character with more that one 
diacritic. For example, diacritics are not removed from codepoint 0x1ED9 
("LATIN SMALL LETTER O WITH CIRCUMFLEX AND DOT BELOW"). This is 
technically a bug, but cannot be fixed without creating backwards 
compatibility problems. If this option is set to "2", then diacritics 
are correctly removed from all Latin characters.



Dan.
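
A small sketch of selecting the new option when creating an FTS5 table (table and column names are placeholders, not from the thread):

```
#include "sqlite3.h"

/* Create an FTS5 table whose unicode61 tokenizer removes diacritics from
** all Latin characters, per the remove_diacritics=2 behaviour described
** above. */
int create_fts_table(sqlite3 *db){
  return sqlite3_exec(db,
      "CREATE VIRTUAL TABLE docs USING fts5("
      "  body, tokenize = 'unicode61 remove_diacritics 2');",
      0, 0, 0);
}
```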






I also use the opportunity to report a few typos in
https://sqlite.org/fts5.html:

- the second character replaced with an *asterix* (-> asterisk)
- fts5 extension function made as part *of of* (repeated word "of")
-  *an the* (-> the) auxiliary data is set to NULL

Regards
Dominique
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLITE_DBCONFIG_DEFENSIVE documentation looks confusing

2019-01-29 Thread Dan Kennedy


On 29/1/62 23:18, Chris Brody wrote:

I think my sample code should have read as follows:

To activate the "defensive" flag for a database connection:
sqlite3_db_config(db, SQLITE_DBCONFIG_DEFENSIVE, 1, NULL);

(I got the wrong prefix before, and I discovered that it crashes if I
do not add the NULL argument.)

I hope I got this right, really wish it were better documented.


Passing NULL is fine. It's the same interface as 
SQLITE_DBCONFIG_ENABLE_FKEY and most of the others. Unless it is NULL, 
the int indicated by the (int*) parameter is set to 0 or 1 to indicate 
whether the connection is in DEFENSIVE mode or not following the call.


  int bDefensive;
  sqlite3_db_config(db, SQLITE_DBCONFIG_DEFENSIVE, 1, &bDefensive);
  if( bDefensive ){
    printf("connection in defensive mode!\n");
  }else{
    printf("connection not in defensive mode :(\n");
  }

Dan.
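
And a sketch of the pattern Chris is after, enabling the flag on every connection right after it is opened. Error handling is trimmed, and whether this belongs in a wrapper or at each open site is up to the application:

```
#include "sqlite3.h"

int open_defensive(const char *zPath, sqlite3 **ppDb){
  int rc = sqlite3_open(zPath, ppDb);
  if( rc==SQLITE_OK ){
    /* Pass a NULL (int*) because the current value is not needed back. */
    rc = sqlite3_db_config(*ppDb, SQLITE_DBCONFIG_DEFENSIVE, 1, (int*)0);
  }
  return rc;
}
```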







On Tue, Jan 29, 2019 at 10:17 AM Chris Brody  wrote:

I am very sorry to say that I have found the usage of the
SQLITE_DBCONFIG_DEFENSIVE option to be somewhat confusing.

 From my first reading of https://www.sqlite.org/releaselog/3_26_0.html
I thought SQLITE_DBCONFIG_DEFENSIVE was a compile-time option. (I was
proven wrong pretty quickly.)

Then I found the following in
https://www.sqlite.org/c3ref/c_dbconfig_defensive.html to be
confusing:

Looking at this entry:

#define SQLITE_DBCONFIG_DEFENSIVE 1010 /* int int* */

seems to indicate to me that I should pass 2 integer values after
SQLITE_DBCONFIG_DEFENSIVE in the sqlite_db_config() call (which I
think is wrong). I found it especially confusing since the example
code for SQLITE_DBCONFIG_RESET_DATABASE shows 2 integers after
SQLITE_DBCONFIG_RESET_DATABASE.

I think there should be example code for SQLITE_DBCONFIG_DEFENSIVE
that reads something like this:

To activate the "defensive" flag for a database connection:
sqlite_db_config(db, SQLITE_DBCONFIG_DEFENSIVE, 1);

I think it would be ideal if there would be a compile-time flag that
would tell SQLite to enable the "defensive" flag by default whenever
the application opens a database.

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Session extension, "INSERT OR REPLACE" and "WITHOUT ROWID"

2019-01-07 Thread Dan Kennedy

On 01/06/2019 08:08 PM, Daniel Kraft wrote:

Hi!

It seems to me that the session extension is broken in a situation that
involves a "WITHOUT ROWID" table and "INSERT OR REPLACE" statements (but
not if only one of those is used).  Note that I'm using SQLite version
3.26.0 (> 3.17.0), so the session extension should work also for my
WITHOUT ROWID table.

My test code is attached (it uses the Google logging library in addition
to SQLite3 with session extension).  This code creates a WITHOUT ROWID
table with an INTEGER PRIMARY KEY and puts in some initial data.  Then
it uses an "INSERT OR REPLACE" statement to update the initial row and
insert a new one, recording this change into a changeset.  Finally, it
inverts and applies the changeset, so that we should get back to the
initial data.

However, when I build and run the code as it is, then the table will be
*empty* in the end (instead of containing the initial data pair (100,
1)).  It works as expected if I make one of the following two changes:

a) Commenting out the "WITHOUT ROWID" line, or
b) using "UPDATE" and "INSERT" statements (as in the comment in the
code) instead of one "INSERT OR REPLACE" statement.

Am I doing something wrong here, or is this a bug?



I think it was a bug. Thanks for reporting it. Now fixed here:

  https://www.sqlite.org/src/info/6281ef974c0ac7a7

Dan.
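
For reference, a hedged sketch of the invert-and-apply step being tested here. The conflict policy and error handling are illustrative assumptions, and the build must have the session extension enabled:

```
#include "sqlite3.h"

static int xAbortOnConflict(void *pCtx, int eConflict, sqlite3_changeset_iter *pIter){
  (void)pCtx; (void)eConflict; (void)pIter;
  return SQLITE_CHANGESET_ABORT;   /* any conflict rolls the apply back */
}

/* Invert a previously recorded changeset and apply the inverse, undoing
** the original changes. */
int undo_changeset(sqlite3 *db, int nChangeset, void *pChangeset){
  int nInv = 0;
  void *pInv = 0;
  int rc = sqlite3changeset_invert(nChangeset, pChangeset, &nInv, &pInv);
  if( rc==SQLITE_OK ){
    rc = sqlite3changeset_apply(db, nInv, pInv, 0, xAbortOnConflict, 0);
  }
  sqlite3_free(pInv);
  return rc;
}
```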




Is there something I can do to make this work as expected?

Thanks!  Yours,
Daniel



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Bug in 3.25.2 (RECURSIVE CTE + window function)

2018-12-31 Thread Dan Kennedy

On 12/30/2018 02:41 PM, Nicolas Roy-Renaud wrote:

I've been dealing with occasional segfaults on one of my applications
when running a similar query, except even on SQLite 3.26, the safeguard
wouldn't trigger.

Running the SQL query specified in the bug report description from the
tracker (https://www.sqlite.org/src/tktview?name=e8275b415a)
now yields either "Error: cannot use window functions in recursive
queries" or "Error: recursive reference in a subquery: q" if I try to
nest it.

Below is the query in question, modified to work as a self-contained
test case, but I have not managed to figure out how to reduce it
further. The query comes from a Markov chain project.
With the data contained in the CTE for chain, this query should print
"hello" and "world" 10 times, but crashes after 3.
Alternatively, it could also not run at all and simply return an error,
as seems to be the expected behavior for thee queries since SQLite 3.25.




Thanks for reporting this. No fix yet, but a further reduction here:

  https://sqlite.org/src/tktview/d0866b26f83e9c55e30d

I think this probably should work (not return the error message). 
Although technically there are window functions within a recursive 
sub-query, they don't operate on the recursively generated dataset 
(they're in a subquery) so I don't think the restriction applies.


Dan.






WITH RECURSIVE
  chain(link1, link2, n) AS (
    VALUES ('hello', 'world', 1), ('world', 'hello', 1)
  ),
  markov(last_word, current_word, random_const) AS (
    VALUES('hello', 'world', ABS(random()) / CAST(0x7FFF AS real))
    UNION ALL
    SELECT
      markov.current_word,
      (
        SELECT link2 FROM (
          SELECT link1, link2, n,
                 SUM(n) OVER (PARTITION BY link1 ROWS UNBOUNDED PRECEDING) AS rank,
                 SUM(n) OVER (PARTITION BY link1) * markov.random_const AS roll
          FROM chain WHERE link1 = markov.current_word
        ) t WHERE roll <= rank LIMIT 1
      ) AS next_word,
      ABS(random()) / CAST(0x7FFF AS real) AS random_const
    FROM markov WHERE current_word <> ' '
  )
SELECT last_word FROM markov LIMIT 10;


I've had no issue running that same query on PostgreSQL, and I have in
fact had it run to completion multiple times on SQLite 3.26 as well,
with very large datasets.

--
Nicolas Roy-Renaud


Richard Hipp Thu, 27 Sep 2018 06:13:36 -0700

Thanks for the report and test case.  Now fixed on trunk and on
branch-3.25.
On 9/25/18, Щекин Ярослав  wrote:
> Hello.
>
> Here's the self-contained test case:
>
> WITH t(id, parent) AS (
> SELECT CAST(1 AS INT), CAST(NULL AS INT)
> UNION ALL
> SELECT 2, NULL
> UNION ALL
> SELECT 3, 1
> UNION ALL
> SELECT 4, 1
> UNION ALL
> SELECT 5, 2
> UNION ALL
> SELECT 6, 2
> ), q AS (
> SELECT t.*, ROW_NUMBER() OVER (ORDER BY t.id) AS rn
>   FROM t
>  WHERE parent IS NULL
>  UNION ALL
> SELECT t.*, ROW_NUMBER() OVER (ORDER BY t.id) AS rn
>   FROM q
>   JOIN t
> ON t.parent = q.id
> )
> SELECT *
>   FROM q;
>
> Results in segmentation fault.
> (I also wanted to thank [Arfrever] (in #sqlite IRC) for testing /
> confirmation.)
>
> --
> WBR, Yaroslav Schekin.
>
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


--
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users




Re: [sqlite] Window functions in sqlite 3.26.0

2018-12-23 Thread Dan Kennedy

On 12/24/2018 01:43 AM, Balaji Ramanathan wrote:

Hi,

Are window functions enabled by default in the sqlite command line
shell program that is available for download on the sqlite website?  I get
the error message "no such function: rownumber()" when I try to use that
window function.



They are in 3.26.0. Try "row_number", with an underscore. Or, if that's 
not the problem, please post the failing SQL statement.
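
For example, something along these lines runs in a stock 3.26.0 shell (the
table and data here are just an illustration):

  CREATE TABLE t(a);
  INSERT INTO t VALUES('x'), ('y'), ('z');
  SELECT a, row_number() OVER (ORDER BY a) AS rn FROM t;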


Thanks,
Dan.



The documentation only mentions that window functions

were added in version 3.25.0, and does not give any indication as to
whether they are enabled or disabled in the command line tool.  Is there
perhaps a pragma setting I have to turn on for window functions to work?
Thank you.

Balaji Ramanathan
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users





Re: [sqlite] "make test" fails on Mageia Linux v7 x86-64

2018-12-19 Thread Dan Kennedy

On 12/19/2018 12:55 AM, Shlomi Fish wrote:

Hi,

With the attached bash script on Mageia Linux v7 x86-64 I consistently get the
test failures here:

https://www.shlomifish.org/Files/files/text/sqlite-mga7-rpm-build.txt.xz

this is with sqlite 3.26.0.

This affects our rpm %check phase.

Can you help?


What is your Tcl version (run the "testfixture" binary and type "set 
tcl_patchLevel" to find out)?


Upgrading Tcl will most likely fix this.

Dan.


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Expression-index bug with OR query

2018-12-15 Thread Dan Kennedy

On 12/14/2018 07:22 AM, Jens Alfke wrote:

We’ve found a bug where creating indexes on expressions causes a query to give 
incorrect results. Our indexes use some complex custom functions, but the bug 
is easy to reproduce just using the ‘abs’ function:
SELECT * FROM docs WHERE abs(a)=2 OR abs(b)=9
After indexing ‘abs(a)’ and ‘abs(b)’, this query no longer returns correct 
results.
This seems related to the OR — if we query only abs(a)=2, or only abs(b)=9, it 
works correctly.


Thanks for reporting this. It was a bug:

  https://www.sqlite.org/src/info/d96eba87698a428c1ddd0790ea04
  https://www.sqlite.org/src/info/7e4ed8b5c2047b69

Dan.





The transcript below comes from the SQLite 3.24 shell.

### Populate database:
sqlite> create table docs (a int, b int);
sqlite> insert into docs (a, b) values (2, 4);
sqlite> insert into docs (a, b) values (3, 9);

### Simple OR query that correctly matches both docs:
sqlite> select * from docs where abs(a)=2 or abs(b)=9;
a = 2
b = 4

a = 3
b = 9
sqlite> explain query plan select * from docs where abs(a)=2 or abs(b)=9;
QUERY PLAN
`--SCAN TABLE docs

### Create indexes on the two expressions in the above query:
sqlite> create index ia on docs (abs(a));
sqlite> create index ib on docs (abs(b));

### Now repeat the query — it incorrectly finds only one doc, and the query 
plan looks bogus:
sqlite> select * from docs where abs(a)=2 or abs(b)=9;
a = 3
b = 9
sqlite> explain query plan select * from docs where abs(a)=2 or abs(b)=9;
QUERY PLAN
`--SEARCH TABLE docs USING INDEX ib (=?)

# And here’s the full explanation with bytecodes:
sqlite> explain select * from docs where abs(a)=2 or abs(b)=9;
addr  opcode p1p2p3p4 p5  comment
  -        -  --  -
0 Init   0 23000  Start at 23
1 OpenRead   0 2 0 2  00  root=2 iDb=0; docs
2 OpenRead   1 3 0 k(2,,) 02  root=3 iDb=0; ib
3 Once   0 11000
4 OpenEphemeral  3 1 0 k(1,)  00  nColumn=1
5 Integer2 2 000  r[2]=2
6 MakeRecord 2 1 3 A  00  r[3]=mkrec(r[2])
7 IdxInsert  3 3 2 1  00  key=r[3]
8 Integer9 2 000  r[2]=9
9 MakeRecord 2 1 3 A  00  r[3]=mkrec(r[2])
10IdxInsert  3 3 2 1  00  key=r[3]
11Rewind 3 22000
12  Column 3 0 100  r[1]=
13  IsNull 1 21000  if r[1]==NULL goto 21
14  SeekGE 1 211 1  00  key=r[1]
15IdxGT  1 211 1  00  key=r[1]
16DeferredSeek   1 0 000  Move 0 to 1.rowid if needed
17Column 0 0 400  r[4]=docs.a
18Column 0 1 500  r[5]=docs.b
19ResultRow  4 2 000  output=r[4..5]
20  Next   1 15000
21NextIfOpen 3 12000
22Halt   0 0 000
23Transaction0 0 7 0  01  usesStmtJournal=0
24Goto   0 1 000

—Jens
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users





Re: [sqlite] Minor glitch in FTS5 docs

2018-12-14 Thread Dan Kennedy

On 12/15/2018 08:45 AM, Jens Alfke wrote:

At https://sqlite.org/fts5.html#synonym_support
 :


There are several ways to approach this in FTS5: 1. By mapping all
synonyms to a single token. […] 2. By adding multiple synonyms for
a single term to the FTS index. […] 3. By adding multiple synonyms
for a single term to the FTS index. […]


Note that items 2 and 3 start with the same description. I had to
compare the following text in the paragraphs to try to figure out
what the difference is. It looks like a copy/paste error.

I believe that the first sentence of item 2 should be changed to
something like “By adding multiple synonyms for a single term while
parsing the FTS query”.


Thanks. Now fixed here:

  https://www.sqlite.org/src/info/4002790d9418289f

Dan.




—Jens



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Affinity of expression indexes

2018-12-13 Thread Dan Kennedy

On 12/13/2018 07:41 AM, Jens Alfke wrote:

Consider CREATE INDEX foo_idx ON tbl (myfunction(a)); where
‘myfunction’ is a deterministic C function I’ve registered with the
SQLite connection (and ‘a’ is a column of ‘tbl’ of course.)

SQLite has no idea what data type(s) ‘myfunction’ returns, and it
might well return different data types for different inputs. So
unlike a column index, there’s no natural type affinity.

Question: In such an index, does SQLite assume no affinity and just
compare the different types using the rules in section 4.1 of
“Datatypes In SQLite Version 3”? (I.e. numbers are compared as
numbers and sort before any strings, etc.)


Yes. Exactly.


In that case, I’m confused how this interacts with ‘applying
affinity’ in a query as described in section 4.2. For example, let’s
say that ‘myfunction’ always returns a number. In that case,
‘foo_idx’ will be sorted numerically. But if I do a query like SELECT
* FROM tbl WHERE myfunction(a) > ‘dog’; then the rules say that text
affinity will be applied to the function call since the other side of
the comparison is a string. In that case, the numbers it returns will
be interpreted as strings. That leads to an entirely different
sorting order, so the index can’t be used. But how does SQLite know
that?


Each column of each index has an associated affinity, as does each
comparison expression in an SQL statement. An index can only be used with a
comparison if the affinities match.


In this case, the indexed expression has "no affinity", so values are 
stored in the index as is. The expression 'dog' also has no affinity (I 
think you're mistaken as to the rules - but please correct me if that's 
not the case), so the comparison also has no affinity (equivalent to 
BLOB affinity) and no type coercion is applied to the operands. Hence 
the index can be used for queries like:


   SELECT * FROM tbl WHERE myfunction(a) > ‘dog’;

However, if 'dog' were replaced by an expression that does have an affinity:

   SELECT * FROM tbl WHERE myfunction(a) > CAST(‘dog’ AS TEXT);

then the index could not be used, as the affinity of the comparison 
would be TEXT, not "no affinity" or BLOB.
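
A quick way to see this is with EXPLAIN QUERY PLAN, using the built-in
abs() as a stand-in for the custom function (a sketch only - the exact plan
text differs between versions):

  CREATE TABLE tbl(a);
  CREATE INDEX tbl_idx ON tbl(abs(a));

  -- comparison has no affinity, so the expression index is usable:
  EXPLAIN QUERY PLAN SELECT * FROM tbl WHERE abs(a) > 'dog';

  -- CAST gives the comparison TEXT affinity, so a full scan is used instead:
  EXPLAIN QUERY PLAN SELECT * FROM tbl WHERE abs(a) > CAST('dog' AS TEXT);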


Dan.








—Jens



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How can custom tokenizer tell it's parsing a search string?

2018-12-12 Thread Dan Kennedy

On 12/12/2018 03:37 AM, Jens Alfke wrote:

Is there any way for a custom FTS4 tokenizer to know when it’s
tokenizing a search string (the argument of a MATCH expression), as
opposed to text to be indexed?

Here’s my problem: I’ve implemented a custom tokenizer that skips
“stop words” (noise words, like “the” and “a” in English.) It works
well. But I’ve just gotten a bug report that some search strings with
wild-cards don’t work. For example, “mo* AND the*” would be expected
to match text containing the words “Moog” and “theremin”, but instead
the query fails with the SQLite error "malformed MATCH expression:
[mo* AND the*]”.

The reason for the error is that when the query runs, FTS4 uses my
tokenizer to break the search string into words. My tokenizer skips
“the” because it’s a stop word, so the sequence of tokens FTS4 gets
is “mo”, “*”, “AND”, “*” … which is invalid since there’s no prefix
before the second “*”.

I can fix this by preserving stop-words when the tokenizer is being
used to scan the search string. But I can’t find any way for the
tokenizer to tell the difference! It’s the same tokenizer instance
used for indexing, and the SQLite function getNextToken opens it in
the normal way and calls its xNext function.



I don't think there is any way to tell with FTS3/4. FTS5 passes a 
parameter to the tokenizer to indicate this (the mask of 
FTS5_TOKENIZER_* flags), but FTS3/4 does not. But you wouldn't have this 
problem with FTS5 anyhow, because it handles the AND or "*" syntax 
before passing whatever is left to the tokenizer.


  https://sqlite.org/fts5.html#custom_tokenizers

Leaving stop words in while parsing queries won't quite work anyway. If 
your tokenizer returns "the" when parsing a query, FTS3/4 will search 
for "the" in the index. And it won't be there if the tokenizer used for 
parsing documents stripped it out.


I think your best options might be to switch to FTS5 or to write a 
tokenizer smart enough to remove the AND or other syntax tokens when 
required.
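
For what it's worth, the FTS5 behaviour is easy to see with the default
tokenizer - the AND and the trailing "*" are consumed by the query parser,
so only the bare terms ever reach the tokenizer (a minimal sketch, not your
schema):

  CREATE VIRTUAL TABLE ft USING fts5(body);
  INSERT INTO ft VALUES('the Moog and the theremin');
  SELECT * FROM ft WHERE ft MATCH 'mo* AND the*';   -- matches the row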


Dan.





The best workaround I can think of is to make the tokenizer preserve
a stop-word when it’s followed by a “*” … but there are contexts
where this can happen in regular text being indexed, when the “*” is
a footnote marker or the end of a Markdown emphasis sequence.

—Jens



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Failure to rename table in 3.25 and 3.26

2018-12-05 Thread Dan Kennedy


On 12/03/2018 10:37 AM, Philip Warner wrote:

Tables with complex triggers (possibly limited to "Insert...With",
though that is not clear), fail with "no such table".


Thanks for reporting this. The bug was that table and column names 
within WITH clauses in the schema were not being updated correctly. Now 
fixed here:


  https://www.sqlite.org/src/info/f44bc7a8b3fac82a

Dan.





The following produces the error in 3.26; a much simpler trigger does
not produce the error.

Create Table LOG_ENTRY(
    LOG_ENTRY_ID int primary key,
    TIME_START Long,
    TIME_FINISH Long
);

Create Table SEGMENTS(START Long, FINISH Long);

Create View SEGMENTS_REBUILD_PV(START, FINISH) as Select Null, Null;

Drop Trigger if Exists SEGMENTS_REBUILD_PV_TG;
Create Trigger SEGMENTS_REBUILD_PV_TG
Instead of Insert on SEGMENTS_REBUILD_PV
-- Recreate the SEGMENTS entries in the specified range
Begin
    Delete from SEGMENTS Where
        START <= New.FINISH
        And FINISH >= New.START;

    Insert or Replace into SEGMENTS(START, FINISH)
    With GAP(START, FINISH) as
        (Select
            -- Nearest break before New.START (or New.START,
            -- if nothing before).
            Max( Coalesce((Select Max(TIME_FINISH)
                    From LOG_ENTRY E
                    Where E.TIME_FINISH < New.START), New.START),
                 Coalesce((Select Max(TIME_START)
                    From LOG_ENTRY E
                    Where E.TIME_START < New.START), New.START)
            ),
            -- Nearest break after New.FINISH (or New.FINISH,
            -- if nothing after).
            Min( Coalesce((Select Min(TIME_START)
                    From LOG_ENTRY E
                    Where E.TIME_START > New.FINISH), New.FINISH),
                 Coalesce((Select Min(TIME_FINISH)
                    From LOG_ENTRY E
                    Where E.TIME_START > New.FINISH), New.FINISH))
        ),
    LOGS as
        (Select * from LOG_ENTRY E, GAP
            Where E.TIME_START <= GAP.FINISH
            And E.TIME_FINISH >= GAP.START
        )
    Select Distinct B.START, B.FINISH From
        GAP,
        (Select
            BREAK as START,
            Lead(BREAK) Over (Order by BREAK) as FINISH
        From
            (
            Select Distinct TIME_START as BREAK from LOGS
            UNION
            Select Distinct TIME_FINISH as BREAK from LOGS
            )
        ) B
        Where B.FINISH is Not NULL
        and B.START < GAP.FINISH
        and B.FINISH >= GAP.START
    ;
End;

Alter table LOG_ENTRY Rename To ZZZ;

Error: error in trigger SEGMENTS_REBUILD_PV_TG after rename: no such
table: main.LOG_ENTRY


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users




Re: [sqlite] Bug?: unexpected behaviour - alter parent table in a deferred foreign key relationship

2018-11-29 Thread Dan Kennedy

On 11/29/2018 07:10 PM, szmate1618 wrote:

I'm afraid I still didn't make any progress on this.

It looks to me that PRAGMA foreign_key_check; and the deferred foreign key
mechanism sometimes contradict each other,
consequently at least one of these features must have a bug, but I don't
know for sure which one,
so I can't risk using any of them in production until the situation is
resolved.

Do you have any suggestion how I should go about it? Can I escalate this to
someone?


If there is a bug, please do. I haven't actually understood the 
explanation of it yet though.


Or what is wrong with:

  PRAGMA foreign_keys = 0;
  BEGIN;

  if( [PRAGMA foreign_key_check]=="ok" ) COMMIT else ROLLBACK;
  PRAGMA foreign_keys = 1;
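
Spelled out as shell input, that idea is roughly the following (only a
sketch - it is the application that inspects the foreign_key_check output
and decides between COMMIT and ROLLBACK):

  PRAGMA foreign_keys = 0;
  BEGIN;
    -- schema changes (drop / recreate / rename tables) go here
  PRAGMA foreign_key_check;   -- no rows returned means the constraints still hold
  COMMIT;                     -- or ROLLBACK if foreign_key_check reported rows
  PRAGMA foreign_keys = 1;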


Dan.





Thanks in advance,
Máté

szmate1618  ezt írta (időpont: 2018. nov. 23., P,
17:21):


"PRAGMA foreign_keys = ?" is a property of the connection only, not the
database file. So what advantage would there be in including the PRAGMA
statements in the body of a transaction?


Not much, if I do everything right, I guess. But what if I break the
foreign key integrity?
If it's inside a transaction, I can rollback easily, given that I realize
I broke it.
But if I don't, because foreign key checks are turned off, and I commit
everything before noticing that
something's wrong, that's a lot harder to fix.

I can either take extra care not to break anything, or just do a backup of
the database file before
'risky' transactions, but it would be much easier if I could just use
deferred foreign keys that don't
let me commit if the database is in an inconsistent state.

Máté

Dan Kennedy  ezt írta (időpont: 2018. nov. 23., P,
16:30):


On 11/23/2018 09:54 PM, szmate1618 wrote:

Dear list members,

I have the following problem, with which I'd like to request your aid:

Currently, at version 3.25.2, SQLite only has a limited support for alter
table. E.g. you cannot change the datatype (type affinity) of a column, or
drop a column.

The usual workaround is to create a new table with the desired schema, fill
it with data from the original table, drop the original table, and rename
the new one. But what if the original table is a parent table in a foreign
key relationship?

The official solution
<https://www.sqlite.org/lang_altertable.html#otheralter> is turning foreign
keys off, making the changes you want, then turning foreign keys on. But
I'm slightly annoyed this cannot be done in a transaction (because these
PRAGMA foreign_keys =s don't take effect inside of transactions, so they
need to be issued before and after).



"PRAGMA foreign_keys = ?" is a property of the connection only, not the
database file. So what advantage would there be in including the PRAGMA
statements in the body of a transaction?

Dan.





I'd like to use deferred foreign keys instead. I have 3 queries, one of
them seems to work, the two others do not. My questions are the following:

   - Does the seemingly working query work by design? Or it's just a
   fortunate(?) constellation of multiple factors, and depending on other
   tables or new data in the database it might break in the future? Somewhat
   like undefined behavior in C++?
   - Why do the other ones not work? How are they different from the first
   one?

Setup

PRAGMA foreign_keys = OFF;

DROP TABLE IF EXISTS Parent;
CREATE TABLE Parent(A TEXT UNIQUE, COLUMN_TO_DROP FLOAT);
INSERT INTO Parent VALUES('whatever', 0.0);

DROP TABLE IF EXISTS Child;
CREATE TABLE Child(C TEXT REFERENCES Parent(A) DEFERRABLE INITIALLY DEFERRED);
INSERT INTO Child VALUES('whatever');

PRAGMA foreign_keys = ON;

Query1 - seems to be working as intended

BEGIN TRANSACTION;
CREATE TABLE Temp AS SELECT A FROM Parent;
DROP TABLE Parent;
CREATE TABLE Parent (A TEXT UNIQUE);
INSERT INTO Parent SELECT * FROM Temp;
DROP TABLE Temp;
COMMIT;

Query2 - create [...] as select [...] fails

BEGIN TRANSACTION;
CREATE TABLE Temp AS SELECT A FROM Parent;
DROP TABLE Parent;
CREATE TABLE Parent AS SELECT * FROM Temp;      -- different from Query1
CREATE UNIQUE INDEX ParentIndex on Parent(A);   -- different from Query1
DROP TABLE Temp;
COMMIT;

Result:

sqlite> PRAGMA foreign_key_check;
sqlite> .schema
CREATE TABLE Child(C TEXT REFERENCES Parent(A) DEFERRABLE INITIALLY DEFERRED);
CREATE TABLE Parent(A TEXT);
CREATE UNIQUE INDEX ParentIndex on Parent(A);
sqlite> SELECT * FROM Parent;
whatever

Query3 - insert into [...] fails

BEGIN TRANSACTION;
CREATE TABLE Temp (A TEXT UNIQUE);              -- different from Query1
INSERT INTO Temp SELECT A FROM Parent;          -- different from Query1
DROP TABLE Parent;
CREATE TABLE Parent (A TEXT UNIQUE);
INSERT INTO Parent SELECT * FROM Temp;
DROP TABLE Temp;
COMMIT;

Result:

sqlite> PRAGMA foreign_key_check;
sqlite> .schema
CREATE TABLE Child(C TEXT REFERENCES Parent(A) DEFERRABLE INITIALLY DEFERRED);
CREATE TABLE Parent (A TEXT UNIQUE);
sqlite> SELECT * FROM Parent;
whatever

Note that PRAGMA foreign_key_check does not report any problem in any of the cases.

Re: [sqlite] Bug?: unexpected behaviour - alter parent table in a deferred foreign key relationship

2018-11-23 Thread Dan Kennedy

On 11/23/2018 10:47 PM, Thomas Kurz wrote:

From what I've learned so far, SQLite stores all data "as is" into any column
regardless of the column declaration. The affinity only matters upon reading, am I 
correct? If so, would it be a big deal implementing ALTER TABLE ALTER COLUMN?


Affinity changes are applied before data is written to the database. As 
you say though, implementing ALTER TABLE to change the type would be 
easier if they were not.
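
A small illustration of affinity being applied at write time (nothing to do
with ALTER TABLE itself):

  CREATE TABLE t(x INTEGER);
  INSERT INTO t VALUES('42');      -- text literal, column has INTEGER affinity
  SELECT typeof(x) FROM t;         -- "integer": the coercion happened on insert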


Dan.






- Original Message -----
From: Dan Kennedy 
To: sqlite-users@mailinglists.sqlite.org 
Sent: Friday, November 23, 2018, 16:30:12
Subject: [sqlite] Bug?: unexpected behaviour - alter parent table in a deferred 
foreign key relationship

On 11/23/2018 09:54 PM, szmate1618 wrote:

Dear list members,



I have the following problem, with which I'd like to request your aid:



Currently, at version 3.25.2, SQLite only has a limited support for alter
table. E.g. you cannot change the datatype (type affinity) of a column, or
drop a column.



The usual workaround is to create a new table with the desired schema, fill
it with data from the original table, drop the original table, and rename
the new one. But what if the original table is a parent table in a foreign
key relationship?



The official solution
<https://www.sqlite.org/lang_altertable.html#otheralter> is turning foreign
keys off, making the changes you want, then turning foreign keys on. But
I'm slightly annoyed this cannot be done in a transaction (because these PRAGMA
foreign_keys =s don't take effect inside of transactions, so they need to
be issued before and after).



"PRAGMA foreign_keys = ?" is a property of the connection only, not the
database file. So what advantage would there be in including the PRAGMA
statements in the body of a transaction?

Dan.





I'd like to use deferred foreign keys instead. I have 3 queries, one of
them seems to work, the two others do not. My questions are the following:



   - Does the seemingly working query work by design? Or it's just a
   fortunate(?) constellation of multiple factors, and depending on other
   tables or new data in the database it might break in the future? Somewhat
   like undefined behavior in C++?
   - Why do the other ones not work? How are they different from the first
   one?



Setup

PRAGMA foreign_keys = OFF;

DROP TABLE IF EXISTS Parent;
CREATE TABLE Parent(A TEXT UNIQUE, COLUMN_TO_DROP FLOAT);
INSERT INTO Parent VALUES('whatever', 0.0);

DROP TABLE IF EXISTS Child;
CREATE TABLE Child(C TEXT REFERENCES Parent(A) DEFERRABLE INITIALLY DEFERRED);
INSERT INTO Child VALUES('whatever');

PRAGMA foreign_keys = ON;

Query1 - seems to be working as intended

BEGIN TRANSACTION;
CREATE TABLE Temp AS SELECT A FROM Parent;
DROP TABLE Parent;
CREATE TABLE Parent (A TEXT UNIQUE);
INSERT INTO Parent SELECT * FROM Temp;
DROP TABLE Temp;
COMMIT;

Query2 - create [...] as select [...] fails

BEGIN TRANSACTION;
CREATE TABLE Temp AS SELECT A FROM Parent;
DROP TABLE Parent;
CREATE TABLE Parent AS SELECT * FROM Temp;      -- different from Query1
CREATE UNIQUE INDEX ParentIndex on Parent(A);   -- different from Query1
DROP TABLE Temp;
COMMIT;

Result:

sqlite> PRAGMA foreign_key_check;
sqlite> .schema
CREATE TABLE Child(C TEXT REFERENCES Parent(A) DEFERRABLE INITIALLY DEFERRED);
CREATE TABLE Parent(A TEXT);
CREATE UNIQUE INDEX ParentIndex on Parent(A);
sqlite> SELECT * FROM Parent;
whatever

Query3 - insert into [...] fails

BEGIN TRANSACTION;
CREATE TABLE Temp (A TEXT UNIQUE);              -- different from Query1
INSERT INTO Temp SELECT A FROM Parent;          -- different from Query1
DROP TABLE Parent;
CREATE TABLE Parent (A TEXT UNIQUE);
INSERT INTO Parent SELECT * FROM Temp;
DROP TABLE Temp;
COMMIT;

Result:

sqlite> PRAGMA foreign_key_check;
sqlite> .schema
CREATE TABLE Child(C TEXT REFERENCES Parent(A) DEFERRABLE INITIALLY DEFERRED);
CREATE TABLE Parent (A TEXT UNIQUE);
sqlite> SELECT * FROM Parent;
whatever



Note that PRAGMA foreign_key_check does not report any problem in any of
the cases.




I posted an identical question on StackOverflow, but no one was able to
provide any information so far. Thanks in advance!



Máté Szabó
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users





Re: [sqlite] Bug?: unexpected behaviour - alter parent table in a deferred foreign key relationship

2018-11-23 Thread Dan Kennedy

On 11/23/2018 09:54 PM, szmate1618 wrote:

Dear list members,

I have the following problem, with which I'd like to request your aid:

Currently, at version 3.25.2, SQLite only has a limited support for alter
table. E.g. you cannot change the datatype (type affinity) of a column, or
drop a column.

The usual workaround is to create a new table with the desired schema, fill
it with data from the original table, drop the original table, and rename
the new one. But what if the original table is a parent table in a foreign
key relationship?

The official solution
<https://www.sqlite.org/lang_altertable.html#otheralter> is turning foreign
keys off, making the changes you want, then turning foreign keys on. But
I'm slightly annoyed this cannot be done in a transaction (because these PRAGMA
foreign_keys =s don't take effect inside of transactions, so they need to
be issued before and after).



"PRAGMA foreign_keys = ?" is a property of the connection only, not the 
database file. So what advantage would there be in including the PRAGMA 
statements in the body of a transaction?


Dan.





I'd like to use deferred foreign keys instead. I have 3 queries, one of
them seems to work, the two others do not. My questions are the following:

   - Does the seemingly working query work by design? Or it's just a
   fortunate(?) constellation of multiple factors, and depending on other
   tables or new data in the database it might break in the future? Somewhat
   like undefined behavior in C++?
   - Why do the other ones not work? How are they different from the first
   one?

Setup

PRAGMA foreign_keys = OFF;

DROP TABLE IF EXISTS Parent;
CREATE TABLE Parent(A TEXT UNIQUE, COLUMN_TO_DROP FLOAT);
INSERT INTO Parent VALUES('whatever', 0.0);

DROP TABLE IF EXISTS Child;
CREATE TABLE Child(C TEXT REFERENCES Parent(A) DEFERRABLE INITIALLY DEFERRED);
INSERT INTO Child VALUES('whatever');

PRAGMA foreign_keys = ON;

Query1 - seems to be working as intended

BEGIN TRANSACTION;
CREATE TABLE Temp AS SELECT A FROM Parent;
DROP TABLE Parent;
CREATE TABLE Parent (A TEXT UNIQUE);
INSERT INTO Parent SELECT * FROM Temp;
DROP TABLE Temp;
COMMIT;

Query2 - create [...] as select [...] fails

BEGIN TRANSACTION;
CREATE TABLE Temp AS SELECT A FROM Parent;
DROP TABLE Parent;
CREATE TABLE Parent AS SELECT * FROM Temp;      -- different from Query1
CREATE UNIQUE INDEX ParentIndex on Parent(A);   -- different from Query1
DROP TABLE Temp;
COMMIT;

Result:

sqlite> PRAGMA foreign_key_check;
sqlite> .schema
CREATE TABLE Child(C TEXT REFERENCES Parent(A) DEFERRABLE INITIALLY DEFERRED);
CREATE TABLE Parent(A TEXT);
CREATE UNIQUE INDEX ParentIndex on Parent(A);
sqlite> SELECT * FROM Parent;
whatever

Query3 - insert into [...] fails

BEGIN TRANSACTION;
CREATE TABLE Temp (A TEXT UNIQUE);              -- different from Query1
INSERT INTO Temp SELECT A FROM Parent;          -- different from Query1
DROP TABLE Parent;
CREATE TABLE Parent (A TEXT UNIQUE);
INSERT INTO Parent SELECT * FROM Temp;
DROP TABLE Temp;
COMMIT;

Result:

sqlite> PRAGMA foreign_key_check;
sqlite> .schema
CREATE TABLE Child(C TEXT REFERENCES Parent(A) DEFERRABLE INITIALLY DEFERRED);
CREATE TABLE Parent (A TEXT UNIQUE);
sqlite> SELECT * FROM Parent;
whatever

Note that PRAGMA foreign_key_check does not report any problem in any of
the cases.


I posted an identical question on StackOverflow, but no one was able to
provide any information so far. Thanks in advance!

Máté Szabó
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users





Re: [sqlite] 'Database table is locked' error with libgpkg and SQLite >= 3.24.0

2018-11-21 Thread Dan Kennedy

On 11/21/2018 11:01 PM, Benjamin Stadin wrote:

Hi,

I've forked libgpkg on Github to merge fixes from other repositories and update 
the embedded SQLite (was at 3.8.). Though in the RTREE tests I get a 'database 
table is locked' error for the update statements (see exact error message 
below). I ran the tests manually in the gpkg shell and could not reproduce the 
issue for the same sequence of commands. The tests are written in ruby, which 
could explain the different behaviour.

I tried different versions of SQLite and can confirm the RTREE tests for SQLite 
versions prior to 3.24.0 succeed (tested with 3.23.1 and 3.19.3). All recent 
versions break tests with mentioned error (tested versions include 3.24.0, 
3.25.0, 3.25.3, current snapshot).



As of 3.24.0, rtree prevents you from writing to a table while the same 
connection has an active SELECT on the same table. More detail in this 
thread:



http://sqlite.1065341.n5.nabble.com/3-24-database-table-is-locked-td102856.html

Dan.





To reproduce the issue:
- Clone the repository at https://github.com/benstadin/libgpkg
- Configure with tests and RTREE:
cmake -DCMAKE_BUILD_TYPE:STRING=Release -DGPKG_TEST:BOOL=on 
-DSQLITE_ENABLE_RTREE=1 .
- Build
make
- Run the tests with detailed error messages:
make CTEST_OUTPUT_ON_FAILURE=1 test

Cheers
Ben

Error message:

1) CreateSpatialIndex should create working spatial index
 Failure/Error: expect("UPDATE test SET geom = GeomFromText('POINT(2 2)') WHERE 
id = 1").to have_result nil
   expected UPDATE test SET geom = GeomFromText('POINT(2 2)') WHERE id = 1 
to have result nil but raised error 'database table is locked'
 # ./rtree_spec.rb:41:in `block (2 levels) in '

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users





Re: [sqlite] Bug: SQLite shell does not build on Windows with SQLITE_OMIT_COMPLETE

2018-11-16 Thread Dan Kennedy

On 11/16/2018 08:41 PM, Victor Costan wrote:

The amalgamation build ships two definitions of sqlite3_complete(). Most
builds use the one in sqlite3.c. If SQLITE_OMIT_COMPLETE is defined,
shell.c supplies a stub definition -- thank you very much for adding that!

Unfortunately, the stub definition in shell.c does not have the SQLITE_API
prefix. This causes our Windows compiler to complain. Can you please add
SQLITE_API to the shell.c line that starts with "int sqlite3_complete("?

Thank you very much,
Victor
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



Thanks for reporting this. Should now be fixed here:

  https://www.sqlite.org/src/info/d584a0cb51281594

Dan.


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Built-in Window Functions and Filter

2018-11-15 Thread Dan Kennedy

On 11/15/2018 02:56 AM, Douglas Hall wrote:

From https://www.sqlite.org/windowfunctions.html:
"It is a syntax error to specify a FILTER clause as part of a built-in window 
function invocation."

I'm curious, why is this the case? Why is a FILTER clause only allowed when 
used with aggregate functions?


One real reason is because postgres (and I think everybody else at the 
moment) disallows them.


Another is that it's not always clear what FILTER expressions should do 
for built-in window functions. How does it affect row_number(), for example?


Dan.





Thanks,
Douglas Hall
CONFIDENTIALITY NOTICE: This e-mail contains PRIVILEGED AND CONFIDENTIAL 
information intended only for the use of the individual(s) or entity named 
above. If you are not the intended recipient, you are notified that any 
disclosure, copying, distribution, electronic storage or use of this 
communication is prohibited. If you received this communication in error, 
please notify me immediately by e-mail, attaching the original message, and 
delete the original message from your computer and any network to which your 
computer is connected.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users





Re: [sqlite] Documentation of SQLite

2018-11-10 Thread Dan Kennedy

On 11/10/2018 05:44 PM, Patrik Nilsson wrote:

Dear All,

Where can I find the downloadable HTML documentation? Not so long ago I
could download it on the download page.


It was forgotten for the 3.25.3 release. It's back up now.

Dan.

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [EXTERNAL] found a glitch in ALTER TABLE RENAME (3.25.x)

2018-11-09 Thread Dan Kennedy

On 11/10/2018 08:18 AM, Keith Medcalf wrote:


On Friday, 9 November, 2018 16:20, J. King wrote:

To: SQLite mailing list



That could lead to loss of referential integrity when modifying a
table in a way not supported by ALTER TABLE, I believe.  One usual
method is to turn foreign keys off, rename the old table, create a
new modified table under the old name, insert the old table's data
into the new, and then drop the old table. If foreign keys were, for
lack of a better term, partially enforced, this process would yield
new and backwards-incompatible results: when the old table is renamed
all referring tables would continue to refer to the old table, and
when the table is dropped all references would become invalid.


Foreign Key constraints should be updated notwithstanding the status of PRAGMA 
foreign_keys at the time the ALTER command is given.  The current behaviour is 
inconsistent and incorrect.

If one wants the "legacy behaviour" for the ALTER commands because one wants those 
commands to operate in the legacy fashion, then one should be using the pragma that has been 
provided for that purpose.  This would revert to the "legacy" way of doing things.  Which 
means doing whatever was done in legacy processing of the ALTER commands including the handling of 
foreign key constraints.



I think this is a good answer. Now implemented here:

  https://sqlite.org/src/info/ae9638e9c0ad0c36

Dan.







https://sqlite.org/pragma.html#pragma_legacy_alter_table

When processing an "ALTER" statement under the "new" (vs the legacy) rules, leaving a 
dependency on some other setting is surprising.  Unless one specifically requests "legacy" 
processing, the processing of an ALTER statement should never result in a database which is self-inconsistent.

---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.



On November 9, 2018 2:50:56 AM EST, Dominique Devienne
 wrote:

On Fri, Nov 9, 2018 at 8:26 AM Simon Slavin 
wrote:


On 9 Nov 2018, at 7:11am, Hick Gunter  wrote:

Foreign keys are ignored by default and need to be explicitly enabled. I
would expect this to include everything that relates to foreign keys.

I've casually discovered that the behavior of ALTER TABLE RENAME TO
(version 3.25.x) seems to be affected by an odd glitch; FOREIGN KEY
constraints are updated as expected only when PRAGMA foreign_keys=1,
otherwise they are just ignored.

While your statement is correct, I would argue that the behaviour is
harmful, and cannot possibly be what any programmer could want to happen.
Think how difficult it would be to diagnose this having happened, and then
fix the problem it would create.
I might argue that correct behaviour would be to transform FOREIGN KEY
constraints even when the PRAGMA is off.  Or I might argue that the attempt
should generate an error result (SQLITE_MISUSE ?).

Foreign keys are ignored in the sense they are not enforced, but they are
still parsed AFAIK.
As such, I agree with Simon that it's harmful to not update them on table
(or column) renames. --DD
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


--
J. King
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users






Re: [sqlite] race condition?

2018-10-29 Thread Dan Kennedy

On 10/29/2018 03:45 PM, Lei Chen wrote:

Hi experts,

I'm debugging a tricky issue related to sqlite(3.9.2) database access. This
is on Linux 3.2 kernel. Basically, when the failure occurs, there are two
processes accessing the same -journal file concurrently, see below log.
When daemon scsitgtd wants to "commit" a transaction, it finds that the
-journal file was already deleted by someone(actually, it's procmon daemon,
which needs to access the same database to retrieve some info).

This issue happens intermittently, having something to do with timing.

Having studied the sqlite code and documents, I think the database file
should have been *locked *when the -journal is created, upon "commit". In
theory, no other processes could have got the lock and "rollback"ed the
hot-journal. However, we did see procmon slipped in. Does anybody know if
this is a known issue in the old sqlite version? Or how can I continue to
debug the lock contention issue?


Not a known issue.

There are some common problems regarding locking enumerated here:

  https://www.sqlite.org/howtocorrupt.html#_file_locking_problems

In practice, the ones in sections 2.2 and 2.2.1 seem to come up most often.

Dan.













>>> 1. "journal" file is detected
Sep 18 03:34:23 procmon: INFO: Hot journal detected: /registry/m0/scsitgtd.db3-journal
Sep 18 03:34:23 procmon: INFO: SQLITE: rc=539, recovered 9 pages from /registry/m0/scsitgtd.db3-journal

>>> 2. commit failed because "journal" file is missing
Sep 18 03:34:23 scsitgtd[26949]: ERROR: Registry /registry/m0/scsitgtd.db3 exec("commit") error 5898: disk I/O error (retries=0)
Sep 18 03:34:23 scsitgtd[26949]: INFO: SQLITE: rc=1, statement aborts at 2: [rollback] cannot rollback - no transaction is active
Sep 18 03:34:23 scsitgtd[26949]: ERROR: Registry /registry/m0/scsitgtd.db3 exec("rollback") error 1: cannot rollback - no transaction is active (retries=0)
Sep 18 03:34:23 scsitgtd[26949]: ERROR: Error 5551 committing transaction: SQLite error 5898 on registry /registry/m0/scsitgtd.db3 during exec("commit")

Please copy me when you kindly reply.

Thanks,
Lei Chen
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users





Re: [sqlite] Optmize queries on ranges

2018-10-26 Thread Dan Kennedy

On 10/26/2018 02:27 PM, siscia wrote:

Hi all,

thanks for your suggestions, unfortunately, I already tried all of them,
except for the rtrees.

Actually, my request for help wasn't complete.

The ranges I am storing in the table are not overlapping.

To make an example in SQL.

The following can be in the dataset:
INSERT INTO ranges VALUES(1, 10, 5);
INSERT INTO ranges VALUES(15, 29, 8);
INSERT INTO ranges VALUES(30, 32, 9);

However, there will never be something like:
INSERT INTO ranges VALUES(1, 10, 5);
INSERT INTO ranges VALUES(5, 15, 8); -- impossible, overlaps with the first one

So all the queries are actually:

`SELECT value FROM ranges WHERE (? BETWEEN start AND end) LIMIT 1`

Now suppose there is an index on start and so we are looking for (start < ?)

What could happen is that we begin from (start = 0) and move up to (start <=
?), which is basically a full scan.
Or we could begin from (start <= ?) and move down towards (start = 0) which
would be optimal.



In SQL, I guess that is:

  SELECT value FROM ranges WHERE (? BETWEEN start AND end)
  ORDER BY start DESC LIMIT 1

Or, perhaps more efficient for the cases where there is no such range:

  SELECT value FROM (
SELECT value, start, end FROM ranges
WHERE start <= ?
ORDER BY start DESC LIMIT 1
  ) WHERE end >= ?
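
Either form presumably also wants an index with "start" as its leftmost
column, so the ORDER BY start DESC LIMIT 1 can be satisfied by stepping
backwards through the index from the largest start <= ? (a sketch, not
tested against your schema):

  CREATE INDEX ranges_start ON ranges(start);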

Dan.




I am afraid that we are hitting the first case, which really is a pity.

Is there a way to suggest to the index how to work on these cases?

Cheers,

Simone







--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users





Re: [sqlite] Optmize queries on ranges

2018-10-25 Thread Dan Kennedy

On 10/25/2018 11:13 PM, siscia wrote:

Hi all,

I am facing an interesting optimization problem.

I have a table like this:

CREATE TABLE ranges (
start int,
end int,
value int
);

The query that I am interested in optimizing is "select value from ranges
where (? between start and end)"

The max performance that I was able to get is 250 results/second with a
covering index on all three columns.

Now, if I do a more classic "select value from ranges where start = ?" this
provides 14 results/second

So I am pretty sure that I am doing something quite wrong.

Do you guys have any idea of what it could be? How can I obtain better
results?


Your query is the same as "start <= ? AND end >= ?". The trouble is that 
SQLite can only use the index to optimize one of "start <= ?" or "end >= 
?". And so you might be iterating through a very large set of records to 
extract the ones you want.


R-tree might work for you:

  https://sqlite.org/rtree.html
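
A one-dimensional rtree sketch for this kind of range lookup might look
like the following (table and column names are only illustrative; the rtree
holds the intervals and the base table keeps the value). Note that the
default rtree stores coordinates as 32-bit floats, so very large integer
endpoints lose precision:

  CREATE VIRTUAL TABLE ranges_idx USING rtree(id, s, e);
  INSERT INTO ranges_idx SELECT rowid, start, "end" FROM ranges;

  SELECT r.value
    FROM ranges_idx AS i JOIN ranges AS r ON r.rowid = i.id
   WHERE i.s <= ?1 AND i.e >= ?1;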

Dan.


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] union + window functions = sqlite crash (version 3.25.2)

2018-10-23 Thread Dan Kennedy

On 10/23/2018 03:13 AM, Peter Ďurica wrote:

Table with sample data:
create table t(a int, b int);
insert into t values(1,11);
insert into t values(2,12);

now query using any window function (row_number, rank, ...) after UNION or
UNION ALL will cause sqlite.exe crash (no regular error)

for example:
select a, rank() over(order by b) from t
union all
select a, rank() over(order by b desc) from t;
WILL CRASH

but single statement is fine

select a, rank() over(order by b desc) from t;

and also window function just before union is fine
select a, rank() over(order by b) from t
union all
select a, b from t;

when I used e_sqlite3.dll (https://github.com/ericsink/SQLitePCL.raw) from
my c# app, I got error below
Exception thrown at 0x7FFF563BF797 (e_sqlite3.dll) in WebLES.exe:
0xC005: Access violation reading location 0x0008.

I believe it's a bug in core sqlite, which should be fixed



It is indeed, thanks for reporting it. Should now be fixed here:

  https://www.sqlite.org/src/info/059ff53a46c7f1e4

Dan.



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] 64-column covering index limit clarification

2018-10-21 Thread Dan Kennedy

On 10/19/2018 02:30 AM, Deon Brewis wrote:

Hi,

I seem to have run into a limit where SQLITE doesn't use an index correctly if 
an indexed column is over the 64th column in the table. It's a partial index 
like:

CREATE INDEX idx ON
  table(A, B DESC, C, D)
  WHERE A > 0

Where A and B are columns 70 and 72 on 'table'.

I know about the 64-column limitation for covering indexes:
http://sqlite.1065341.n5.nabble.com/Max-of-63-columns-for-a-covering-index-to-work-td68945.html

However, this isn't a covering index, it's a partial index. But it seems to run into the same 
limit. Even if I forced in the index into a query it still does a "USE TEMP B-TREE" at 
the end to satisfy a simple "ORDER BY A, B DESC" query. After I re-ordered the table, it 
magically started working.



Can you post an SQL script that demonstrates the problem?

Running the script below here, the partial index is used to optimize the 
ORDER BY in the query.


Thanks,
Dan.




CREATE TABLE t1(
  c0, c1, c2, c3, c4, c5, c6, c7, c8, c9,
  c10, c11, c12, c13, c14, c15, c16, c17, c18, c19,
  c20, c21, c22, c23, c24, c25, c26, c27, c28, c29,
  c30, c31, c32, c33, c34, c35, c36, c37, c38, c39,
  c40, c41, c42, c43, c44, c45, c46, c47, c48, c49,
  c50, c51, c52, c53, c54, c55, c56, c57, c58, c59,
  c60, c61, c62, c63, c64, c65, c66, c67, c68, c69,
  c70, c71, c72, c73, c74, c75, c76, c77, c78, c79,
  c80, c81, c82, c83, c84, c85, c86, c87, c88, c89
);

CREATE INDEX i1 ON t1(c80, c81 DESC, c82, c83) WHERE c80>0;
EXPLAIN QUERY PLAN SELECT * FROM t1 WHERE c80>0 ORDER BY c80, c81 DESC;




___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [SQLITE]ignoring syntax errors during search for sth include "symbols"

2018-10-20 Thread Dan Kennedy

On 10/18/2018 04:58 PM, Maziar Parsijani wrote:

Hi,
how could I ignore syntax errors like this?


SELECT *,
  highlight(searchsimpleenhanced, 2, '', '') text
FROM searchsimpleenhanced
WHERE searchsimpleenhanced MATCH 'sth][';




You can use double quotes to search for a token that contains characters 
that are special to FTS5. e.g.


  WHERE col MATCH '"sth]["'

is not a syntax error.

Dan.




There may be nothing to match, but I don't like getting syntax errors for a
symbol or character like ([ ] () , . @ ...).
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users





Re: [sqlite] Hot-Journal with VFS

2018-10-20 Thread Dan Kennedy

On 10/18/2018 09:27 PM, Bob schwanzer wrote:

Hi,
I'm seeing hot journals frequently in our SQLite installation. We use a VFS
which does checksumming and some other operations. There are no abnormal
scenarios such as power off, process crash, or abandoned transactions... The DB is
opened by 10-20 processes, each of which can have multiple threads.

Old version of SQLite: 3.7.13
Process A is in write mode. Process B, which is reading a value, used to
see a hot journal.
Now we have upgraded to 3.9.2.
Process A is in write mode. Process B is trying to read a value. B sees a
hot journal and recovers it.
Process A then gets the error introduced in the new release: "The xDelete method
in the built-in VFSes now return SQLITE_IOERR_DELETE_NOENT if the file to be
deleted does not exist"

A few things stand out:
A hot journal was reported in 3.7.13. There are no logs to indicate its
recovery.
A hot journal is seen in 3.9.2. There are logs to indicate its rollback.
The committer process (Process A) gets an error.

I know some optimization in the hard disk or OS is creating the problem. What
other layers could contribute to this anomaly?


Sounds like the locking function (sqlite3_io_methods.xLock callback) is 
broken. Process A should be holding at least RESERVED on the db file 
while creating/using a journal file as part of a write transaction. And 
process B requires EXCLUSIVE to rollback and delete the journal. Those 
two are supposed to be mutually exclusive:


  https://www.sqlite.org/lockingv3.html

A hunch: If you're (a) using unix, and open()ing and close()ing any file 
descriptors on SQLite databases, this POSIX bug may be tripping you up 
somehow:



https://www.sqlite.org/howtocorrupt.html#_posix_advisory_locks_canceled_by_a_separate_thread_doing_close_



Dan.




How could the installation end up with hot journals without the system
experiencing a crash or power loss?









--
Sent from: http://sqlite.1065341.n5.nabble.com/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users




