[sqlite] mkkeywordhash crashes with MSVC works with GCC commit f12e743e

2019-10-31 Thread Keith Medcalf

The reorder function crashes when it tries to recurse when the tool is built 
with the MSVC compiler.  GCC (MinGW) works fine.

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.




___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] DELETE extremely slow

2019-10-31 Thread Thomas Kurz
> According to the MariaDB reference manual, it does not "do anything" with 
> references clauses on columns.

Thanks for that hint, I will try again tomorrow because I cannot say for sure 
now whether it worked correctly or not. (And I don't have that data available 
anymore.)



Re: [sqlite] Prepared statements in FTS MATCH queries

2019-10-31 Thread mailing lists
Hi Dan,

I did not know that. What was the reason that it did not work before 3.30?

Regards,
Hartwig

> On 2019-10-31 at 19:16, Dan Kennedy wrote:
> 
> 
> On 1/11/62 00:32, mailing lists wrote:
>> For normal tables I can use something like:
>> 
>> SELECT * FROM Names WHERE FirstNames=? AND or OR LastName=?;
>> 
>> For FTS tables I can only use
>> 
>> SELECT * FROM FTSNames WHERE FirstNames MATCH ? OR LastName MATCH ?; AND is 
>> not supported (still do not know why)
>> 
>> Is there any possibility to use prepared statements for FTS tables with an 
>> AND condition? I like to prevent code injection.
> 
> As of 3.30.0, should work with FTS5.
> 
> Dan.
> 
> 
>> 
>> Regards,
>> Hartwig
>> 



Re: [sqlite] Prepared statements in FTS MATCH queries

2019-10-31 Thread Dan Kennedy


On 1/11/62 00:32, mailing lists wrote:

For normal tables I can use something like:

SELECT * FROM Names WHERE FirstNames=? AND or OR LastName=?;

For FTS tables I can only use

SELECT * FROM FTSNames WHERE FirstNames MATCH ? OR LastName MATCH ?; AND is not 
supported (still do not know why)

Is there any possibility to use prepared statements for FTS tables with an AND 
condition? I like to prevent code injection.


As of 3.30.0, should work with FTS5.

Dan.
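For reference, a minimal sketch of what Dan is describing (table and column names are taken from the question; this assumes an SQLite build of at least 3.30.0 with FTS5 compiled in, and guards for older builds), shown via Python's stdlib sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
rows = None
try:
    conn.execute("CREATE VIRTUAL TABLE FTSNames USING fts5(FirstNames, LastName)")
    conn.executemany("INSERT INTO FTSNames VALUES (?, ?)",
                     [("John", "Smith"), ("John", "Doe")])
    # As of SQLite 3.30.0 each MATCH may carry its own bound parameter,
    # even when the two MATCH terms are combined with AND.
    rows = conn.execute(
        "SELECT * FROM FTSNames WHERE FirstNames MATCH ? AND LastName MATCH ?",
        ("John", "Smith")).fetchall()
except sqlite3.OperationalError:
    pass  # FTS5 not compiled in, or SQLite older than 3.30.0
```

Binding the search terms this way keeps user input out of the SQL text, which is the injection-prevention the original question asked about.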




Regards,
Hartwig



Re: [sqlite] DELETE extremely slow

2019-10-31 Thread Keith Medcalf

On Thursday, 31 October, 2019 03:51, Thomas Kurz  wrote:

>I experimentally imported the same data into a MariaDB database and tried
>the same operation there (without paying attention to creating any
>indexes, etc.). It takes only a few seconds there.

According to the MariaDB reference manual, it does not "do anything" with 
references clauses on columns.  They are merely for entertainment purposes.  
You have to use the table constraint syntax to declare enforceable foreign key 
constraints, which means you cannot use the same CREATE TABLE syntax for 
MariaDB as for SQLite3.

From https://mariadb.com/kb/en/library/create-table/

Note: MariaDB accepts the REFERENCES clause in ALTER TABLE and CREATE TABLE 
column definitions, but that syntax does nothing. MariaDB simply parses it 
without returning any error or warning, for compatibility with other DBMS's. 
Before MariaDB 10.2.1 this was also true for CHECK constraints. Only the syntax 
for indexes described below creates foreign keys.
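In other words, to get a foreign key that both engines enforce, the constraint must be written at the table level. A minimal sketch (illustrative table names loosely based on this thread's schema, exercised against SQLite through Python's sqlite3 module; the same table-constraint DDL shape is what MariaDB/InnoDB also enforces):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforcement is opt-in per connection
conn.execute("CREATE TABLE dataset (id INTEGER PRIMARY KEY)")
# table-level FOREIGN KEY constraint, rather than a column-level REFERENCES clause
conn.execute("""
    CREATE TABLE trace (
        id        INTEGER PRIMARY KEY,
        datasetid INTEGER NOT NULL,
        FOREIGN KEY (datasetid) REFERENCES dataset (id)
    )""")
try:
    conn.execute("INSERT INTO trace VALUES (1, 99)")  # no dataset row 99 exists
    enforced = False
except sqlite3.IntegrityError:
    enforced = True  # constraint rejected the orphan child row
```

Note that SQLite itself enforces the column-level REFERENCES form as well; the table-level form is the portable one.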






[sqlite] Prepared statements in FTS MATCH queries

2019-10-31 Thread mailing lists
For normal tables I can use something like:

SELECT * FROM Names WHERE FirstNames=? AND or OR LastName=?;

For FTS tables I can only use

SELECT * FROM FTSNames WHERE FirstNames MATCH ? OR LastName MATCH ?; AND is not 
supported (I still do not know why).

Is there any way to use prepared statements for FTS tables with an AND 
condition? I'd like to prevent SQL injection.

Regards,
Hartwig



Re: [sqlite] DELETE extremely slow

2019-10-31 Thread Keith Medcalf

On Thursday, 31 October, 2019 10:52, Simon Slavin  wrote:

>On 31 Oct 2019, at 4:29pm, Keith Medcalf  wrote:

>> If the elapsed time is much greater than the sum of user+sys time then
>> I would suspect it is still I/O thrashing (or the machine is really busy
>> doing something else -- those original timings represent only a 57%
>> process dispatch rate, which is pretty low).

> I'm betting it's a rotating hard disk with a slow spin speed, and most of
> the time is spent waiting for the disk to be in the right place.

> As Dominique Devienne spotted, there is now some doubt about the
> comparative figure.  Was the MariaDB filespace hosted on this same hard
> disk ?

While nice, that is irrelevant.  MariaDB is a client/server database so it 
probably uses a database cache that is considerably larger by default than the 
SQLite default.  The SQLite default cache is 2 MB which is 500 pages at the 
default page size of 4K.

>The original post had

>> The largest table contains about 230'000 entries.

> Five tables and eleven indexes.  Sixteen pages and sixteen page indexes.
> Even if you think that all the required indexes for FOREIGN KEY lookup
> are now available, I think that 88 minutes is still longer than SQLite
> should take for anything but a slow disk.

That is why I asked about the cache_size.  If the cache_size has not been 
increased from the default, then a large operation like that will thrash, 
spilling pages to disk.  For I/O-intensive operations the size of SQLite's 
page cache makes a huge difference in reducing I/O, far more than an adequate 
filesystem or block cache does.
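As a rough sketch, the page cache can be enlarged per connection with a pragma (shown here via Python's sqlite3 module; the 100 MB figure is illustrative, not a recommendation from this thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Negative cache_size values are in KiB: -100000 asks for roughly a
# 100 MB page cache, up from the default of about 2 MB. The setting
# is per connection and is not persisted in the database file.
conn.execute("PRAGMA cache_size = -100000")
(size,) = conn.execute("PRAGMA cache_size").fetchone()
```

Setting it before a large DELETE keeps more dirty pages in memory instead of repeatedly spilling them to disk.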






Re: [sqlite] DELETE extremely slow

2019-10-31 Thread Simon Slavin
On 31 Oct 2019, at 4:29pm, Keith Medcalf  wrote:

> If the elapsed time is much greater than the sum of user+sys time then I 
> would suspect it is still I/O thrashing (or the machine is really busy doing 
> something else -- those original timings represent only a 57% process 
> dispatch rate, which is pretty low).

I'm betting it's a rotating hard disk with a slow spin speed, and most of the 
time is spent waiting for the disk to be in the right place.

As Dominique Devienne spotted, there is now some doubt about the comparative 
figure.  Was the MariaDB filespace hosted on this same hard disk ?

The original post had

> The largest table contains about 230'000 entries.

Five tables and eleven indexes.  Sixteen pages and sixteen page indexes.  Even 
if you think that all the required indexes for FOREIGN KEY lookup are now 
available, I think that 88 minutes is still longer than SQLite should take for 
anything but a slow disk.


Re: [sqlite] DELETE extremely slow

2019-10-31 Thread Keith Medcalf

On Thursday, 31 October, 2019 10:01, Dominique Devienne  
wrote:

>On Thu, Oct 31, 2019 at 4:10 PM Thomas Kurz  wrote:

>> Yes, but I'd expect that MariaDB has to do the same, but takes clearly
>> less than 1 minute instead of 88 minutes... :confused:

>Are we comparing apples to oranges here?

>SQLite runs on the local machine. While MariaDB is client-server, so the
>delete effectively runs on the server.
>Is your MariaDB server local to the same machine you're running SQLite
>on?
>Are both using the same local disks? (as opposed to networked and/or
>different disks)

>Still can't account for 88 minutes though.

Well, 88 minutes is a lot better than 56 hours, of which 24 hours was "waiting" 
time.  How much of the 88 minutes is "waiting" time?  If the elapsed time is 
much greater than the sum of user+sys time then I would suspect it is still I/O 
thrashing (or the machine is really busy doing something else -- those original 
timings represent only a 57% process dispatch rate, which is pretty low).

>You haven't shown the "explain query plan" Keith asked for, which would
>help understand what's going on here.

Well, having added the missing index I doubt that there is anything else to be 
found there.







Re: [sqlite] DELETE extremely slow

2019-10-31 Thread Dominique Devienne
On Thu, Oct 31, 2019 at 4:10 PM Thomas Kurz  wrote:

> Yes, but I'd expect that MariaDB has to do the same, but takes clearly
> less than 1 minute instead of 88 minutes... :confused:
>

Are we comparing apples to oranges here?

SQLite runs on the local machine. While MariaDB is client-server, so the
delete effectively runs on the server.
Is your MariaDB server local to the same machine you're running SQLite on?
Are both using the same local disks? (as opposed to networked and/or
different disks)

Still can't account for 88 minutes though.
You haven't shown the "explain query plan" Keith asked for, which would
help understand what's going on here.
See https://www.sqlite.org/eqp.html


Re: [sqlite] DELETE extremely slow

2019-10-31 Thread Keith Medcalf
>pragma foreign_keys=on;
>pragma journal_mode=wal;
>.timer on
>delete from dataset;
>--> Run Time: real 197993.218 user 53015.593750 sys 54056.546875

Wow.  That is roughly 14 hours each of system and user time, and then an 
additional 24 hours of "waiting for something to happen" time.  Do you have 
enough memory to run this in?  Have you increased the SQLite cache size?  
Because that looks (to me) an awful lot like I/O thrashing ...
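The ratio Keith is reading off can be checked from the .timer output quoted above: when (user + sys) / real is well below 1.0, most of the elapsed time was spent blocked (mostly on I/O) rather than executing.

```python
# Timings from the .timer output quoted above, in seconds
real, user, sys_ = 197993.218, 53015.593750, 54056.546875

dispatch = (user + sys_) / real   # fraction of wall time actually on-CPU
waiting = real - (user + sys_)    # time spent blocked, mostly on I/O
```

This arithmetic gives a dispatch rate of roughly 54% (in the ballpark of the 57% quoted elsewhere in the thread), i.e. about 25 of the 55 elapsed hours were spent waiting.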






Re: [sqlite] DELETE extremely slow

2019-10-31 Thread Simon Slavin
On 31 Oct 2019, at 3:09pm, Thomas Kurz  wrote:

> The result of "DELETE FROM dataset" is now 88 minutes

That still seems wrong.  I hope the list can help you track down what's going 
on.


Re: [sqlite] DELETE extremely slow

2019-10-31 Thread David Raymond
It's not gonna account for 88 minutes, but out of curiosity is there a way to 
separately report the "delete and commit" time from the "dang this WAL needs to 
be checkpointed now" time?


-Original Message-
From: sqlite-users  On Behalf Of 
Thomas Kurz
Sent: Thursday, October 31, 2019 11:10 AM
To: SQLite mailing list 
Subject: Re: [sqlite] DELETE extremely slow

> Something is wrong.  If you did multiple commands like
>
> DELETE FROM MyTable;
>
> to your child tables, they should be fast.  Have you run an integrity check ?

I created a new database now, added the missing index "trace(datasetid)" as 
suggested by Keith.

The result of "DELETE FROM dataset" is now 88 minutes, which of course is 
better than before where it took hours, but not nearly as quick as I'd expect...

@Warren:
> Is that command representative of actual use, or are you deleting all rows 
> just for the purpose of benchmarking?

Usually I want to delete only some datasets, not all of them. I left out the 
WHERE clause here for simplicity.

@Keith:
> and in the face of enforced foreign key constraints will always delete the 
> rows one by each even if dependent (child) tables have no rows.

Yes, but I'd expect that MariaDB has to do the same, but takes clearly less 
than 1 minute instead of 88 minutes... :confused:



Re: [sqlite] DELETE extremely slow

2019-10-31 Thread Thomas Kurz
> Something is wrong.  If you did multiple commands like
>
> DELETE FROM MyTable;
>
> to your child tables, they should be fast.  Have you run an integrity check ?

I created a new database now, added the missing index "trace(datasetid)" as 
suggested by Keith.

The result of "DELETE FROM dataset" is now 88 minutes, which of course is 
better than before where it took hours, but not nearly as quick as I'd expect...

@Warren:
> Is that command representative of actual use, or are you deleting all rows 
> just for the purpose of benchmarking?

Usually I want to delete only some datasets, not all of them. I left out the 
WHERE clause here for simplicity.

@Keith:
> and in the face of enforced foreign key constraints will always delete the 
> rows one by each even if dependent (child) tables have no rows.

Yes, but I'd expect that MariaDB has to do the same, but takes clearly less 
than 1 minute instead of 88 minutes... :confused:



Re: [sqlite] [sqlite-dev] Error 25: column index out of range when binding for a DATETIME

2019-10-31 Thread Keith Medcalf

On Thursday, 31 October, 2019 07:17, Jeffrey Walton  wrote:

First, your question should go to the sqlite-users mailing list, not the 
developers mailing list.

>I'm having trouble binding a parameter for a DELETE. I am trying to
>delete records older than 120 days. Below, I have a table with a
>column dtime of type DATETIME. days is a dirty parameter specified by
>the user.

See https://sqlite.org/datatype3.html

>/* negative for days in the past */
>int days = 120;
>days = -days;

>const char DELETE_STMT[] = "DELETE from blacklist " \
>"WHERE dtime < datetime('now', '? days');";

This statement contains no parameter.  You have a string constant with a ? 
character inside the string.  Parameters go outside of constants, not inside 
them.  Perhaps try something like this (which will work only if days is 
negative):

const char DELETE_STMT[] = "DELETE from blacklist " \
"WHERE dtime < datetime('now', ? || ' days');";

You would not expect this to work, would you:

 int days = -120;
 printf("Days = days\n");
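The corrected statement can be exercised like this (a sketch in Python's stdlib sqlite3 module mirroring the quoted C; the blacklist table and dtime column come from the original post, the sample rows are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE blacklist (dtime DATETIME)")
conn.execute("INSERT INTO blacklist VALUES (datetime('now', '-200 days'))")
conn.execute("INSERT INTO blacklist VALUES (datetime('now'))")

days = -120
# The ? sits outside the string literal; SQLite concatenates the bound
# value with ' days' at run time to form the '-120 days' modifier.
conn.execute(
    "DELETE FROM blacklist WHERE dtime < datetime('now', ? || ' days')",
    (days,))
remaining = conn.execute("SELECT count(*) FROM blacklist").fetchone()[0]
```

The 200-day-old row is deleted and the fresh row survives, confirming that the parameter reaches the datetime() modifier instead of being treated as a literal `?` character.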

>rc = sqlite3_prepare_v2(conn, DELETE_STMT, -1, &stmt, NULL);
>if (!SQLITE_SUCCESS(rc))
>{
>log_error("Failed to prepare query, Error (%d): %s\n",
>rc, sqlite3_errmsg(conn));
>errs++; goto finish;
>}
>
>rc = sqlite3_bind_int(stmt, 1, days);
>if (!SQLITE_SUCCESS(rc))
>{
>log_error("Failed to bind days, Error (%d): %s\n",
>rc, sqlite3_errmsg(conn));
>errs++; goto finish;
>}
>





Re: [sqlite] DELETE extremely slow

2019-10-31 Thread David Raymond
"Four observations that may (or may not) explain the problem:
- dataset.id declared "unique": useless since the primary key is unique 
by definition but it may create an index (not checked). A PK is not 
"more unique" if an additional "unique" constraint is declared."

Declaring it as both "primary key" and "unique" makes an extra (duplicate) 
index, yes. With the added uniqueness checking on the duplicate index as well.



SQLite version 3.29.0 2019-07-10 17:32:03
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.

sqlite> create table foo1 (pk integer primary key);

sqlite> create table foo2(pk integer primary key unique);

sqlite> select * from sqlite_master;
type|name|tbl_name|rootpage|sql
table|foo1|foo1|2|CREATE TABLE foo1 (pk integer primary key)
table|foo2|foo2|3|CREATE TABLE foo2(pk integer primary key unique)
index|sqlite_autoindex_foo2_1|foo2|4|

sqlite> explain insert into foo1 values (?);
addr  opcode p1p2p3p4 p5  comment
  -        -  --  -
0 Init   0 13000  Start at 13
1 OpenWrite  0 2 0 1  00  root=2 iDb=0; foo1
2 Variable   1 1 000  r[1]=parameter(1,)
3 NotNull1 5 000  if r[1]!=NULL goto 5
4 NewRowid   0 1 000  r[1]=rowid
5 MustBeInt  1 0 000
6 SoftNull   2 0 000  r[2]=NULL
7 Noop   0 0 000  uniqueness check for 
ROWID
8 NotExists  0 10100  intkey=r[1]
9 Halt   1555  2 0 foo1.pk02
10MakeRecord 2 1 3 D  00  r[3]=mkrec(r[2])
11Insert 0 3 1 foo1   31  intkey=r[1] data=r[3]
12Halt   0 0 000
13Transaction0 1 2 0  01  usesStmtJournal=0
14Goto   0 1 000

sqlite> explain insert into foo2 values (?);
addr  opcode p1p2p3p4 p5  comment
  -        -  --  -
0 Init   0 22000  Start at 22
1 OpenWrite  0 3 0 1  00  root=3 iDb=0; foo2
2 OpenWrite  1 4 0 k(1,)  00  root=4 iDb=0; 
sqlite_autoindex_foo2_1
3 Variable   1 1 000  r[1]=parameter(1,)
4 NotNull1 6 000  if r[1]!=NULL goto 6
5 NewRowid   0 1 000  r[1]=rowid
6 MustBeInt  1 0 000
7 SoftNull   2 0 000  r[2]=NULL
8 Noop   0 0 000  uniqueness check for 
ROWID
9 NotExists  0 11100  intkey=r[1]
10Halt   1555  2 0 foo2.pk02
11Affinity   2 1 0 D  00  affinity(r[2])
12Noop   0 0 000  uniqueness check for 
sqlite_autoindex_foo2_1
13SCopy  1 4 000  r[4]=r[1]; pk
14IntCopy1 5 000  r[5]=r[1]; rowid
15MakeRecord 4 2 300  r[3]=mkrec(r[4..5]); 
for sqlite_autoindex_foo2_1
16NoConflict 1 184 1  00  key=r[4]
17Halt   2067  2 0 foo2.pk02
18MakeRecord 2 1 600  r[6]=mkrec(r[2])
19IdxInsert  1 3 4 1  10  key=r[3]
20Insert 0 6 1 foo2   31  intkey=r[1] data=r[6]
21Halt   0 0 000
22Transaction0 1 2 0  01  usesStmtJournal=0
23Goto   0 1 000

sqlite>


[sqlite] Does someone know any project like nsbase ?

2019-10-31 Thread Domingo Alvarez Duarte

Hello !

Looking at a thread on https://news.ycombinator.com/item?id=21401198 I 
saw a mention to http://www.nsbase.neuts.fr/en/ but could not find the 
project source code repository or even if it has one.


So my question is, does anyone know other projects like that (access 
like using sqlite) ?


Cheers !



Re: [sqlite] Multiple files for a single SQLite database

2019-10-31 Thread Keith Medcalf

See https://sqlite.org/limits.html

11. Maximum Number Of Attached Databases

The ATTACH statement is an SQLite extension that allows two or more databases 
to be associated to the same database connection and to operate as if they were 
a single database. The number of simultaneously attached databases is limited 
to SQLITE_MAX_ATTACHED which is set to 10 by default. The maximum number of 
attached databases cannot be increased above 125.

The maximum number of attached databases can be lowered at run-time using the 
sqlite3_limit(db,SQLITE_LIMIT_ATTACHED,size) interface.

So you have to compile a version of SQLite3 with a larger maximum limit.  At 
runtime you cannot raise the limit above the default set by the 
SQLITE_MAX_ATTACHED compile-time parameter, you can only reduce the limit.
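The compile-time ceiling can be observed directly from code (a sketch using Python's sqlite3 module; it assumes nothing beyond a stock build, and simply attaches in-memory databases until the build's limit is hit):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
attached = 0
try:
    # 130 exceeds even the 125 hard maximum, so some ATTACH must fail
    for i in range(130):
        conn.execute(f"ATTACH DATABASE ':memory:' AS aux{i}")
        attached += 1
except sqlite3.OperationalError as e:
    assert "too many attached databases" in str(e)
# A default build stops after 10; a rebuilt SQLITE_MAX_ATTACHED
# can raise that, but never above 125.
```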


>-Original Message-
>From: sqlite-users  On
>Behalf Of Aydin Ozgur Yagmur
>Sent: Thursday, 31 October, 2019 05:40
>To: SQLite mailing list 
>Subject: Re: [sqlite] Multiple files for a single SQLite database
>
>How can it be raised to 125?
>I tried to change with calling "sqlite3_limit(db_, SQLITE_LIMIT_ATTACHED,
>125)" but there is no effect.
>I still get "too many attached databases" exception.
>
>
>On Wed, Oct 30, 2019 at 5:01 PM David Raymond 
>wrote:
>
>> "There's a limit, somewhere around ten, I think.  Though you can
>increase
>> it by explicitly messing with limits and compilation options."
>>
>> Default is 10, can be raised to 125.





Re: [sqlite] Multiple files for a single SQLite database

2019-10-31 Thread Aydin Ozgur Yagmur
Oh, I found the answer.
SQLITE_MAX_ATTACH should be changed.


On Thu, Oct 31, 2019 at 2:40 PM Aydin Ozgur Yagmur 
wrote:

> How can it be raised to 125?
> I tried to change with calling "sqlite3_limit(db_, SQLITE_LIMIT_ATTACHED,
> 125)" but there is no effect.
> I still get "too many attached databases" exception.
>
>
> On Wed, Oct 30, 2019 at 5:01 PM David Raymond 
> wrote:
>
>> "There's a limit, somewhere around ten, I think.  Though you can increase
>> it by explicitly messing with limits and compilation options."
>>
>> Default is 10, can be raised to 125.


Re: [sqlite] Multiple files for a single SQLite database

2019-10-31 Thread Warren Young
On Oct 31, 2019, at 5:40 AM, Aydin Ozgur Yagmur  wrote:
> 
> How can it be raised to 125?

Define SQLITE_MAX_ATTACHED at build time:

https://www.sqlite.org/limits.html#max_attached

> I tried to change with calling "sqlite3_limit(db_, SQLITE_LIMIT_ATTACHED,
> 125)" but there is no effect.

That’s only useful for *lowering* the maximum from the compile time 
SQLITE_MAX_ATTACHED value.


Re: [sqlite] Multiple files for a single SQLite database

2019-10-31 Thread Aydin Ozgur Yagmur
How can it be raised to 125?
I tried to change with calling "sqlite3_limit(db_, SQLITE_LIMIT_ATTACHED,
125)" but there is no effect.
I still get "too many attached databases" exception.


On Wed, Oct 30, 2019 at 5:01 PM David Raymond 
wrote:

> "There's a limit, somewhere around ten, I think.  Though you can increase
> it by explicitly messing with limits and compilation options."
>
> Default is 10, can be raised to 125.


Re: [sqlite] DELETE extremely slow

2019-10-31 Thread Keith Medcalf

If you are deleting all the rows of a table, then SQLite can simply truncate 
the table.  However, if the table you are deleting all the rows from has 
dependent (child) tables *and* foreign key enforcement is turned on, then the 
rows have to be deleted one at a time so that the foreign key constraints can 
be checked, although if all the child tables have no rows you should be able 
to just truncate the parent.  SQLite does not perform that optimization and, 
in the face of enforced foreign key constraints, will always delete the rows 
one by one even if the dependent (child) tables have no rows.
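The per-row checking Keith describes can be seen directly (a sketch with illustrative names, via Python's sqlite3 module): with foreign keys on, every parent row's delete is validated against the child table before it is allowed.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE dataset (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE trace (id INTEGER PRIMARY KEY,"
             " datasetid INTEGER NOT NULL REFERENCES dataset (id))")
conn.execute("INSERT INTO dataset VALUES (1)")
conn.execute("INSERT INTO trace VALUES (1, 1)")
try:
    # each dataset row must be checked against trace before it can go
    conn.execute("DELETE FROM dataset")
    blocked = False
except sqlite3.IntegrityError:
    blocked = True  # a child row still references the parent
```

It is exactly this child-table probe, repeated per parent row, that makes an index on the child's foreign-key column matter so much for DELETE speed.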


>-Original Message-
>From: sqlite-users  On
>Behalf Of Thomas Kurz
>Sent: Thursday, 31 October, 2019 05:10
>To: SQLite mailing list 
>Subject: Re: [sqlite] DELETE extremely slow
>
>> Keith found the answer: you don't have the indexes required to make
>your FOREIGN KEYs run quickly.
>
>Thanks, I will try that.
>
>> If you DELETE FROM the child tables first, do you get fast or slow
>times ?
>
>Yes, I already tried deleting from each table individually. It's slow
>everywhere.
>
>> The way you're doing it involves a lot of steps as SQLite works its
>way through the parent table, deletes one row from that, and cascades
>through the other tables, looking for and deleting related rows from
>those.
>
>Ok, I might have errors in my declarations, but SQLite seems to have
>problems as well, because MariaDB (without any explicit index definition!)
>handles the same deletion within seconds...
>





Re: [sqlite] DELETE extremely slow

2019-10-31 Thread Warren Young
On Oct 31, 2019, at 3:51 AM, Thomas Kurz  wrote:
> 
> delete from dataset;

Is that command representative of actual use, or are you deleting all rows just 
for the purpose of benchmarking?

I ask because if you’re going to just delete all rows in a table, it’s usually 
faster to DROP TABLE and then CREATE TABLE.



Re: [sqlite] DELETE extremely slow

2019-10-31 Thread Simon Slavin
On 31 Oct 2019, at 11:09am, Thomas Kurz  wrote:

> Yes, I already tried deleting from each table individually. It's slow 
> everywhere.

Something is wrong.  If you did multiple commands like

DELETE FROM MyTable;

to your child tables, they should be fast.  Have you run an integrity check ?


Re: [sqlite] DELETE extremely slow

2019-10-31 Thread Thomas Kurz
> Keith found the answer: you don't have the indexes required to make your 
> FOREIGN KEYs run quickly.

Thanks, I will try that.

> If you DELETE FROM the child tables first, do you get fast or slow times ?

Yes, I already tried deleting from each table individually. It's slow 
everywhere.

> The way you're doing it involves a lot of steps as SQLite works its way 
> through the parent table, deletes one row from that, and cascades through 
> the other tables, looking for and deleting related rows from those.

Ok, I might have errors in my declarations, but SQLite seems to have problems 
as well, because MariaDB (without any explicit index definition!) handles the 
same deletion within seconds...



Re: [sqlite] SQLITE_ERROR instead of SQLITE_BUSY or clarification of busy state errors

2019-10-31 Thread mailing lists
Hi, 

what I do not really understand is why the statement in step (3) is allowed to 
read the database at all rather than aborting with an SQLITE_BUSY error.  The 
transaction in step (2) is modifying the database; in my case it is creating 
it.  But assume that step (2) is only modifying the database: then step (3), 
which tries to read it during the modification, is potentially incorrect 
anyway.

Nevertheless, it is probably just a misinterpretation of the error messages on 
my side.
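For what it's worth, the distinction can be reproduced in isolation (Python's sqlite3 module shown; SQLITE_ERROR surfaces there as OperationalError, and the table name is illustrative): querying a table that does not yet exist is a plain error reported at prepare time, not a busy condition.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
try:
    # The table has not been created (or has not been committed by the
    # other connection), so preparing this statement fails with
    # SQLITE_ERROR ("no such table"), never SQLITE_BUSY.
    conn.execute("SELECT * FROM sometable")
    msg = None
except sqlite3.OperationalError as e:
    msg = str(e)
```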

Regards,
Hardy

> On 2019-10-31 at 01:53, Keith Medcalf wrote:
> 
> 
> On Wednesday, 30 October, 2019 16:33, mailing lists  
> wrote:
> 
>> I face the following issue:
> 
>> 1) SQLite has been compiled with SQLITE_THREADSAFE=1 and
>> SQLITE_DEFAULT_SYNCHRONOUS=3
>> 2) I am opening in a thread a new database (standard journal mode) and
>> creating some tables, indices etc. (explicit transaction)
>> 3) while creating the database a new database connection (read only) is
>> established in another thread that tries to get some data by a prepared
>> statement.
> 
>> In this second thread (step (3)) I get an SQLITE_ERROR in
>> sqlite3_prepare. I actually expected an SQLITE_BUSY error. Remark: as
>> step (2) is a transaction no tables exist when step (3) starts execution.
> 
> That is because your SQL statement has an error.  By the sounds of it the 
> table(s) you are trying to query do not exist because you have not yet 
> committed the transaction which is creating them.  
> 
> Key hints "new database" (contains nothing), creating tables etc in an 
> explicit transaction, and while creating the database ... preparing a 
> statement that reads the database fails with an error.
> 
>> Is my understanding correct that I only get an SQLITE_BUSY error when
>> actually trying to run a query? In all other cases I should get different
>> error codes, or?
> 
> Yes.
> 
> 
> 
> 



Re: [sqlite] DELETE extremely slow

2019-10-31 Thread Jean-Luc Hainaut


Four observations that may (or may not) explain the problem:
- dataset.id declared "unique": useless, since the primary key is unique 
by definition, but it may create an index (not checked). A PK is not 
"more unique" if an additional "unique" constraint is declared.
- no "on delete <...>" on trace.datasetid, so the default "no action". Is 
that what you want?
- no index on this foreign key, hence a potential full scan to identify 
child rows in "trace" (or their absence).
- index trace_idx_01 declared twice. Most often once is sufficient, even 
for large tables.
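The third point is the one that usually dominates DELETE cost in schemas like the one quoted below. A minimal sketch of the fix (the index name here is illustrative, not from the original schema): give the child table an index on its foreign-key column so the parent-side delete can probe it instead of scanning the whole table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dataset (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE trace (id INTEGER PRIMARY KEY,"
             " datasetid INTEGER NOT NULL REFERENCES dataset (id))")
# the child-side index that makes deleting dataset rows cheap
conn.execute("CREATE INDEX trace_idx_datasetid ON trace (datasetid)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM trace WHERE datasetid = ?",
    (1,)).fetchall()
# the plan searches the index rather than scanning the whole table
```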


Jean-Luc Hainaut


Yes, please apologize, I indeed forgot to attach the table definitions:

[... full schema and quoted reply trimmed; quoted in full in Thomas Kurz's 
message later in this thread ...]

Re: [sqlite] DELETE extremely slow

2019-10-31 Thread Simon Slavin
On 31 Oct 2019, at 9:51am, Thomas Kurz  wrote:

> I'm using a database with 5 hierarchically structured tables using foreign 
> keys. The largest table contains about 230'000 entries. My problem is that 
> deleting in this database is extremely slow:

Keith found the answer: you don't have the indexes required to make your 
FOREIGN KEYs run quickly.  But I have a question:

If you DELETE FROM the child tables first, do you get fast or slow times ?

The way you're doing it involves a lot of steps, as SQLite works its way 
through the parent table, deletes one row from it, and cascades through the 
other tables, looking for and deleting related rows from those.

SQLite has an optimization for DELETE FROM without a WHERE clause.  But I don't 
know if it's smart enough to realise that if you do that to a parent table, you 
can also do it to all child tables.  Or perhaps that's not true and there might 
be child rows with no parent row, and SQLite is making sure it preserves them.
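A sketch of the children-first approach, assuming the FK layout posted 
elsewhere in this thread (metadata references dataset/trace/item, item 
references trace, trace references dataset), reduced here to the key columns. 
Clearing the tables bottom-up means no per-row foreign-key probing is ever 
required:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
    CREATE TABLE dataset (id INTEGER PRIMARY KEY);
    CREATE TABLE trace   (id INTEGER PRIMARY KEY,
                          datasetid INTEGER NOT NULL REFERENCES dataset(id));
    CREATE TABLE item    (id INTEGER PRIMARY KEY,
                          traceid INTEGER NOT NULL REFERENCES trace(id));
    CREATE TABLE metadata(id INTEGER PRIMARY KEY,
                          datasetid INTEGER REFERENCES dataset(id),
                          traceid   INTEGER REFERENCES trace(id),
                          itemid    INTEGER REFERENCES item(id));
    INSERT INTO dataset  VALUES (1);
    INSERT INTO trace    VALUES (1, 1);
    INSERT INTO item     VALUES (1, 1);
    INSERT INTO metadata VALUES (1, 1, 1, 1);
""")
# Delete children before parents: each table is already childless when
# its turn comes, so no FK lookups (and no child-table scans) are needed.
for tbl in ("metadata", "item", "trace", "dataset"):
    con.execute(f"DELETE FROM {tbl}")
rows_left = sum(con.execute(f"SELECT count(*) FROM {t}").fetchone()[0]
                for t in ("dataset", "trace", "item", "metadata"))
```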


Re: [sqlite] DELETE extremely slow

2019-10-31 Thread Keith Medcalf

You have no index on trace(datasetid) ... 
You have no index on metadata(datasetid), though the compound indexes in which 
datasetid is the first element *should* be sufficient.

.eqp on or .eqp full before issuing the delete command will tell you what the 
plan is.
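.eqp is a dot-command of the sqlite3 shell; from application code the same 
information is available by prefixing a statement with EXPLAIN QUERY PLAN. A 
sketch (Python's sqlite3; the exact plan wording varies between SQLite 
versions, but a missing FK index shows up as a SCAN, while an indexed lookup 
is a SEARCH naming the index):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dataset (id INTEGER PRIMARY KEY)")
con.execute("CREATE TABLE trace (id INTEGER PRIMARY KEY, "
            "datasetid INTEGER NOT NULL REFERENCES dataset(id))")

# The child lookup each parent delete performs is equivalent to this SELECT;
# the last column of each EXPLAIN QUERY PLAN row is the plan detail text.
query = "EXPLAIN QUERY PLAN SELECT * FROM trace WHERE datasetid = 1"
plan_without = " ".join(row[-1] for row in con.execute(query))

con.execute("CREATE INDEX trace_datasetid_idx ON trace (datasetid)")
plan_with = " ".join(row[-1] for row in con.execute(query))
```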

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Thomas Kurz
>Sent: Thursday, 31 October, 2019 04:25
>To: SQLite mailing list 
>Subject: Re: [sqlite] DELETE extremely slow
>
>Yes, please apologize, I indeed forgot to attach the table definitions:
>
>[... full schema and quoted reply trimmed; quoted in full in Thomas Kurz's 
>message later in this thread ...]

Re: [sqlite] DELETE extremely slow

2019-10-31 Thread Thomas Kurz
Yes, please apologize, I indeed forgot to attach the table definitions:

CREATE TABLE dataset (
id INTEGER  PRIMARY KEY AUTOINCREMENT
UNIQUE
NOT NULL,
name   STRING   DEFAULT NULL
COLLATE NOCASE,
is_latest  BOOLEAN  NOT NULL
DEFAULT 1,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE trace (
id INTEGER  PRIMARY KEY AUTOINCREMENT
UNIQUE
NOT NULL,
name   STRING   DEFAULT NULL
COLLATE NOCASE,
datasetid  INTEGER  REFERENCES dataset (id) 
NOT NULL,
quantityid INTEGER  REFERENCES quantity (id) 
NOT NULL,
stored DATETIME DEFAULT NULL,
created_at DATETIME NOT NULL
DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX trace_idx_01 ON trace (
quantityid
);

CREATE INDEX trace_idx_01 ON trace (
quantityid
);

CREATE TABLE item (
idINTEGER PRIMARY KEY AUTOINCREMENT
  UNIQUE
  NOT NULL,
traceid   INTEGER REFERENCES trace (id) 
  NOT NULL,
freq  BIGINT  NOT NULL,
value REALNOT NULL,
noiseflag BOOLEAN DEFAULT NULL
);

CREATE INDEX item_idx_01 ON item (
traceid
);

CREATE TABLE metadata (
idINTEGER PRIMARY KEY AUTOINCREMENT
  UNIQUE
  NOT NULL,
parameter STRING  NOT NULL
  COLLATE NOCASE,
value STRING  NOT NULL
  COLLATE NOCASE,
datasetid INTEGER DEFAULT NULL
  REFERENCES dataset (id),
traceid   INTEGER DEFAULT NULL
  REFERENCES trace (id),
itemidINTEGER DEFAULT NULL
  REFERENCES item (id) 
);

CREATE INDEX metadata_idx_01 ON metadata (
parameter,
value,
datasetid,
traceid,
itemid
);

CREATE INDEX metadata_idx_02 ON metadata (
datasetid,
traceid
);

CREATE INDEX metadata_idx_03 ON metadata (
traceid
);

CREATE INDEX metadata_idx_04 ON metadata (
datasetid,
itemid
);

CREATE INDEX metadata_idx_05 ON metadata (
traceid,
itemid
);

CREATE INDEX metadata_idx_06 ON metadata (
itemid
);

CREATE INDEX metadata_idx_07 ON metadata (
datasetid,
parameter
);

CREATE INDEX metadata_idx_08 ON metadata (
traceid,
parameter
);

CREATE INDEX metadata_idx_09 ON metadata (
parameter,
traceid
);

CREATE INDEX metadata_idx_10 ON metadata (
parameter,
datasetid,
traceid,
itemid
);

CREATE TABLE quantity (
id INTEGER PRIMARY KEY AUTOINCREMENT
   UNIQUE
   NOT NULL,
name   STRING  NOT NULL,
unit   STRING  NOT NULL,
sumrule[SMALLINT UNSIGNED] DEFAULT NULL,
created_at DATETIMEDEFAULT CURRENT_TIMESTAMP,
UNIQUE (
name,
unit,
sumrule
)
ON CONFLICT IGNORE
);





- Original Message - 
From: Dominique Devienne 
To: SQLite mailing list 
Sent: Thursday, October 31, 2019, 11:06:07
Subject: [sqlite] DELETE extremely slow

On Thu, Oct 31, 2019 at 10:57 AM Thomas Kurz  wrote:

> I'm using a database with 5 hierarchically structured tables using foreign
> keys. The largest table contains about 230'000 entries. My problem is that
> deleting in this database is extremely slow:

> pragma foreign_keys=on;
> pragma journal_mode=wal;
> .timer on
> delete from dataset;
> --> Run Time: real 197993.218 user 53015.593750 sys 54056.546875

> I experimentally imported the same data into a MariaDB database and tried
> the same operation there (without paying attention to creating any indexes,
> etc.). It takes only a few seconds there.

> Is there something I can check or do to improve deletion speed?


You're not describing the schema enough IMHO.
Is dataset the "top-most" table, containing the "parent" rows all other
tables references (directly or indirectly),
with all FKs having ON DELETE CASCADE?

If that's the case, without some kind of optimization in SQLite, when the
first parent row is deleted,
it triggers a cascade of deletes in "child" tables, looking for rows
referencing the parent row. So if your FKs
are not indexed for those column(s), that's a full table scan each time...
That's "depth first".

By analyzing the graph of FKs and their ON DELETE CASCADE state, and in the
specific case of
fully deleting the "main parent table", SQLite could decide to switch to a
smarter "breadth first" delete,
but I suspect it's not a compelling enough use case for Richard to invest
time on this.

Try indexing your FKs, and see what happens. --DD

Re: [sqlite] DELETE extremely slow

2019-10-31 Thread Dominique Devienne
On Thu, Oct 31, 2019 at 10:57 AM Thomas Kurz  wrote:

> I'm using a database with 5 hierarchically structured tables using foreign
> keys. The largest table contains about 230'000 entries. My problem is that
> deleting in this database is extremely slow:
>
> pragma foreign_keys=on;
> pragma journal_mode=wal;
> .timer on
> delete from dataset;
> --> Run Time: real 197993.218 user 53015.593750 sys 54056.546875
>
> I experimentally imported the same data into a MariaDB database and tried
> the same operation there (without paying attention to creating any indexes,
> etc.). It takes only a few seconds there.
>
> Is there something I can check or do to improve deletion speed?
>

You're not describing the schema enough IMHO.
Is dataset the "top-most" table, containing the "parent" rows all other
tables references (directly or indirectly),
with all FKs having ON DELETE CASCADE?

If that's the case, without some kind of optimization in SQLite, when the
first parent row is deleted,
it triggers a cascade of deletes in "child" tables, looking for rows
referencing the parent row. So if your FKs
are not indexed for those column(s), that's a full table scan each time...
That's "depth first".

By analyzing the graph of FKs and their ON DELETE CASCADE state, and in the
specific case of
fully deleting the "main parent table", SQLite could decide to switch to a
smarter "breadth first" delete,
but I suspect it's not a compelling enough use case for Richard to invest
time on this.

Try indexing your FKs, and see what happens. --DD
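The effect is easy to reproduce. A sketch below; note that ON DELETE CASCADE 
and the index name are assumptions for illustration — the schema posted in 
this thread has neither. With foreign_keys on, each deleted parent row 
triggers a lookup for its children, which is a full scan of trace per dataset 
row unless datasetid is indexed:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
    CREATE TABLE dataset (id INTEGER PRIMARY KEY);
    CREATE TABLE trace (id INTEGER PRIMARY KEY,
                        datasetid INTEGER NOT NULL
                            REFERENCES dataset(id) ON DELETE CASCADE);
    -- the FK index: lets the cascade locate children without a table scan
    CREATE INDEX trace_datasetid_idx ON trace (datasetid);
""")
con.execute("INSERT INTO dataset VALUES (1)")
con.executemany("INSERT INTO trace VALUES (?, 1)",
                [(i,) for i in range(1000)])
con.execute("DELETE FROM dataset")  # cascades into trace via the index
remaining = con.execute("SELECT count(*) FROM trace").fetchone()[0]
```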


[sqlite] DELETE extremely slow

2019-10-31 Thread Thomas Kurz
I'm using a database with 5 hierarchically structured tables using foreign 
keys. The largest table contains about 230'000 entries. My problem is that 
deleting in this database is extremely slow:

pragma foreign_keys=on;
pragma journal_mode=wal;
.timer on
delete from dataset;
--> Run Time: real 197993.218 user 53015.593750 sys 54056.546875

I experimentally imported the same data into a MariaDB database and tried the 
same operation there (without paying attention to creating any indexes, etc.). 
It takes only a few seconds there.

Is there something I can check or do to improve deletion speed?

Kind regards,
Thomas



Re: [sqlite] Generated columns in primary keys, indexes, and constraints

2019-10-31 Thread Dominique Devienne
On Wed, Oct 30, 2019 at 11:12 AM Richard Hipp  wrote:

> [...] But using a VIRTUAL generated column as a PRIMARY KEY would be an
> issue.
>

FWIW, I depend on this feature in Oracle, and it works. Both my PRIMARY and
FOREIGN keys
are VIRTUAL columns, which combine two stored columns. Oracle even allows an
ALTER TABLE
to go from a mode where PKs and FKs are scalar and use stored columns,
to another where
those PKs and FKs are switched to (still scalar) constraints that use
these VIRTUAL columns instead.
The big advantage of this is that the upgrade (one way or another)
"theoretically" writes nothing to disk,
since the stored columns do not change at all; only VIRTUALs and
CONSTRAINTS are added/removed.

An alternative to this design would be to use composite PKs and FKs, but
our app depends heavily
on efficient handling of SELECT ... WHERE pk_col IN (:1), with :1 being
bound to a collection/array
of PK values, and I could not figure out a way to do the equivalent with a
composite PK. (And that's
also one reason why I've requested several times a way to bind
collections in SQLite, in a way
that's not dependent on carray(), which I'm not fond of since it does not
handle lifetime.)

So maybe it's not the best reason for VIRTUAL columns being allowed as PKs,
but that's my $0.02. --DD
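On the composite-key lookup: one possible SQLite-side workaround (a sketch, 
not a claim about the Oracle setup above) is the row-value syntax available 
since SQLite 3.15, which lets a two-column key be matched against a set of 
pairs in a single IN. Whether it plans as efficiently as a single-column IN 
over a bound collection is something to verify per workload; the table and 
column names below are made up for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b INTEGER, payload TEXT, "
            "PRIMARY KEY (a, b))")
con.executemany("INSERT INTO t VALUES (?, ?, ?)",
                [(1, 1, "x"), (1, 2, "y"), (2, 1, "z")])
# Composite-key membership test via row values (SQLite >= 3.15).
rows = con.execute(
    "SELECT payload FROM t WHERE (a, b) IN (VALUES (1, 2), (2, 1)) "
    "ORDER BY a, b").fetchall()
```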