The reorder function crashes when it recurses, when built with the MSVC compiler.
GCC (MinGW) works fine.
--
The fact that there's a Highway to Hell but only a Stairway to Heaven says a
lot about anticipated traffic volume.
> According to the MariaDB reference manual, it does not "do anything" with
> references clauses on columns.
Thanks for that hint, I will try again tomorrow because I cannot say for sure
now whether it worked correctly or not. (And I don't have that data available
anymore.)
Hi Dan,
I did not know that. What was the reason that it did not work before 3.30?
Regards,
Hartwig
> On 2019-10-31 at 19:16, Dan Kennedy wrote:
>
>
> On 1/11/62 00:32, mailing lists wrote:
>> For normal tables I can use something like:
>>
>> SELECT * FROM Names WHERE FirstNames=? AND/OR
On 1/11/62 00:32, mailing lists wrote:
For normal tables I can use something like:
SELECT * FROM Names WHERE FirstNames=? AND/OR LastName=?;
For FTS tables I can only use
SELECT * FROM FTSNames WHERE FirstNames MATCH ? OR LastName MATCH ?; AND is not
supported (I still do not know why)
Is there any possibility to use prepared
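One way to get AND semantics with a prepared statement is to move the logic into the FTS query string itself, which binds as an ordinary parameter. A minimal sketch using Python's sqlite3 module and a hypothetical FTS5 table mirroring the one in the post (assumes the library is built with FTS5 enabled):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# hypothetical FTS5 table mirroring the names in the post
conn.execute("CREATE VIRTUAL TABLE FTSNames USING fts5(FirstNames, LastName)")
conn.execute("INSERT INTO FTSNames VALUES ('John', 'Smith')")
conn.execute("INSERT INTO FTSNames VALUES ('John', 'Doe')")

# AND/OR live inside the FTS5 query string, which binds as a single parameter;
# "col : term" restricts a term to one column
query = "FirstNames: John AND LastName: Smith"
rows = conn.execute(
    "SELECT * FROM FTSNames WHERE FTSNames MATCH ?", (query,)).fetchall()
print(rows)  # [('John', 'Smith')]
```

The trade-off is that the boolean operators are data, not SQL, so they can be assembled at runtime without re-preparing the statement.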
On Thursday, 31 October, 2019 03:51, Thomas Kurz wrote:
>I experimentally imported the same data into a MariaDB database and tried
>the same operation there (without paying attention to creating any
>indexes, etc.). It takes only a few seconds there.
According to the MariaDB reference manual, it does not "do anything" with
references clauses on columns.
On Thursday, 31 October, 2019 10:52, Simon Slavin wrote:
>On 31 Oct 2019, at 4:29pm, Keith Medcalf wrote:
>> If the elapsed time is much greater than the sum of user+sys time then
>> I would suspect it is still I/O thrashing (or the machine is really busy
>> doing something else -- those
On 31 Oct 2019, at 4:29pm, Keith Medcalf wrote:
> If the elapsed time is much greater than the sum of user+sys time then I
> would suspect it is still I/O thrashing (or the machine is really busy doing
> something else -- those original timings represent only a 57% process
> dispatch rate,
On Thursday, 31 October, 2019 10:01, Dominique Devienne wrote:
>On Thu, Oct 31, 2019 at 4:10 PM Thomas Kurz wrote:
>> Yes, but I'd expect that MariaDB has to do the same, but takes clearly
>> less than 1 minute instead of 88 minutes... :confused:
>Are we comparing apples to oranges here?
On Thu, Oct 31, 2019 at 4:10 PM Thomas Kurz wrote:
> Yes, but I'd expect that MariaDB has to do the same, but takes clearly
> less than 1 minute instead of 88 minutes... :confused:
>
Are we comparing apples to oranges here?
SQLite runs on the local machine, while MariaDB is client-server, so
>pragma foreign_keys=on;
>pragma journal_mode=wal;
>.timer on
>delete from dataset;
>--> Run Time: real 197993.218 user 53015.593750 sys 54056.546875
Wow. That is 14 hours each of System and User time, and then an additional 24
hours of "waiting for something to happen" time. Do you have
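The arithmetic behind that summary, just converting the reported seconds to hours:

```python
# figures from the .timer output quoted above (seconds)
real, user, sys_ = 197993.218, 53015.593750, 54056.546875
wait = real - user - sys_  # elapsed time not accounted for by CPU time
print(f"user: {user/3600:.1f} h, sys: {sys_/3600:.1f} h, waiting: {wait/3600:.1f} h")
```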
On 31 Oct 2019, at 3:09pm, Thomas Kurz wrote:
> The result of "DELETE FROM dataset" is now 88 minutes
That still seems wrong. I hope the list can help you track down what's going
on.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
It's not gonna account for 88 minutes, but out of curiosity is there a way to
separately report the "delete and commit" time from the "dang this WAL needs to
be checkpointed now" time?
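One way to split the two, at least roughly: time the DELETE on its own (in autocommit mode the commit is included), then force the checkpoint explicitly with `PRAGMA wal_checkpoint`. A sketch with a hypothetical table and synthetic data:

```python
import os, sqlite3, tempfile, time

path = os.path.join(tempfile.mkdtemp(), "test.db")
conn = sqlite3.connect(path, isolation_level=None)  # autocommit; explicit txns below
conn.execute("PRAGMA journal_mode=wal")
conn.execute("CREATE TABLE dataset(x)")
conn.execute("BEGIN")
conn.executemany("INSERT INTO dataset VALUES (?)", ((i,) for i in range(100000)))
conn.execute("COMMIT")

t0 = time.perf_counter()
conn.execute("DELETE FROM dataset")              # delete + commit (autocommit mode)
t1 = time.perf_counter()
conn.execute("PRAGMA wal_checkpoint(TRUNCATE)")  # force the checkpoint now
t2 = time.perf_counter()
print(f"delete+commit: {t1 - t0:.3f}s  checkpoint: {t2 - t1:.3f}s")
```

This only isolates checkpoints you trigger yourself; an automatic checkpoint that fires inside the COMMIT would still be folded into the first number.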
-----Original Message-----
From: sqlite-users On Behalf Of Thomas Kurz
Sent: Thursday, October 31, 2019
> Something is wrong. If you did multiple commands like
>
> DELETE FROM MyTable;
>
> to your child tables, they should be fast. Have you run an integrity check ?
I created a new database now, added the missing index "trace(datasetid)" as
suggested by Keith.
The result of "DELETE FROM dataset"
On Thursday, 31 October, 2019 07:17, Jeffrey Walton wrote:
First, your question should go to the sqlite-users mailing list, not the
developers mailing list.
>I'm having trouble binding a parameter for a DELETE. I am trying to
>delete records older than 120 days. Below, I have a table with a
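The message is cut off here, but the usual answer is to compute the cutoff timestamp in the application and bind that value (parameters can only stand in for values, not for SQL keywords or identifiers). A sketch with a hypothetical table:

```python
import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log(msg TEXT, created INTEGER)")  # hypothetical table
now = int(time.time())
conn.execute("INSERT INTO log VALUES ('old', ?)", (now - 200 * 86400,))
conn.execute("INSERT INTO log VALUES ('new', ?)", (now,))

# bind the computed cutoff: everything older than 120 days goes away
cutoff = now - 120 * 86400
conn.execute("DELETE FROM log WHERE created < ?", (cutoff,))
print(conn.execute("SELECT msg FROM log").fetchall())  # [('new',)]
```

Alternatively the day count itself can be bound and the cutoff computed in SQL, e.g. `WHERE created < strftime('%s', 'now', '-' || ? || ' days')`.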
"Four observations that may (or may not) explain the problem:
- dataset.id declared "unique": useless since the primary key is unique
by definition but it may create an index (not checked). A PK is not
"more unique" if an additional "unique" constraint is declared."
Declaring it as both
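On the "it may create an index (not checked)" point, this is easy to check from sqlite_master. A sketch (in the builds I have tried, the redundant UNIQUE does produce an automatic index):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE dataset(
    id INTEGER PRIMARY KEY AUTOINCREMENT UNIQUE NOT NULL)""")
# a redundant UNIQUE constraint shows up as an sqlite_autoindex_* entry
print(conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'index'").fetchall())
```

So the extra constraint costs an index that duplicates what the rowid already guarantees.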
Hello!
Looking at a thread on https://news.ycombinator.com/item?id=21401198 I
saw a mention of http://www.nsbase.neuts.fr/en/ but could not find the
project's source code repository, or even whether it has one.
So my question is: does anyone know of other projects like that (Access-like,
using SQLite)
See https://sqlite.org/limits.html
11. Maximum Number Of Attached Databases
The ATTACH statement is an SQLite extension that allows two or more databases
to be associated to the same database connection and to operate as if they were
a single database. The number of simultaneously attached
Oh, I found the answer.
SQLITE_MAX_ATTACHED should be changed.
On Thu, Oct 31, 2019 at 2:40 PM Aydin Ozgur Yagmur wrote:
> How can it be raised to 125?
> I tried to change with calling "sqlite3_limit(db_, SQLITE_LIMIT_ATTACHED,
> 125)" but there is no effect.
> I still get "too many attached
On Oct 31, 2019, at 5:40 AM, Aydin Ozgur Yagmur wrote:
>
> How can it be raised to 125?
Define SQLITE_MAX_ATTACHED at build time:
https://www.sqlite.org/limits.html#max_attached
> I tried to change with calling "sqlite3_limit(db_, SQLITE_LIMIT_ATTACHED,
> 125)" but there is no effect.
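That matches the documented behavior: sqlite3_limit() can only lower a limit below the compile-time maximum, never raise it above SQLITE_MAX_ATTACHED. A quick demo of hitting the ceiling from Python (assumes a default build, which typically stops at 10):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
attached = 0
try:
    for i in range(125):
        conn.execute(f"ATTACH DATABASE ':memory:' AS aux{i}")
        attached += 1
except sqlite3.OperationalError as exc:
    # default builds raise "too many attached databases" here
    print(f"stopped after {attached} databases: {exc}")
```

To actually reach 125 you must rebuild the library with -DSQLITE_MAX_ATTACHED=125 (the hard upper bound is 125).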
How can it be raised to 125?
I tried to change with calling "sqlite3_limit(db_, SQLITE_LIMIT_ATTACHED,
125)" but there is no effect.
I still get "too many attached databases" exception.
On Wed, Oct 30, 2019 at 5:01 PM David Raymond wrote:
> "There's a limit, somewhere around ten, I think.
If you are deleting all the rows of a table, then you can simply truncate the
table (SQLite will do this). However, if the table you are deleting all the
rows from has dependent (child) tables *and* foreign key enforcement is turned
on, then the rows have to be deleted one at a time so that
On Oct 31, 2019, at 3:51 AM, Thomas Kurz wrote:
>
> delete from dataset;
Is that command representative of actual use, or are you deleting all rows just
for the purpose of benchmarking?
I ask because if you’re going to just delete all rows in a table, it’s usually
faster to DROP TABLE and
On 31 Oct 2019, at 11:09am, Thomas Kurz wrote:
> Yes, I already tried deleting from each table individually. It's slow
> everywhere.
Something is wrong. If you did multiple commands like
DELETE FROM MyTable;
to your child tables, they should be fast. Have you run an integrity check ?
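For reference, the check being suggested is a single pragma; it should report "ok" on a healthy database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(x)")
print(conn.execute("PRAGMA integrity_check").fetchone()[0])  # ok
```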
> Keith found the answer: you don't have the indexes required to make your
> FOREIGN KEYs run quickly.
Thanks, I will try that.
> If you DELETE FROM the child tables first, do you get fast or slow times ?
Yes, I already tried deleting from each table individually. It's slow
everywhere.
>
Hi,
what I do not really understand is why the statement in step (3) is allowed to
read the database at all and does not abort with an SQLITE_BUSY error. The
transaction in step (2) is modifying the database; in my case, the statement is
creating the database. But assume that step (2) is modifying it
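Assuming WAL mode (as in the pragmas quoted earlier in the thread), this is expected: a reader does not get SQLITE_BUSY while another connection holds a write transaction; it simply reads the last committed snapshot. A sketch with two connections to a temporary file database:

```python
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "test.db")
writer = sqlite3.connect(path, isolation_level=None)  # autocommit
writer.execute("PRAGMA journal_mode=wal")
writer.execute("CREATE TABLE t(x)")

writer.execute("BEGIN IMMEDIATE")          # step (2): open write transaction
writer.execute("INSERT INTO t VALUES (1)")  # uncommitted change

reader = sqlite3.connect(path)              # step (3): concurrent reader
count = reader.execute("SELECT count(*) FROM t").fetchone()[0]
print(count)  # 0 -- reader sees the last committed snapshot, no SQLITE_BUSY
writer.execute("COMMIT")
```

In rollback-journal mode the outcome differs: a writer holding the exclusive lock would make the reader fail with SQLITE_BUSY instead.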
Four observations that may (or may not) explain the problem:
- dataset.id declared "unique": useless since the primary key is unique
by definition but it may create an index (not checked). A PK is not
"more unique" if an additional "unique" constraint is declared.
- no "on delete <...>" on
On 31 Oct 2019, at 9:51am, Thomas Kurz wrote:
> I'm using a database with 5 hierarchically structured tables using foreign
> keys. The largest table contains about 230'000 entries. My problem is that
> deleting in this database is extremely slow:
Keith found the answer: you don't have the
You have no index on trace(datasetid) ...
You have no index on metadata(datasetid) though the compound indexes in which
datasetid is the first element *should* be sufficient.
.eqp on or .eqp full before issuing the delete command will tell you what the
plan is.
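The effect of the missing index is visible in the query plan. For every dataset row it deletes, SQLite must probe trace for referencing rows; without an index on trace(datasetid) that probe is a full scan. A sketch with hypothetical minimal schemas:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dataset(id INTEGER PRIMARY KEY);
    CREATE TABLE trace(id INTEGER PRIMARY KEY,
                       datasetid INTEGER REFERENCES dataset(id));
""")
# this is the probe SQLite must run, per deleted dataset row, to enforce the FK
probe = "EXPLAIN QUERY PLAN SELECT 1 FROM trace WHERE datasetid = ?"
before = conn.execute(probe, (1,)).fetchall()
conn.execute("CREATE INDEX idx_trace_datasetid ON trace(datasetid)")
after = conn.execute(probe, (1,)).fetchall()
print(before[0][-1])  # full-table scan of trace
print(after[0][-1])   # indexed search via idx_trace_datasetid
```

With 230'000-row tables, turning each per-row probe from a scan into an index search is exactly the kind of change that shrinks an 88-minute delete.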
Yes, my apologies, I indeed forgot to attach the table definitions:
CREATE TABLE dataset (
    id   INTEGER PRIMARY KEY AUTOINCREMENT UNIQUE NOT NULL,
    name STRING  DEFAULT NULL COLLATE NOCASE,
On Thu, Oct 31, 2019 at 10:57 AM Thomas Kurz wrote:
> I'm using a database with 5 hierarchically structured tables using foreign
> keys. The largest table contains about 230'000 entries. My problem is that
> deleting in this database is extremely slow:
>
> pragma foreign_keys=on;
> pragma
I'm using a database with 5 hierarchically structured tables using foreign
keys. The largest table contains about 230'000 entries. My problem is that
deleting in this database is extremely slow:
pragma foreign_keys=on;
pragma journal_mode=wal;
.timer on
delete from dataset;
--> Run Time: real
On Wed, Oct 30, 2019 at 11:12 AM Richard Hipp wrote:
> [...] But using a VIRTUAL generated column as a PRIMARY KEY would be an
> issue.
>
FWIW, I depend on this feature in Oracle, and it works. Both my PRIMARY and
FOREIGN keys
are VIRTUAL columns, which combine two stored columns. Oracle even
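For the SQLite side of this: builds that support generated columns (3.31+) reject a VIRTUAL generated column as a PRIMARY KEY at CREATE TABLE time, which is the issue Richard Hipp refers to. A hypothetical sketch (older builds fail too, just with a syntax error instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
err = None
try:
    # hypothetical table combining two stored columns into a generated key
    conn.execute("""CREATE TABLE t(
        a INTEGER, b INTEGER,
        ab INTEGER GENERATED ALWAYS AS (a * 1000 + b) VIRTUAL PRIMARY KEY)""")
except sqlite3.OperationalError as exc:
    err = str(exc)
print(err)  # e.g. "generated columns cannot be part of the PRIMARY KEY"
```

A STORED generated column is rejected as a PRIMARY KEY as well, so porting the Oracle schema described above would need a real stored key column.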