Re: [sqlite] Index on REAL malfunctions in combination with the LENGTH function

2019-05-03 Thread Richard Hipp
On 5/3/19, Manuel Rigger  wrote:
> I just tried the examples on the int-real branch,
> and it seems that they do not cause any errors.
>

The int-real branch has now been fully tested and merged to trunk.
-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Index on REAL malfunctions in combination with the LENGTH function

2019-05-03 Thread Manuel Rigger
Unfortunately, I deleted some other similar cases that I found before
reading your reply. I'll gather any new cases (I assume you are talking
about cases that the int-real branch should address).

Best,
Manuel

On Fri, May 3, 2019 at 2:34 PM Richard Hipp  wrote:

> On 5/3/19, Manuel Rigger  wrote:
> > I just tried the examples on the int-real branch,
> > and it seems that they do not cause any errors.
>
> That is good to hear.
>
> It would be helpful if you could gather together all of your test
> cases and send them to us, so that we could add them to the SQLite
> regression tests.
>
> --
> D. Richard Hipp
> d...@sqlite.org
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Query planner: Covering index not chosen over primary key

2019-05-03 Thread Jen Pollock
Thank you for the suggestion!

The actual schema & query are a good deal more complicated, and I'm not
looking for general optimization help with them right now.

Jen Pollock

On Fri, May 03, 2019 at 10:11:04PM +0100, Simon Slavin wrote:
> On 3 May 2019, at 9:34pm, Jen Pollock  wrote:
> 
> >  SELECT filename
> >  FROM images
> >JOIN embedded_files ON images.file_id == embedded_files.id
> >  WHERE type == 'png';
> 
> Try this:
> 
> CREATE INDEX images_type_file_id ON images (type, file_id);
> ANALYZE;
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Query planner: Covering index not chosen over primary key

2019-05-03 Thread Simon Slavin
On 3 May 2019, at 9:34pm, Jen Pollock  wrote:

>  SELECT filename
>  FROM images
>JOIN embedded_files ON images.file_id == embedded_files.id
>  WHERE type == 'png';

Try this:

CREATE INDEX images_type_file_id ON images (type, file_id);
ANALYZE;
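
For reference, EXPLAIN QUERY PLAN is a quick way to check which index the
planner actually picks before and after creating the index and running
ANALYZE (a sketch, reusing the query from the original post):

  EXPLAIN QUERY PLAN
  SELECT filename
  FROM images
    JOIN embedded_files ON images.file_id == embedded_files.id
  WHERE type == 'png';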
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Query planner: Covering index not chosen over primary key

2019-05-03 Thread Richard Hipp
On 5/3/19, Jen Pollock  wrote:
> I assume the problem here is that the primary key is usually a weird
> thing to index. I can definitely work around this, but I thought it
> might be worth reporting as something that could perhaps be improved in
> the query planner.

Thank you.  I have your request.

-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Query planner: Covering index not chosen over primary key

2019-05-03 Thread Jen Pollock
I have a database with a schema roughly like this:

  CREATE TABLE embedded_files(
id INTEGER PRIMARY KEY,
filename TEXT,
data BLOB
  );
  CREATE TABLE images(
id INTEGER PRIMARY KEY,
file_id INTEGER,
type TEXT,
FOREIGN KEY(file_id) REFERENCES embedded_files(id)
  );

The following query is slow:

  SELECT filename
  FROM images
JOIN embedded_files ON images.file_id == embedded_files.id
  WHERE type == 'png';

Part of the problem is that many of the values in embedded_files.data
are quite large. I tried to improve the query's performance by creating a
covering index:

  CREATE INDEX embedded_files_id_filename ON embedded_files(id, filename);

However, the query planner won't use this index unless I force it to
with INDEXED BY. Forcing it to use the index does speed up the query.
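
For completeness, the forced form looks like this (a sketch; INDEXED BY simply
names the index created above):

  SELECT filename
  FROM images
    JOIN embedded_files INDEXED BY embedded_files_id_filename
      ON images.file_id == embedded_files.id
  WHERE type == 'png';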

I assume the problem here is that the primary key is usually a weird
thing to index. I can definitely work around this, but I thought it
might be worth reporting as something that could perhaps be improved in
the query planner.

Jen Pollock

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Error when renaming a table when an invalid view exists in the schema

2019-05-03 Thread Keith Medcalf

Use PRAGMA LEGACY_ALTER_TABLE=ON;

The "Legacy alter table" does not require the database to be "valid/consistent" 
after executing the "alter table" command.  The non-legacy (default) mode 
requires that the database be "transformed" into a "valid/consistent" state in 
order for the alter table command to be processed.

A side effect of this is that if the database was invalid BEFORE you issue the 
alter table command, and it is still invalid after, then in the "validity" 
checking mode (the default) the alter table command will not be processed 
(since it will throw an error that the transformation did not result in a 
"valid" schema).

To fix this properly, you should either (a) drop view v1, or (b) create table t1 
so that the database schema is consistent before you attempt to use ALTER TABLE. 
(Or, if you know that your schema is invalid, you can turn off validity checking 
with pragma LEGACY_ALTER_TABLE.)  Since creating a view is nothing more than 
storing a statement in the database, the schema is not checked for continued 
validity when you create a view or drop a table (otherwise you would get a 
message of the form "Cannot drop table t1 because it is referenced in a view", 
or mayhaps view v1 would be dropped automatically).

Perhaps a setting "ALLOW_INVALID_SCHEMA" needs to be added such that after each 
DDL statement the schema is checked for validity and if it is not valid then 
the DDL is tossed with an error (such as occurs in SQLFat databases)?

The only possible inconsistency that I see is that:

sqlite> create view v1 as select * from t1;
sqlite> create table t2(x);
sqlite> alter table t2 rename to t1;
Error: error in view v1: no such table: main.t1

which means that you must use legacy_alter_table in order to be able to fix 
typos ... The rename is prohibited because the schema is inconsistent BEFORE 
the change, even though it would be consistent AFTER the change ...
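
To make that concrete, here is a minimal shell sketch of the workaround
(assuming SQLite 3.25 or later, where the pragma exists):

sqlite> create view v1 as select * from t1;
sqlite> create table t2(x);
sqlite> pragma legacy_alter_table=ON;
sqlite> alter table t2 rename to t1;    -- now succeeds, and v1 becomes valid
sqlite> pragma legacy_alter_table=OFF;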

---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.


>-Original Message-
>From: sqlite-users [mailto:sqlite-users-
>boun...@mailinglists.sqlite.org] On Behalf Of Tom Bassel
>Sent: Friday, 3 May, 2019 12:51
>To: sqlite-users@mailinglists.sqlite.org
>Subject: [sqlite] Error when renaming a table when an invalid view
>exists in the schema
>
>Hello again everyone,
>
>I went through the ALTER TABLE docs but could not find anything
>describing this behavior:
>
>
>SQLite version 3.29.0 2019-04-27 20:30:19
>Enter ".help" for usage hints.
>Connected to a transient in-memory database.
>Use ".open FILENAME" to reopen on a persistent database.
>sqlite> create table t1(f1);
>sqlite> create view v1 as select * from t1;
>sqlite> drop table t1;
>sqlite> create table t2(f1);
>sqlite> .schema
>CREATE VIEW v1 as select * from t1;
>CREATE TABLE t2(f1);
>sqlite> alter table t2 rename to t20;
>Error: error in view v1: no such table: main.t1
>
>
>That is, if an invalid view exists in the schema, then any attempt to
>alter the name of another unrelated table will fail with the error
>message that there is a view that references a different table that
>does not exist.
>
>It also occurs in 3.27 but I downloaded and tried it on 3.29 just in
>case something changed recently.
>
>Thanks
>Tom
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Error when renaming a table when an invalid view exists in the schema

2019-05-03 Thread Tom Bassel
Hello again everyone,

I went through the ALTER TABLE docs but could not find anything describing this 
behavior:


SQLite version 3.29.0 2019-04-27 20:30:19
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> create table t1(f1);
sqlite> create view v1 as select * from t1;
sqlite> drop table t1;
sqlite> create table t2(f1);
sqlite> .schema
CREATE VIEW v1 as select * from t1;
CREATE TABLE t2(f1);
sqlite> alter table t2 rename to t20;
Error: error in view v1: no such table: main.t1


That is, if an invalid view exists in the schema, then any attempt to alter the 
name of another unrelated table will fail with the error message that there is 
a view that references a different table that does not exist.

It also occurs in 3.27 but I downloaded and tried it on 3.29 just in case 
something changed recently.

Thanks
Tom
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] What could happen if an app crashes while calling sqlite3_rekey_v2()?

2019-05-03 Thread Richard Hipp
On 5/3/19, Jens Alfke  wrote:
>
> Thanks, Simon. So the rekey is expected to double the database file size?

Database size should be the same before and after rekey.  The rekey
operation will generate journal file entries that are about the same
size as the original database (actually slightly larger due to extra
per-page header information stored in the journal) but that temporary
space is transient. It is only needed in the event that a system crash
occurs in the middle of a rekey. Once the operation completes, total
disk usage reverts to what it was before.  No VACUUM required.

VACUUM is *highly* recommended for security if you take an unencrypted
database that has never before been encrypted and "rekey" it into an
encrypted database.  That is a totally separate issue.
-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] What could happen if an app crashes while calling sqlite3_rekey_v2()?

2019-05-03 Thread Pasin Suriyentrakorn
Thanks, Simon, for the answer. David, sqlite3_rekey_v2() is a function to
change the database encryption key - it's a SEE feature.

On Fri, May 3, 2019 at 9:56 AM Jens Alfke  wrote:

>
>
> > On May 3, 2019, at 8:37 AM, Simon Slavin  wrote:
> >
> > While rekey is working, the database temporarily contains both sets of
> some data.  Automatic crash recovery will see, this, recover the
> unconverted set, and wipe the converted set.  You do not need to take
> special precautions.
>
> Thanks, Simon. So the rekey is expected to double the database file size?
> That’s good to know.
> After the call completes, is the file size back to normal, or is a vacuum
> needed to reclaim disk space?
>
> It would be good to update the SEE docs to describe this, as storage space
> can be a significant issue on a resource-constrained device.
>
> —Jens
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] What could happen if an app crashes while calling sqlite3_rekey_v2()?

2019-05-03 Thread Simon Slavin
On 3 May 2019, at 5:56pm, Jens Alfke  wrote:

> Thanks, Simon. So the rekey is expected to double the database file size? 
> That’s good to know.

I don't know for sure.  I believe that the rekeying is done block by block 
rather than for the entire database at one time.

> After the call completes, is the file size back to normal, or is a vacuum 
> needed to reclaim disk space?

Free space will be released once rekeying is complete, or the next time the 
database is closed, or something like that.  You are not expected to do VACUUM 
or anything like it.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] What could happen if an app crashes while calling sqlite3_rekey_v2()?

2019-05-03 Thread Jens Alfke


> On May 3, 2019, at 8:37 AM, Simon Slavin  wrote:
> 
> While rekey is working, the database temporarily contains both sets of some 
> data.  Automatic crash recovery will see this, recover the unconverted set, 
> and wipe the converted set.  You do not need to take special precautions.

Thanks, Simon. So the rekey is expected to double the database file size? 
That’s good to know. 
After the call completes, is the file size back to normal, or is a vacuum 
needed to reclaim disk space?

It would be good to update the SEE docs to describe this, as storage space can 
be a significant issue on a resource-constrained device.

—Jens
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Go & SQLite asserts

2019-05-03 Thread Jens Alfke


> On May 3, 2019, at 3:06 AM, Rowan Worth  wrote:
> 
> But there's a very clear
> convention set out surrounding the use of panic() - it should never form
> part of a module's public API. It's not an error reporting mechanism and
> callers should never be expected to invoke recover() just to use your
> module.


At the risk of going farther OT…

There’s precedent for this: consider what happens in Go if you access a nil 
reference. IIRC (I haven’t used Go much in a few years) it triggers a panic. So 
basically, the “.” operator has a sort of “assert(ref != nil)” in its 
implementation. Again IIRC, the same thing is true of the integer division 
operator when the denominator is zero.

In neither case is this part of an API, it’s very literally a “panic” in the 
usual sense of the word: an escape from an intolerable situation. Exactly the 
same would be true of other situations where an assert() function could be 
used. Of course it shouldn’t be overused, but reserved for a situation where 
either (a) the flow of control would soon panic anyway at the language level 
due to a nil deref or something similar, and an explicit assertion failure just 
makes the failure easier to diagnose; or (b) continuing would have damaging 
effects like corrupting state.

But I’ve learned not to question Go design decisions in their forums; it tends 
to lead to a lecture on why they’re right and I’m wrong. (One of the minor 
reasons I don’t use Go anymore.)

—Jens
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] What could happen if an app crashes while calling sqlite3_rekey_v2()?

2019-05-03 Thread J. King
It's part of the encryption extension. The function signature is 
referenced publicly here:



--
J. King

On 2019-05-03 12:04:32, "David Raymond"  
wrote:



Random question from a non-C person: What is sqlite3_rekey_v2()?

I was curious, so looked for it in the docs and don't see it listed in the C 
reference...
https://www.sqlite.org/c3ref/funclist.html
...and it doesn't get any hits when put into the search box for the web page.


-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Simon Slavin
Sent: Friday, May 03, 2019 11:38 AM
To: SQLite mailing list
Subject: Re: [sqlite] What could happen if an app crashes while calling 
sqlite3_rekey_v2()?

On 3 May 2019, at 4:15pm, Pasin Suriyentrakorn  wrote:


 What could happen if an app crashes while calling sqlite3_rekey_v2()? Is there a 
best practice for safely calling sqlite3_rekey_v2()?


While rekey is working, the database temporarily contains both sets of some 
data.  Automatic crash recovery will see this, recover the unconverted set, 
and wipe the converted set.  You do not need to take special precautions.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] What could happen if an app crashes while calling sqlite3_rekey_v2()?

2019-05-03 Thread David Raymond
Random question from a non-C person: What is sqlite3_rekey_v2()?

I was curious, so looked for it in the docs and don't see it listed in the C 
reference...
https://www.sqlite.org/c3ref/funclist.html
...and it doesn't get any hits when put into the search box for the web page.


-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Simon Slavin
Sent: Friday, May 03, 2019 11:38 AM
To: SQLite mailing list
Subject: Re: [sqlite] What could happen if an app crashes while calling 
sqlite3_rekey_v2()?

On 3 May 2019, at 4:15pm, Pasin Suriyentrakorn  wrote:

> What could happen if an app crashes while calling sqlite3_rekey_v2()? Is 
> there a best practice for safely calling sqlite3_rekey_v2()?

While rekey is working, the database temporarily contains both sets of some 
data.  Automatic crash recovery will see this, recover the unconverted set, 
and wipe the converted set.  You do not need to take special precautions.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [EXTERNAL] Re: seeking information on the throughput requirement using Sqlite

2019-05-03 Thread David Raymond
Which again goes to the hardware question. If your database is on "spinning 
rust" as people call it, you can't commit a transaction (that changed the data) 
any quicker than 2 rotations of the physical disk, because the data needs to 
get synced to the disk before the next write transaction can start. So a 7,200 
rpm drive (120 rotations per second) is hard limited to roughly 60 commits per 
second. (Someone will correct me if I'm wrong)

Inside a transaction you can work on the cache and go as fast as you want; it's 
just when you go to save your work at the end that you have to wait.

There is a pragma (synchronous) that you can set so SQLite does not wait for the 
sync to complete and just keeps going, if you're OK with being told that a 
transaction committed even though it may not actually have been saved to disk 
yet. (Which is fine for a lot of things.) I _think_ that then there's no delay 
between write transaction A committing and write transaction B starting. 
(Again, someone will correct me if I'm wrong)
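
A rough sketch of what that looks like in SQL (the table name is made up;
synchronous=OFF trades durability on power loss for commit speed):

PRAGMA synchronous = OFF;   -- don't wait for the data to be confirmed on disk
BEGIN;
INSERT INTO samples(ts, val) VALUES (1, 1.0);
-- ... many more inserts ...
COMMIT;                     -- returns without waiting for the data to reach the platter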



-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Simon Slavin
Sent: Friday, May 03, 2019 11:34 AM
To: SQLite mailing list
Subject: Re: [sqlite] [EXTERNAL] Re: seeking information on the throughput 
requirement using Sqlite

On 3 May 2019, at 4:26pm, Zhu, Liang [AUTOSOL/ASSY/US]  
wrote:

> The planned transaction rate is 1 ms per 34-40Kb of data, and we only have one 
> connection trying to write to the database.  If we have two connections, will 
> we run into database concurrency issues?

If you have normal hardware, using one connection to do all your writing will 
be simpler and allow you to use faster settings.  You can have one writing 
connection and many reading connections without slowing down access.  If you 
want to use two writing connections, they may have to wait for one another.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Please remove my email address from the list

2019-05-03 Thread Simon Slavin
On 3 May 2019, at 4:34pm, Jean Marcoux  wrote:

[nothing]

Please click on the link at the bottom of every post, including this one, and 
remove yourself.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] What could happen if an app crashes while calling sqlite3_rekey_v2()?

2019-05-03 Thread Simon Slavin
On 3 May 2019, at 4:15pm, Pasin Suriyentrakorn  wrote:

> What could happen if an app crashes while calling sqlite3_rekey_v2()? Is 
> there a best practice for safely calling sqlite3_rekey_v2()?

While rekey is working, the database temporarily contains both sets of some 
data.  Automatic crash recovery will see this, recover the unconverted set, 
and wipe the converted set.  You do not need to take special precautions.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Please remove my email address from the list

2019-05-03 Thread Jean Marcoux

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [EXTERNAL] Re: seeking information on the throughput requirement using Sqlite

2019-05-03 Thread Simon Slavin
On 3 May 2019, at 4:26pm, Zhu, Liang [AUTOSOL/ASSY/US]  
wrote:

> The planned transaction rate is 1 ms per 34-40Kb of data, and we only have one 
> connection trying to write to the database.  If we have two connections, will 
> we run into database concurrency issues?

If you have normal hardware, using one connection to do all your writing will 
be simpler and allow you to use faster settings.  You can have one writing 
connection and many reading connections without slowing down access.  If you 
want to use two writing connections, they may have to wait for one another.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite windows function support

2019-05-03 Thread Richard Hipp
On 5/3/19, Chien-Chih Yu  wrote:
> Since all current window functions are "draft version"...

I'm not sure where you are getting this information.  See
https://www.sqlite.org/windowfunctions.html for the official
documentation.

Ahead of each release, we always publish advance copies of the new
website on which *every* page is labeled as "DRAFT".  See
https://www.sqlite.org/draft/windowfunctions.html or even
https://www.sqlite.org/draft/index.html for example.  Are you looking
at one of these pre-release copies of the documentation?  If so, you
can simply remove the "/draft" from the middle of the URL in order to
get the officially released document.
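
As a side note, window functions have been included in release builds since
SQLite 3.25.0, so a query like the following (the table is made up purely for
illustration) runs on any officially released build from that version on:

CREATE TABLE t(grp TEXT, val INTEGER);
INSERT INTO t VALUES ('a', 1), ('a', 2), ('b', 3);
SELECT grp, val,
       row_number() OVER (PARTITION BY grp ORDER BY val) AS rn
FROM t;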

-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] What could happen if an app crashes while calling sqlite3_rekey_v2()?

2019-05-03 Thread Pasin Suriyentrakorn
Hello,

What could happen if an app crashes while calling sqlite3_rekey_v2()? Is there a 
best practice for safely calling sqlite3_rekey_v2()?

Thanks,
Pasin
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [EXTERNAL] Re: seeking information on the throughput requirement using Sqlite

2019-05-03 Thread Zhu, Liang [AUTOSOL/ASSY/US]
The planned transaction rate is 1 ms per 34-40Kb of data, and we only have one 
connection trying to write to the database.  If we have two connections, will 
we run into database concurrency issues?

Thank you,
Liang

-Original Message-
From: sqlite-users  On Behalf Of 
David Raymond
Sent: Friday, May 03, 2019 11:11 AM
To: SQLite mailing list 
Subject: [EXTERNAL] Re: [sqlite] seeking information on the throughput 
requirement using Sqlite

Within a single transaction SQLite can do things very quickly. But a reminder 
that there can be only 1 write transaction happening at a time. So my questions 
are: What is the planned _transaction_ rate? And how many different connections 
will be trying to write at once?


-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Zhu, Liang [AUTOSOL/ASSY/US]
Sent: Friday, May 03, 2019 10:01 AM
To: sqlite-users@mailinglists.sqlite.org
Subject: [sqlite] seeking information on the throughput requirement using Sqlite

Hi Sqlite experts,

In my current project, we need to insert and delete data to and from the 
database every 1 to 10 ms, 24/7.  I am seeking clarification on the following 
questions:


  1.  What throughput is possible using the Sqlite API?  In other words, what 
is the maximum speed at which my inserting and deleting operations can run?
  2.  When we insert and delete data at the speed mentioned above, what kind 
of database maintenance do we need to do to maintain the performance of the 
database?
  3.  How does constant deleting and inserting affect database performance?


Thank you,

Liang Zhu | Lead Software Engineer | Branson Ultrasonics
Emerson Automation Solutions | 41 Eagle Road | Danbury, CT 06810 | USA
T (203) 796-2235 | F (203) 796-0380
liang@emerson.com

The information contained in this message is confidential or protected by law. 
If you are not the intended recipient, please contact the sender and delete 
this message. Any unauthorized copying of this message or unauthorized 
distribution of the information contained herein is prohibited.

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] SQLite windows function support

2019-05-03 Thread Chien-Chih Yu
To whom it may concern
We are evaluating the use of SQLite window functions.
Since all current window functions are "draft version", we would like to know 
when these window functions will become a stable version.

Thanks
Chien-Chih Yu

This email and any attachments are intended for the sole use of the named 
recipient(s) and contain(s) confidential information that may be proprietary, 
privileged or copyrighted under applicable law. If you are not the intended 
recipient, do not read, copy, or forward this email message or any attachments. 
Delete this email message and any attachments immediately.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] seeking information on the throughput requirement using Sqlite

2019-05-03 Thread David Raymond
Within a single transaction SQLite can do things very quickly. But a reminder 
that there can be only 1 write transaction happening at a time. So my questions 
are: What is the planned _transaction_ rate? And how many different connections 
will be trying to write at once?
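
To illustrate, batching work into one explicit transaction pays the per-commit
cost once instead of once per row (a sketch; the table is made up):

-- autocommit: each INSERT is its own transaction, with its own sync
INSERT INTO log(msg) VALUES ('a');
INSERT INTO log(msg) VALUES ('b');

-- batched: one transaction, one sync at COMMIT
BEGIN;
INSERT INTO log(msg) VALUES ('a');
INSERT INTO log(msg) VALUES ('b');
COMMIT;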


-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Zhu, Liang [AUTOSOL/ASSY/US]
Sent: Friday, May 03, 2019 10:01 AM
To: sqlite-users@mailinglists.sqlite.org
Subject: [sqlite] seeking information on the throughput requirement using Sqlite

Hi Sqlite experts,

In my current project, we need to insert and delete data to and from the 
database every 1 to 10 ms, 24/7.  I am seeking clarification on the following 
questions:


  1.  What throughput is possible using the Sqlite API?  In other words, what 
is the maximum speed at which my inserting and deleting operations can run?
  2.  When we insert and delete data at the speed mentioned above, what kind 
of database maintenance do we need to do to maintain the performance of the 
database?
  3.  How does constant deleting and inserting affect database performance?


Thank you,

Liang Zhu | Lead Software Engineer | Branson Ultrasonics
Emerson Automation Solutions | 41 Eagle Road | Danbury, CT 06810 | USA
T (203) 796-2235 | F (203) 796-0380
liang@emerson.com

The information contained in this message is confidential or protected by law. 
If you are not the intended recipient, please contact the sender and delete 
this message. Any unauthorized copying of this message or unauthorized 
distribution of the information contained herein is prohibited.

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] seeking information on the throughput requirement using Sqlite

2019-05-03 Thread Stephen Chrzanowski
All of that is going to ENTIRELY depend on your hardware, and how fast it's
going to allow the electrical pulses and actual processing of the different
chips on your mobo, and the kind of data you're going to be adding,
requesting, and removing.  There's no clear-cut answer with the information
provided.

You're going to get throttled if this sits in a VM, bar none.  If you have
other machines that are running on the host, you've got a few layers in the
stack between your VM and the metal that needs to handle your data, as well
as other data in the other VMs.  Mind you, if you have a VM sitting solo on
your host, you'll be darn near 1:1 for Metal:VM speeds, but it isn't
exactly 1:1.
You're going to get throttled if you have other processes taking disk IO
and CPU.
You're going to get throttled at your database design if you don't include
indexes.
You're going to get throttled at the volume of data you're adding and
deleting (it'll be faster to delete, as SQLite just marks space as being
available and overwrites existing data, but then you could be looking at
fragmentation not only in the database but on the disk as well).
The type of file system you're using can also be a throttle point.

The physical database size doesn't really come into play, unless you're
talking billions of index records that need to be dealt with per call.
Actual data is returned to your application one row at a time, not as a
full chunk of data for your entire request.  On any query, SQLite will
reference indexes heavily (if available, and if it makes sense), and
doesn't touch or look at anything it doesn't need.

The more hardware you throw at it, the faster you're going to get, but on
top of that, ultimately, you're limited by the speed at which the
electricity flows through the tiny little wires and connections.


On Fri, May 3, 2019 at 10:02 AM Zhu, Liang [AUTOSOL/ASSY/US] <
liang@emerson.com> wrote:

> Hi Sqlite experts,
>
> In my current project, we need to insert and delete data to and from the
> database every 1 to 10 ms, 24/7.  I am seeking clarification on the
> following questions:
>
>
>   1.  What throughput is possible using the Sqlite API?  In other words,
> what is the maximum speed at which my inserting and deleting operations can
> run?
>   2.  When we insert and delete data at the speed mentioned above, what
> kind of database maintenance do we need to do to maintain the performance
> of the database?
>   3.  How does constant deleting and inserting affect database performance?
>
>
> Thank you,
>
> Liang Zhu | Lead Software Engineer | Branson Ultrasonics
> Emerson Automation Solutions | 41 Eagle Road | Danbury, CT 06810 | USA
> T (203) 796-2235 | F (203) 796-0380
> liang@emerson.com
>
> The information contained in this message is confidential or protected by
> law. If you are not the intended recipient, please contact the sender and
> delete this message. Any unauthorized copying of this message or
> unauthorized distribution of the information contained herein is prohibited.
>
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] seeking information on the throughput requirement using Sqlite

2019-05-03 Thread Simon Slavin
On 3 May 2019, at 3:01pm, Zhu, Liang [AUTOSOL/ASSY/US]  
wrote:

> In my current project, we need to insert and delete data to and from the 
> database every 1 to 10 ms, 24/7.  I am seeking clarification on the following 
> questions:
> 
>  1.  What throughput is possible using the Sqlite API?  In other words, what 
> is the maximum speed at which my inserting and deleting operations can run?

The figures you quote are possible.  The fastest speeds I've seen for SQLite 
are 96,000 INSERTs per second.

But SQLite speed depends most on the operating system and hardware (especially 
file storage) system you are using.  Fortunately SQLite performs the same in a 
tiny test program and in a big production program, so you can write a small 
test program to run on your hardware and get useful and interesting results 
from it.
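
The sqlite3 shell is enough for a first rough measurement (a sketch; the table
is made up, and .timer plus the recursive CTE work in any recent release):

sqlite> .timer on
sqlite> CREATE TABLE t(x INTEGER);
sqlite> BEGIN;
sqlite> WITH RECURSIVE c(i) AS (VALUES(1) UNION ALL SELECT i+1 FROM c WHERE i < 100000)
   ...>   INSERT INTO t SELECT i FROM c;
sqlite> COMMIT;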

Another thing which influences speed is whether you have one thread/connection 
talking to the database, or have many trying to talk to the database at the 
same time.

>  2.  When we insert and delete data at the speed mentioned above, what kind 
> of database maintenance do we need to do to maintain the performance of the 
> database?

SQLite databases require no maintenance.  They should continue working forever 
without attention.  However, you might like to use a yearly routine which 
checks the database for corruption.  It might also rebuild the database (like 
defragmenting) which might speed things up slightly, or it might make no change 
at all.
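
Such a routine can be as small as this, run from the shell or any wrapper (a
sketch):

PRAGMA integrity_check;   -- prints 'ok' when the file is healthy
VACUUM;                   -- optional rebuild, similar to defragmenting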

>  3.  How does constant deleting and inserting affect database performance?

SQLite should not slow down much even after millions of changes to a database.  
SQLite automatically reclaims file space released by deleting data.

You might find it useful to read



Simon.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] seeking information on the throughput requirement using Sqlite

2019-05-03 Thread Zhu, Liang [AUTOSOL/ASSY/US]
Hi Sqlite experts,

In my current project, we need to insert and delete data to and from the 
database every 1 to 10 ms, 24/7.  I am seeking clarification on the following 
questions:


  1.  What throughput is possible using the Sqlite API?  In other words, what 
is the maximum speed at which my inserting and deleting operations can run?
  2.  When we insert and delete data at the speed mentioned above, what kind 
of database maintenance do we need to do to maintain the performance of the 
database?
  3.  How does constant deleting and inserting affect database performance?


Thank you,

Liang Zhu | Lead Software Engineer | Branson Ultrasonics
Emerson Automation Solutions | 41 Eagle Road | Danbury, CT 06810 | USA
T (203) 796-2235 | F (203) 796-0380
liang@emerson.com

The information contained in this message is confidential or protected by law. 
If you are not the intended recipient, please contact the sender and delete 
this message. Any unauthorized copying of this message or unauthorized 
distribution of the information contained herein is prohibited.

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Index on REAL malfunctions in combination with the LENGTH function

2019-05-03 Thread Richard Hipp
On 5/3/19, Manuel Rigger  wrote:
> I just tried the examples on the int-real branch,
> and it seems that they do not cause any errors.

That is good to hear.

It would be helpful if you could gather together all of your test
cases and send them to us, so that we could add them to the SQLite
regression tests.

-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Index on REAL malfunctions in combination with the LENGTH function

2019-05-03 Thread Manuel Rigger
Thanks for the feedback! I just tried the examples on the int-real branch,
and it seems that they do not cause any errors.

So I guess these two examples can be considered to be duplicate bug reports
of my "Index on REAL column malfunctions when multiplying with a string"
email.

Best,
Manuel

On Fri, May 3, 2019 at 1:49 PM Richard Hipp  wrote:

> On 5/3/19, Manuel Rigger  wrote:
> > I found another similar example with GLOB:
> >
> > CREATE TABLE test (c0 REAL);
> > CREATE UNIQUE INDEX index_0 ON test(('0' GLOB -c0));
> > INSERT INTO test(c0) VALUES (1.0), (0.0);
> > REINDEX;
> >
> > Is the issue related to the implementation of the unary minus operator
> > (i.e., do both examples trigger the same bug)?
>
> That seems likely.  Did you try this using the int-real branch
> (https://www.sqlite.org/src/timeline?r=int-real)?  That was my latest
> effort to address this corner case. We are working on some more
> pressing concerns right this moment.  I will return to that branch,
> finish testing it, and perhaps merge it to trunk, when I get a chance.
>
> --
> D. Richard Hipp
> d...@sqlite.org
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Index on REAL malfunctions in combination with the LENGTH function

2019-05-03 Thread Richard Hipp
On 5/3/19, Manuel Rigger  wrote:
> I found another similar example with GLOB:
>
> CREATE TABLE test (c0 REAL);
> CREATE UNIQUE INDEX index_0 ON test(('0' GLOB -c0));
> INSERT INTO test(c0) VALUES (1.0), (0.0);
> REINDEX;
>
> Is the issue related to the implementation of the unary minus operator
> (i.e., do both examples trigger the same bug)?

That seems likely.  Did you try this using the int-real branch
(https://www.sqlite.org/src/timeline?r=int-real)?  That was my latest
effort to address this corner case. We are working on some more
pressing concerns right this moment.  I will return to that branch,
finish testing it, and perhaps merge it to trunk, when I get a chance.

-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Index on REAL malfunctions in combination with the LENGTH function

2019-05-03 Thread Manuel Rigger
I found another similar example with GLOB:

CREATE TABLE test (c0 REAL);
CREATE UNIQUE INDEX index_0 ON test(('0' GLOB -c0));
INSERT INTO test(c0) VALUES (1.0), (0.0);
REINDEX;

Is the issue related to the implementation of the unary minus operator
(i.e., do both examples trigger the same bug)?

Best,
Manuel

On Fri, May 3, 2019 at 1:22 PM Manuel Rigger 
wrote:

> Hi,
>
> I found another very specific corner case for which an index fails
> ("UNIQUE constraint failed: index 'index_0'"):
>
> CREATE TABLE test (c0 REAL);
> CREATE UNIQUE INDEX index_0 ON test(LENGTH(-c0));
> INSERT INTO test(c0) VALUES (0.0), ('10:');
> REINDEX;
>
>
> Best,
> Manuel
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Index on REAL malfunctions in combination with the LENGTH function

2019-05-03 Thread Manuel Rigger
Hi,

I found another very specific corner case for which an index fails ("UNIQUE
constraint failed: index 'index_0'"):

CREATE TABLE test (c0 REAL);
CREATE UNIQUE INDEX index_0 ON test(LENGTH(-c0));
INSERT INTO test(c0) VALUES (0.0), ('10:');
REINDEX;


Best,
Manuel
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Bug report: Segfault in sqlite3_clear_bindings when statement is nullptr

2019-05-03 Thread Christof Arnosti
Hi,

Since the last mail I sent was not really readable due to problems with
my mail settings, I am resending the possible bug report below.


I want to report a (possible) bug in sqlite3. When
sqlite3_clear_bindings is called with a nullptr argument, a
SEGFAULT occurs.

From the behavior of the other functions that take a statement as a
parameter, I expected sqlite3_clear_bindings with a nullptr argument to
be a no-op, since sqlite3_bind_*, sqlite3_step, sqlite3_reset and
sqlite3_finalize (those are the ones I checked) don't crash.

Please have a look at the example below.

#include 
#include "sqlite3.h"

sqlite3* db;

int main()
{
    // DB Setup
    sqlite3_open(":memory:", );
    sqlite3_exec(db, "CREATE TABLE test (id INTEGER PRIMARY KEY);",
nullptr, nullptr, nullptr);

    // Working example
    sqlite3_stmt* workingStatement;
    sqlite3_prepare(db, "SELECT * from test where id = ?", -1,
, nullptr);
    sqlite3_bind_int(workingStatement, 0, 0);
    sqlite3_step(workingStatement);
    sqlite3_clear_bindings(workingStatement);
    sqlite3_reset(workingStatement);
    sqlite3_finalize(workingStatement);

    // Crashing example. The Statement can't be created because of the
nonexisting table.
    sqlite3_stmt* nonWorkingStatement;
    sqlite3_prepare(db, "SELECT * from nonexisting where id = ?", -1,
, nullptr);
    sqlite3_bind_int(nonWorkingStatement, 0, 0);
    sqlite3_step(nonWorkingStatement);
    sqlite3_clear_bindings(nonWorkingStatement); // SEGFAULT
    sqlite3_reset(nonWorkingStatement);
    sqlite3_finalize(nonWorkingStatement);
}

Thanks for your great work!

Best regards
Christof Arnosti

-
This e-mail is confidential and may contain privileged information. It is 
intended only for the addressees. If you have received this e-mail in error, 
kindly notify us immediately by telephone or e-mail and delete the message from 
your system. 
-
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Go & SQLite asserts

2019-05-03 Thread Rowan Worth
On Fri, 3 May 2019 at 16:03, Dominique Devienne  wrote:

> On Mon, Apr 29, 2019 at 9:49 PM Russ Cox  wrote:
>
> > On Mon, Apr 29, 2019 at 3:28 PM Richard Hipp  wrote:
> > For what it's worth, it was not clear to me until just now that the
> article
> > existed to push back on a general "asserts considered harmful" notion. I
> > was reading it as primarily documenting SQLite coding conventions. The
> > reference to Go makes more sense to me now.
> >
>
> Very interesting discussion between two of my very favorite programmers.
> Thanks.
>

Seconded!


> But I also regret the lack of invariant/always/never (I dare not say
> assert...) in Go. I wish Robert/Rob/Ken/Russ instead of shunning
> "assertions" would
> promote "proper use" of invariant/always/never by making them a built-in
> part of the language, and have the compiler perform the kind of static
> analysis
> Richard mentions in this thread. My $0.02. --DD
>

This is getting off-topic, but the immediate question arising from this
proposal is what would you _do_ in response to a violation? For those
unfamiliar with go, the language doesn't feature exceptions. The only
stack-unwinding mechanism is to call panic(), which will terminate the
entire process (unless captured by a call to recover() before unwinding the
entire stack).

Go is also a very modular language, designed to be very easy to drop other
people's code/libraries into your project. But there's a very clear
convention set out surrounding the use of panic() - it should never form
part of a module's public API. It's not an error reporting mechanism and
callers should never be expected to invoke recover() just to use your
module.

I can't see another way to implement invariant/always/never other than
creating some "blessed" form of panic(), and if you do that then the
cross-module convention shifts from a very clear "DO NOT" to a more subtle
"UNLESS THINGS HAVE GONE REALLY SIDEWAYS." Now there's a judgement call
involved in whether it's acceptable for a given API call to bring the whole
process down when provided nonsense state, and since judgement varies
between individuals the effect that has on the entire go ecosystem could be
huge.

And I guess this is the core of the disagreement - when used "correctly"
assertions are informative and helpful but there's no way to enforce
"correct" usage. Admittedly you could say this about a lot of programming
constructs but I don't think we can do away with arrays just yet!

But it also highlights one of the great things about go, which is that you
don't have to agree with all its design decisions to reap their benefits.
There's real advantages to sticking to your guns when it comes to
conceptual integrity, even if it doesn't suit absolutely everybody. It's
why we like the "lite" in sqlite, no?

-Rowan (aka sqweek -- hi Russ ^_^)
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Go & SQLite asserts

2019-05-03 Thread Dominique Devienne
On Mon, Apr 29, 2019 at 9:49 PM Russ Cox  wrote:

> On Mon, Apr 29, 2019 at 3:28 PM Richard Hipp  wrote:
> For what it's worth, it was not clear to me until just now that the article
> existed to push back on a general "asserts considered harmful" notion. I
> was reading it as primarily documenting SQLite coding conventions. The
> reference to Go makes more sense to me now.
>

Very interesting discussion between two of my very favorite programmers.
Thanks.
(programmer here in the very general sense of developer, architect,
designer, even perhaps "code-artist")

I also agree with all your comments about the benefits of invariant(x) when
> it is used properly.
>

I have no business being in this discussion between giants...

But I also regret the lack of invariant/always/never (I dare not say
assert...)
in Go. I wish Robert/Rob/Ken/Russ instead of shunning "assertions" would
promote "proper use" of invariant/always/never by making them a built-in
part of
the language, and have the compiler perform the kind of static analysis
Richard
mentions in this thread. My $0.02. --DD
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users