Re: [sqlite] Fwd: [sqlite-forum] Convert datetime string to second since Epoch with millisecond precision

2020-03-17 Thread Keith Medcalf

select (strftime('%J', '2016-06-13T09:36:34.123Z') - 2440587.5) * 86400.0;
1465810594.123
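The same one-call conversion can be checked from Python's bundled sqlite3 module (a sketch; the constants 2440587.5, the Julian Day number of the Unix epoch, and 86400 seconds per day come from the expression above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# strftime('%J', ...) yields the Julian Day number, so subtracting the
# epoch's Julian Day and scaling to seconds converts the timestamp in a
# single parse, keeping the millisecond fraction.
(secs,) = con.execute(
    "select (strftime('%J', '2016-06-13T09:36:34.123Z') - 2440587.5) * 86400.0"
).fetchone()
print(secs)
```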

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Dominique Devienne
>Sent: Tuesday, 17 March, 2020 10:37
>To: General Discussion of SQLite Database us...@mailinglists.sqlite.org>
>Subject: [sqlite] Fwd: [sqlite-forum] Convert datetime string to second
>since Epoch with millisecond precision
>
>Reposting to the ML, maybe I'll have more luck there, than in the forum?
>--DD
>
>-- Forwarded message -
>From: ddevienne 
>Date: Tue, Mar 17, 2020 at 5:09 PM
>Subject: [sqlite-forum] Convert datetime string to second since Epoch
>with
>millisecond precision
>To: 
>
>
>Forum post by ddevienne on 2020-03-17 16:09:17
>https://sqlite.org/forum/forumpost/0d9c338ff1
>
>Hi. Below does what I want, but surely there's a better way than parsing
>the datetime 3 times? Because the below is ugly as hell. Thanks, --DD
>
>```
>C:\Users\ddevienne>sqlite3
>SQLite version 3.28.0 2019-04-16 19:49:53
>Enter ".help" for usage hints.
>Connected to a transient in-memory database.
>Use ".open FILENAME" to reopen on a persistent database.
>sqlite> select
>   ...>cast(strftime('%s', '2016-06-13T09:36:34.123Z') as real) +
>   ...> strftime('%f', '2016-06-13T09:36:34.123Z') -
>   ...>cast(strftime('%S', '2016-06-13T09:36:34.123Z') as real)
>   ...> ;
>1465810594.123
>sqlite>
>```
>--
>Subscription info:
>https://sqlite.org/forum/alerts/54F6DD420B31FA7F9F69F5498F1631F5E6D4B48CF
>97539FDEEF90F71733E90A9
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users





Re: [sqlite] Cannot select specific columns from temp.sqlite_master unless temp.sqlite_master is aliased

2020-03-13 Thread Keith Medcalf

On Friday, 13 March, 2020 20:14, Justin Ng  wrote:

>I just encountered something weird with "temp" and "sqlite_master".
>I was wondering if it was another bug, or intentional.

The sqlite_master table in "temp" is called "sqlite_temp_master"  
(temp.sqlite_temp_master) even though it responds to the spelling 
temp.sqlite_master as well.  In either case the alias name is 
"sqlite_temp_master", not "sqlite_master".  You can specify your own alias name 
which will be respected.

Three level qualifiers are not recognized.  That is, you cannot use 
schema.table.column to identify a column -- it won't work.

I cannot answer why this is as it is, merely that this is indeed how it is.
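That both spellings resolve to the same catalog can be seen from Python's sqlite3 module (a sketch; the temp table name is arbitrary):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create temp table scratch(x)")

# Both spellings reach the temp database's schema table.
a = con.execute(
    "select name from temp.sqlite_master where type='table'"
).fetchall()
b = con.execute(
    "select name from sqlite_temp_master where type='table'"
).fetchall()
print(a, b)
```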






Re: [sqlite] New SQLite Forum established - this mailing list is deprecated

2020-03-12 Thread Keith Medcalf

Uck.  That is the most horrible looking thing I have ever seen in my life.  
Good luck with it.



>-Original Message-
>From: sqlite-users  On
>Behalf Of Richard Hipp
>Sent: Thursday, 12 March, 2020 15:29
>To: SQLite mailing list 
>Subject: Re: [sqlite] New SQLite Forum established - this mailing list is
>deprecated
>
>On 3/12/20, no...@null.net  wrote:
>> I am wondering what (apparently invisible)
>> anti-spam features are present.
>
>I will be happy to discuss that, and any other questions you have, on
>the Forum.  :-)
>
>--
>D. Richard Hipp
>d...@sqlite.org





Re: [sqlite] Maintaing a Last updated timestamp column using triggers

2020-03-12 Thread Keith Medcalf
>So is "julianday('now')" non-deterministic while "julianday()" _is_
>deterministic? That seems a little weird considering they're the same
>thing... right?

Yes.  Same as "datetime(julianday(), '+1 day')" and "datetime(datetime(), '+1 
day')" are deterministic but "datetime('now', '+1 day')" is not, even though they 
all have the same result.






Re: [sqlite] Maintaing a Last updated timestamp column using triggers

2020-03-12 Thread Keith Medcalf

On Thursday, 12 March, 2020 09:37, David Blake  wrote:

>What stops the
>UPDATE ports SET timeofday = CURRENT_TIMESTAMP WHERE id = NEW.id ;
>from also triggering the AFTER UPDATE ON recursively?

>Perhaps a pragma, or are triggers in SQLite non-recursive by default?

>I am using (now I have my semi-colons right)
>CREATE TRIGGER tgrUpdate AFTER UPDATE ON myTable FOR EACH ROW
>WHEN NEW.LastUpdate <= OLD.LastUpdate
>BEGIN
>UPDATE myTable SET  LastUpdate = DATETIME('now') WHERE id = OLD.id;
>END

>My intention is for the when to avoid infinite calls, but maybe I am
>fooling myself.

If you do not want the trigger to fire recursively you have to write it in such 
a way that it will not fire recursively since anyone can turn recursion on or 
off at any time.  Just because you decided to write a trigger that requires 
that recursive_triggers be turned off does not mean that recursive_triggers are 
turned off, merely that your design is insufficient.

It also depends on whether you want the "lastupdate" field to be an auditable 
field (that is, one that is only changed when a row is updated and cannot 
otherwise be changed) or not.  If you want to make it an auditable field that 
cannot be tampered with, then you need many triggers to make that work properly.  

Alternatively, you can use the new GENERATED ALWAYS AS (...) STORED to make it 
an automatically updated stored field and you do not need triggers at all, just 
a version of SQLite3 that does generated columns (version 3.31.0 from 
2020-01-22 or later).

create table MyData
(
  id   integer primary key,
  data,
  lastupdate real as (julianday()) stored
);

(of course, you can put other "stuff" such as storing an ISO8601 text timestamp 
if you want to (a) use more space and (b) have less precision)
(if you want a unixepoch secondstamp then use ((julianday()-2440587.5)*86400.0). 
 Resolution is only to the millisecond as that is all that is maintained 
internally, and even the julianday double precision floating point format has 
enough significant digits to accurately portray milliseconds until well after 
we have to solve the Year 10K problem.)

Generated columns make auditable "lastupdate" type data as simple to implement 
as using triggers to implement "createdon" type auditable data fields.
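For comparison, here is a sketch of the trigger-based alternative written so it cannot refire even with recursive_triggers turned on (table and trigger names are hypothetical): the trigger fires only on updates OF the data column, while its own inner UPDATE touches only lastupdate.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("pragma recursive_triggers = on")  # deliberately the worst case
con.executescript("""
    create table mytable (
        id   integer primary key,
        data,
        lastupdate real
    );
    -- Fires only when the data column changes; the inner UPDATE changes
    -- only lastupdate, so the trigger cannot fire itself recursively no
    -- matter what the recursive_triggers pragma is set to.
    create trigger mytable_touch after update of data on mytable
    for each row
    begin
        update mytable set lastupdate = julianday('now') where id = old.id;
    end;
""")
con.execute("insert into mytable (data) values ('a')")
con.execute("update mytable set data = 'b' where id = 1")
(stamp,) = con.execute("select lastupdate from mytable where id = 1").fetchone()
print(stamp)
```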







Re: [sqlite] COALESCE() does not short-circuit sometimes

2020-03-11 Thread Keith Medcalf

On Wednesday, 11 March, 2020 09:24, Justin Ng  
wrote:

>Sometimes, when querying data, rather than letting NULLs propagate,
>it might be better to throw an error if NULLs are not expected from an
>expression.

>The presence of NULLs might indicate an error in logic, data, or both.

Yes, it very well might.  I believe there is a term for a programmer that uses 
sentinels and then forgets to handle them.  There is even a diagnostic code for 
it.  Some people offend themselves when they see the code, however, because it 
lets their secret out of the bag.


>So, the COALESCE(x, ABS(-9223372036854775808)) construct is a hack to
>sanity-check queries.

It is not a "hack" because it does not work.  It is what is called a "failed 
attempt" at a hack.

However, your assessment that coalesce is not "short-circuiting" is incorrect.  
If x is not null, then x is returned and the constant expression is not 
returned.  If the constant expression were a correlated expression, then it 
would not be evaluated in the candidate context.  If x were null and the 
expression was not constant (and thus required evaluation) then it would be 
evaluated within the current context and its result returned.

The documentation DOES NOT say that the arguments to coalesce will not be 
evaluated, merely that the first one that is not-null will be returned.

https://sqlite.org/lang_corefunc.html#coalesce

>Something like THROW_IF_NULL(x)

You could certainly write a function that did that.  It would certainly be 
better than a "hack" that does not even work.







Re: [sqlite] No feedback for executing a mathematical expression

2020-03-10 Thread Keith Medcalf

On Tuesday, 10 March, 2020 01:22, Octopus ZHANG  wrote:

>I try to run a simple math expression, but SQLite gives no feedback :

>sqlite> select 99-(55/(30/57));

>Should I expect it to return nothing?

It is returning something.  It is returning NULL.

sqlite> .nullvalue 
sqlite> select 99-(55/(30/57));

sqlite>

99 - (55 / (30 / 57))

30 / 57 -> 0

55 / 0 -> NULL

99 - NULL -> NULL

If you want the result of 30/57 to be a floating point number (ie, not zero), 
you need to have one of those numbers be floating point, after which each 
successive operation will be carried out in floating point rather than integer 
arithmetic.

30. / 57 == 30 / 57. == 30. / 57. -> 0.526315789473684

55 / 0.526315789473684 -> 104.5

99 - 104.5 -> -5.5
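The same steps can be checked from Python's sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
run = lambda expr: con.execute("select " + expr).fetchone()[0]

print(run("30 / 57"))                  # integer division truncates to 0
print(run("55 / 0"))                   # division by zero yields NULL (None)
print(run("99 - (55 / (30 / 57))"))    # NULL propagates to the whole result
print(run("99 - (55 / (30 / 57.0))"))  # one float operand changes everything
```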






Re: [sqlite] How to open an sqlite3 for reading without being blocked by process writing to it?

2020-03-09 Thread Keith Medcalf

On Monday, 9 March, 2020 18:18, Peng Yu  wrote:

>But I never experience the problem in my original email when I used
>python3's default sqlite3 module (WAL was not used). What is the
>difference between the default sqlite3 module and apsw? Thanks.

The relevant difference is that the sqlite3 wrapper sets a default busy timeout 
of 5000 milliseconds when opening a connection, which you can change with the 
timeout parameter on the sqlite3.connect connection constructor.

APSW does not do anything behind your back, so if you want a non-zero 
busy_timeout you have to set one.

cn = apsw.Connection( ... )

cn.setbusytimeout(5000)
or
cn.cursor().execute('pragma busy_timeout=5000;')
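The sqlite3-wrapper side of the same setting, as a sketch (note the unit mismatch: the connect parameter is in seconds, the pragma in milliseconds):

```python
import sqlite3

# The stdlib wrapper already defaults to a 5 second busy timeout; both
# lines below therefore just restate its default explicitly.
con = sqlite3.connect(":memory:", timeout=5.0)                # seconds
(ms,) = con.execute("pragma busy_timeout = 5000").fetchone()  # milliseconds
print(ms)
```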







Re: [sqlite] sqlite3: .width counts bytes, not characters

2020-03-09 Thread Keith Medcalf
>I use sqlite3 (sqlite3 --version = "3.11.0 2016-02-15 17:29:24
>3d862f207e3adc00f78066799ac5a8c282430a5f" on Ubuntu 16.04.6 LTS) for

Have you tried a version more recent than 4 years and 1 month old?  
I think some of these issues may have been fixed in the last couple of years.






Re: [sqlite] How to open an sqlite3 for reading without being blocked by process writing to it?

2020-03-09 Thread Keith Medcalf

On Monday, 9 March, 2020 08:33, Simon Slavin  wrote:

>If your .shm and .wal files still exist when no apps are accessing the
>database, the most likely cause is that at least one of the apps is not
>closing its connection correctly.

or you are opening the database connection with SQLITE_OPEN_READONLY, because a 
READONLY connection cannot delete the shm and wal files as that would require 
writing, and that readonly connection happens to be the last one to close.

In other words where there are multiple connections to a WAL database, the 
associated WAL and SHM files will be deleted when the last connection to that 
database closes cleanly *unless* that connection is not permitted to write (was 
opened with SQLITE_OPEN_READONLY) or was otherwise configured not to delete the 
WAL and SHM files.

This is, of course, documented.

The original unreferenced contextless fragment of text was this:

There is an additional quasi-persistent "-wal" file and "-shm" shared
memory file associated with each database, which can make SQLite less
appealing for use as an application file-format.

This is because now you have three things which may in certain circumstances 
constitute 1 thing.  If journal_mode=DELETE you only have one thing being one 
thing -- the database file -- and if there is a -journal file then you know 
something "is broked".  However, in journal_mode=WAL it is not so simple.  More 
watts (not merely milliwatts) must be expended to understand why there may be 
three things constituting one thing, and that it does not necessarily indicate 
any "brokenness" but may rather be a necessary and normal state of affairs.  Or 
not.  But an external observer will not be able to tell.  Hence a version of 
Word that stores its documents in an SQLite database in WAL mode may not be 
suitable for use by someone who expects that "one document is one file".
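The normal lifecycle of the -wal file can be observed from Python's sqlite3 module (a sketch using a throwaway temporary file):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("pragma journal_mode = wal")
con.execute("create table t (x)")
con.commit()
wal_while_open = os.path.exists(path + "-wal")   # present while connected
con.close()
wal_after_close = os.path.exists(path + "-wal")  # removed after a clean
print(wal_while_open, wal_after_close)           # close by the last writer
```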







Re: [sqlite] How to open an sqlite3 for reading without being blocked by process writing to it?

2020-03-08 Thread Keith Medcalf

On Sunday, 8 March, 2020 21:24, Peng Yu  wrote:

>When I open an sqlite3 db using the following python code,

>conn=apsw.Connection(filepath, flags = apsw.SQLITE_OPEN_READONLY)

>, I got the following error.

>Traceback (most recent call last):
>  File "/xxx.py", line 21, in 
>for x in c.execute('SELECT (data) FROM sqlar'):
>  File "src/cursor.c", line 236, in resetcursor
>apsw.BusyError: BusyError: database is locked

>The db file is currently processed by a python script which opens the
>db file for writing in the following way.

>conn = apsw.Connection(filepath)

>Since the first process just reads, I'd like it not be blocked. What
>is the correct way to do so? Thanks.

Opening a connection with SQLITE_OPEN_READONLY only means that the database 
cannot be written using that connection.  It does not affect the locking and 
transaction system in any way; it merely means that if you try to "write" using 
the readonly connection you will get an error to the effect that the connection 
is read-only.

Perhaps you want to set a busy timeout or use WAL journal mode.






Re: [sqlite] Trigger name missing

2020-03-07 Thread Keith Medcalf

You mean like take a "boo" at the defined triggers?

select name, tbl_name, sql from sqlite_master where type='trigger';

would pretty much make clear that the defined trigger is not what you thought 
it was ...

create table x(x);
create trigger after insert on x begin select 1; end;

select name, tbl_name, sql from sqlite_master where type='trigger';
after|x|CREATE TRIGGER after insert on x begin select 1; end

seems pretty clear that the sql statement creates a trigger called "after" on a 
table called "x" and that the after was parsed as the trigger name.
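The same inspection from Python's sqlite3 module, reproducing the example above, shows the accidental name:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table x (x);
    -- The trigger name was omitted, so "after" is parsed as the name
    -- and the timing defaults to BEFORE.
    create trigger after insert on x begin select 1; end;
""")
row = con.execute(
    "select name, tbl_name from sqlite_master where type = 'trigger'"
).fetchone()
print(row)
```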



>-Original Message-
>From: sqlite-users  On
>Behalf Of John G
>Sent: Saturday, 7 March, 2020 09:49
>To: SQLite mailing list 
>Subject: Re: [sqlite] Trigger name missing
>
>Would it be possible to create an SQL verification program, which just
>like
>'sqlite3_analyzer' and 'sqldiff' could be run separately?
>It could *warn* about apparently incompletely defined triggers and other
>possible pitfalls.
>
>Then developers could use it before installing the next version of
>SQLite3.
>
>John G
>
>On Wed, 26 Feb 2020 at 19:09, Jean-Luc Hainaut luc.hain...@unamur.be>
>wrote:
>
>> On 26/02/2020 12:18, Richard Hipp wrote:
>> > On 2/26/20, Jean-Luc Hainaut  wrote:
>> >> Hi all,
>> >>
>> >> It seems that SQLite (version 31.1) accepts a trigger declaration in
>> >> which the name is missing. When fired, this trigger doesn't crashes
>but
>> >> exhibits a strange behaviour. In particular, while expression
>> >> "new." in an "insert" trigger returns the correct value,
>the
>> >> equivalent expression "select  from T where Id = new.Id"
>always
>> >> returns null (column "Id" is the PK of table "T"). Similarly,
>"update T
>> >> set  =   where Id = new.Id" (silently)
>fails.
>> >>
>> > What is the text of your trigger?
>>
>> This trigger belongs to a small experimental application I'm writting
>to
>> study the extent to what application code (initially in Java, Python,
>> etc.) can be integrated into SQL, notably through triggers. In short,
>> can one convert a standard 3-tier business application into just a GUI
>+
>> an active database, without the standard application program between
>them?
>> The following trigger controls the registration of a customer order
>> [insert into CUSTORDER(OrdID, CustID,ItemID,Qty,...) values (...)] when
>> the available quantity (Qavail) of the requested item is sufficient.
>> If the name 'CORD_INS1' is missing, this trigger (among others):
>>- updates the ITEM table. [successfully]
>>- completes the customer order (Price and State in CUSTORDER).
>[fails]
>>- creates an invoice (in CUSTINVOICE) and prints it in a text file.
>> [successfully]
>>
>> After reading all your explanations and comments, my interpretation is
>> as follows:
>> 1. The SQLite syntax tells me that the "before/after/instead of"
>keyword
>> can be missing, in which case (I guess) "before" is assumed.
>> 2. So, my "name-less" trigger is valid and must be read:
>> create trigger "after" before insert on CUSTORDER ...
>> 3. In a "before" trigger, the current row cannot be updated, since it
>> doesn't exist yet (though several RDBMS have a specific syntax for
>that).
>> 4. This explains why SQLite legitimely ignores the second update.
>> Am I right?
>> If I am, this behaviour is "not a bug but a feature". It could be
>useful
>> to precise these facts in the documentation.
>>
>> Thanks to all
>>
>> Jean-Luc Hainaut
>>
>> create table CUSTOMER (CustID,Name,Address,City,Account,...);
>> create table ITEM
>> (ItemID,Description,Price,QonHand,Qord,Qavail,Qmargin,...);
>> create table CUSTORDER
>(OrdID,DateOrd,CustID,ItemID,Price,Qty,State,...);
>> create table CUSTINVOICE
>> (InvID,DateInv,OrdID,CustID,ItemID,Price,Qty,Amount,State,...);
>> create table SUPPLIER (SuppID,Name,City,...);
>> create table OFFER (SuppID,ItemID,Price,Delay,...);
>> create table SUPPORDER
>(OrdID,DateOrd,ItemID,SuppID,Price,Qty,State,...);
>>
>> create trigger CORD_INS1
>> after insert on CUSTORDER
>> for each row
>> when new.Qty <= (select Qavail from ITEM where ItemID = new.ItemID)
>> and  not exists (select * from CUSTINVOICE where OrdID = new.OrdID)
>> begin
>> --
>> -- Subtract Qty from Qavail:
>> update ITEM
>> set   Qavail = Qavail - new.Qty
>> where ItemID = new.ItemID;
>> --
>> --...
>> -- Set CUSTORDER.State to 'invoiced' or 'pending'
>> update CUSTORDER
>> set   Price = (select Price from ITEM where ItemID = new.ItemID),
>>   State = case when new.Qty <= (select QonHand from ITEM where
>> ItemID = new.ItemID)
>>then 'invoiced'
>>else 'pending'
>>   end
>> where OrdID = new.OrdID;
>> --
>> -- Create an invoice and print it:
>> insert into CUSTINVOICE(...);
>> --
>> end;
>>
>> 

Re: [sqlite] Bug in SQLite version 3.31.1 and 3.32?

2020-03-06 Thread Keith Medcalf

On Friday, 6 March, 2020 19:25, Richard Damon  wrote:

>It is sort of like NaN, where a Nan is neither less than, greater than
>or equal to any value, including itself.

NULL (as in SQL NULL) means "missing value" or "unknown".  NULL represents any 
value within the domain, we simply do not know what that value is.  That is, 
the value "NULL" for colour of a car means that we do not know the colour -- 
however, it still has one.

NaN, on the other hand, means that the value is outside the domain and that 
there is no possible value of the domain which will render the proposition true.

For example, the state of Schroedinger's Cat is NULL.  It has a state, either 
dead or alive.  That state is merely unknown until one looks in the box.  
However, if when you looked in the box there was no cat, then the cat would be 
a NaN, since its state was outside of the domain of states for a cat in a box 
with a time-release poison after the release of the poison; that is, the 
non-existence of a cat in the box precludes the possibility of the state of the 
cat in the box being either dead or alive.






Re: [sqlite] Bug in SQLite version 3.31.1 and 3.32?

2020-03-06 Thread Keith Medcalf

On Friday, 6 March, 2020 17:48 Xinyue Chen  wrote:

...

>select t1.textid a, i.intid b
>  from t t1,
>   i i
> where (t1.textid = i.intid and t1.textid in (12) and t1.textid = i.intid) 
>or (t1.textid = null IS NOT FALSE)
>group by i.intid, t1.textid;

I got rid of all the extra brackets to make this easier to read.

The where clause wants either (a bunch of and joined conditions) to be true OR 
(another condition to be true).  We will ignore the first set of AND joined 
conditions since they appear to do what you want and instead deal with the 
handling of NULL values and tri-state logic from the second OR joined 
condition, which appears to be the one causing problems.

The expression (t1.textid = null) is always null (it is neither True nor 
False), no matter what the value of t1.textid because any value compared to 
NULL is NULL.  If you want to know whether t1.textid is null then you write 
"t1.textid is null" or (conversely) "t1.textid is not null"

NULL is FALSE -> False (NULL is not False)
NULL is TRUE  -> False (NULL is not True either)
NULL is not FALSE -> True  (it is True that NULL is not FALSE)
NULL is not TRUE  -> True  (it is True that NULL is not TRUE)

Since the condition that you have specified (t1.textid = NULL) IS NOT FALSE 
will always be true, then the logic value of condition on the "other side" of 
the OR is irrelevant -- the WHERE clause will always be TRUE.  This condition 
holds for any not null value you use in place of TRUE or FALSE.  That is:

NULL is 1 -> False (NULL is not 1)
NULL is 0 -> False (NULL is not 0 either)
NULL is not 1 -> True  (NULL is indeed not 1)
NULL is not 2 -> True  (NULL is indeed not 2)

This result will be the same if you change the IS NOT FALSE to IS NOT TRUE.  
However, if you specify IS TRUE or IS FALSE then this expression will always be 
FALSE and the value of the WHERE clause will depend on the result of the first 
set of AND joined conditions.

So your original query must and always devolves to:

select t1.textid a, i.intid b
from t t1,
 i i
group by i.intid, t1.textid;

for which the correct results are produced.
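The truth values above can be verified directly (a sketch via Python's sqlite3 module; the TRUE/FALSE keywords need SQLite 3.23 or later):

```python
import sqlite3

con = sqlite3.connect(":memory:")
val = lambda expr: con.execute("select " + expr).fetchone()[0]

print(val("null = null"))        # comparisons with NULL yield NULL
print(val("null is false"))      # NULL is not False
print(val("null is not false"))  # always true: this defeats the WHERE clause
```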






Re: [sqlite] Status of LSM1 extension

2020-03-05 Thread Keith Medcalf

On Thursday, 5 March, 2020 20:39, Charles Leifer  wrote:

>Keith, if you could share a bit more details on how you do that, I'd be
>interested.

I presume you mean how to create a "built-in" extension which is available for 
all connections, just like the built-in functions and modules.

There is a built-in define SQLITE_EXTRA_INIT used in main.c which allows for a 
"user function" to be called at the end of the SQLite3 initialization process.  
You compile the code/amalgamation with SQLITE_EXTRA_INIT defined to the name of 
a function that takes a NULL parameter and returns an SQLITE error code 
(SQLITE_OK, SQLITE_ERROR, etc).  This function is called *after* the SQLite3 
core is initialized.  It does additional initialization of the library.

For example, to autoload some extensions on every connection you can append the 
code for the extension to the amalgamation sqlite3.c file and then append your 
EXTRA_INIT function to add the extension init procedure to the auto extension 
list:

-DSQLITE_EXTRA_INIT=coreinit

static int coreinit(const char* dummy)
{
    return sqlite3_auto_extension((void*)sqlite3_series_init);
}

If you append series.c to the amalgamation, and then the coreinit function, and 
compile with -DSQLITE_EXTRA_INIT=coreinit, then when SQLite3 is initialized the 
coreinit function will run after the initialization is complete and add the 
sqlite3_series_init function to the auto_extension list.  Then when any new 
connection is opened the generate_series extension will be registered on that 
connection.

Of course, the extensions and the coreinit function do not have to be part of 
the amalgamation compilation unit -- they can be in a separate file and 
statically linked.






Re: [sqlite] Status of LSM1 extension

2020-03-05 Thread Keith Medcalf

On Thursday, 5 March, 2020 05:51, Dominique Devienne  
wrote:

>PS: I'd still very much appreciate an LSM1 amalgamation

cd ext/lsm1
tclsh tool/mklsm1c.tcl

which will write an lsm1.c amalgamation in the current directory (ext/lsm1)

You can append this to the amalgamation and use an EXTRA_INIT hook to 
initialize it, just like building in any other extension (though you need to 
define SQLITE_ENABLE_LSM1 in order for the extension code to be compiled)







Re: [sqlite] Report bug found in SQLite version 3.31.1

2020-03-02 Thread Keith Medcalf

Perhaps this is the same constant propagation bug that was fixed recently?


>-Original Message-
>From: sqlite-users  On
>Behalf Of Keith Medcalf
>Sent: Monday, 2 March, 2020 17:02
>To: SQLite mailing list 
>Subject: Re: [sqlite] Report bug found in SQLite version 3.31.1
>
>
>No reproduco
>
>SQLite version 3.32.0 2020-03-02 22:04:51
>Enter ".help" for usage hints.
>Connected to a transient in-memory database.
>Use ".open FILENAME" to reopen on a persistent database.
>sqlite> CREATE TABLE t (
>   ...>   textid TEXT
>   ...> );
>sqlite> INSERT INTO t
>   ...> VALUES ('12');
>sqlite> INSERT INTO t
>   ...> VALUES ('34');
>sqlite> CREATE TABLE i (
>   ...>   intid INTEGER PRIMARY KEY
>   ...> );
>sqlite> INSERT INTO i
>   ...> VALUES (12);
>sqlite> INSERT INTO i
>   ...> VALUES (34);
>sqlite> CREATE TABLE e (
>   ...>   x INTEGER PRIMARY KEY NOT NULL,
>   ...>   y TEXTNOT NULL
>   ...> );
>sqlite> -- Original query
>sqlite> select t1.textid a, i.intid b
>   ...> from t t1,
>   ...>  i i
>   ...> where ((t1.textid = i.intid) and (t1.textid = 12));
>12|12
>sqlite> -- Buggy query
>sqlite> select distinct t1.textid a, i.intid b
>   ...> from t t1,
>   ...>  i i,
>   ...>  t vucjp
>   ...> where ((t1.textid = i.intid) and (t1.textid = 12) and (t1.textid
>=
>   ...> vucjp.textid) and (t1.textid = vucjp.textid));
>12|12
>
>
>
>--
>The fact that there's a Highway to Hell but only a Stairway to Heaven
>says a lot about anticipated traffic volume.
>
>>-Original Message-
>>From: sqlite-users  On
>>Behalf Of Xinyue Chen
>>Sent: Monday, 2 March, 2020 16:41
>>To: SQLite mailing list 
>>Subject: Re: [sqlite] Report bug found in SQLite version 3.31.1
>>
>>Hi josé,
>>
>>This bug is found in 3.31.1 but you are running it in 3.30.1.
>>
>>Best,
>>Xinyue Chen
>>
>>On Mon, Mar 2, 2020 at 3:36 PM Jose Isaias Cabrera 
>>wrote:
>>
>>> Xinyue Chen, on Monday, March 2, 2020 06:21 PM, wrote...
>>> >
>>> > Hi,
>>> >
>>> > I found a bug in the most recent SQLite release version 3.31.1
>>> 2020-01-27.
>>> > My initial test environment is macOS 10.14.6 (18G87) and I have
>>tested in
>>> > https://sqliteonline.com/.
>>> >
>>> > CREATE TABLE t (
>>> >   textid TEXT
>>> > );
>>> > INSERT INTO t
>>> > VALUES ('12');
>>> > INSERT INTO t
>>> > VALUES ('34');
>>> > CREATE TABLE i (
>>> >   intid INTEGER PRIMARY KEY
>>> > );
>>> > INSERT INTO i
>>> > VALUES (12);
>>> > INSERT INTO i
>>> > VALUES (34);
>>> > CREATE TABLE e (
>>> >   x INTEGER PRIMARY KEY NOT NULL,
>>> >   y TEXTNOT NULL
>>> > );
>>> > -- Original query
>>> > select t1.textid a, i.intid b
>>> > from t t1,
>>> >  i i
>>> > where ((t1.textid = i.intid) and (t1.textid = 12));
>>> > -- Buggy query
>>> > select distinct t1.textid a, i.intid b
>>> > from t t1,
>>> >  i i,
>>> >  t vucjp
>>> > where ((t1.textid = i.intid) and (t1.textid = 12) and (t1.textid =
>>> > vucjp.textid) and (t1.textid = vucjp.textid));
>>> >
>>> > The results for the two queries should be the same, but the result
>>for
>>> the
>>> > first one is 12|12 and for the second one is 12|12, 34|12.
>>> I get 12|12 for both query.
>>>
>>> 18:00:22.46>sqlite3
>>> SQLite version 3.30.1 2019-10-10 20:19:45
>>> Enter ".help" for usage hints.
>>> Connected to a transient in-memory database.
>>> Use ".open FILENAME" to reopen on a persistent database.
>>> sqlite> CREATE TABLE t (
>>>...>   textid TEXT
>>>...> );
>>> sqlite> INSERT INTO t
>>>...> VALUES ('12');
>>> sqlite> INSERT INTO t
>>>...> VALUES ('34');
>>> sqlite> CREATE TABLE i (
>>>...>   intid INTEGER PRIMARY KEY
>>>...> );
>>> sqlite> INSERT INTO i
>>>...> VALUES (12);
>>> sqlite> INSERT INTO i
>>>...> VALUES (

Re: [sqlite] Report bug found in SQLite version 3.31.1

2020-03-02 Thread Keith Medcalf

No reproduco

SQLite version 3.32.0 2020-03-02 22:04:51
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> CREATE TABLE t (
   ...>   textid TEXT
   ...> );
sqlite> INSERT INTO t
   ...> VALUES ('12');
sqlite> INSERT INTO t
   ...> VALUES ('34');
sqlite> CREATE TABLE i (
   ...>   intid INTEGER PRIMARY KEY
   ...> );
sqlite> INSERT INTO i
   ...> VALUES (12);
sqlite> INSERT INTO i
   ...> VALUES (34);
sqlite> CREATE TABLE e (
   ...>   x INTEGER PRIMARY KEY NOT NULL,
   ...>   y TEXT NOT NULL
   ...> );
sqlite> -- Original query
sqlite> select t1.textid a, i.intid b
   ...> from t t1,
   ...>  i i
   ...> where ((t1.textid = i.intid) and (t1.textid = 12));
12|12
sqlite> -- Buggy query
sqlite> select distinct t1.textid a, i.intid b
   ...> from t t1,
   ...>  i i,
   ...>  t vucjp
   ...> where ((t1.textid = i.intid) and (t1.textid = 12) and (t1.textid =
   ...> vucjp.textid) and (t1.textid = vucjp.textid));
12|12
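For anyone who wants to replay the report, the schema and both queries can be driven from Python's bundled sqlite3 module (a sketch of my own, not part of the report; with any recent SQLite both queries return the single row 12|12, because comparing the TEXT column to the INTEGER column applies numeric affinity to the text value):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (textid TEXT);
INSERT INTO t VALUES ('12'), ('34');
CREATE TABLE i (intid INTEGER PRIMARY KEY);
INSERT INTO i VALUES (12), (34);
""")

# Original query: TEXT '12' matches INTEGER 12 via numeric affinity.
original = con.execute(
    "SELECT t1.textid, i.intid FROM t t1, i "
    "WHERE t1.textid = i.intid AND t1.textid = 12").fetchall()

# "Buggy" query: the extra self-join should not change the result set.
buggy = con.execute(
    "SELECT DISTINCT t1.textid, i.intid FROM t t1, i, t vucjp "
    "WHERE t1.textid = i.intid AND t1.textid = 12 "
    "AND t1.textid = vucjp.textid").fetchall()

print(original, buggy)
```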



-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Xinyue Chen
>Sent: Monday, 2 March, 2020 16:41
>To: SQLite mailing list 
>Subject: Re: [sqlite] Report bug found in SQLite version 3.31.1
>
>Hi josé,
>
>This bug is found in 3.31.1 but you are running it in 3.30.1.
>
>Best,
>Xinyue Chen
>
>On Mon, Mar 2, 2020 at 3:36 PM Jose Isaias Cabrera 
>wrote:
>
>> Xinyue Chen, on Monday, March 2, 2020 06:21 PM, wrote...
>> >
>> > Hi,
>> >
>> > I found a bug in the most recent SQLite release version 3.31.1
>> 2020-01-27.
>> > My initial test environment is macOS 10.14.6 (18G87) and I have
>tested in
>> > https://sqliteonline.com/.
>> >
>> > CREATE TABLE t (
>> >   textid TEXT
>> > );
>> > INSERT INTO t
>> > VALUES ('12');
>> > INSERT INTO t
>> > VALUES ('34');
>> > CREATE TABLE i (
>> >   intid INTEGER PRIMARY KEY
>> > );
>> > INSERT INTO i
>> > VALUES (12);
>> > INSERT INTO i
>> > VALUES (34);
>> > CREATE TABLE e (
>> >   x INTEGER PRIMARY KEY NOT NULL,
>> >   y TEXT NOT NULL
>> > );
>> > -- Original query
>> > select t1.textid a, i.intid b
>> > from t t1,
>> >  i i
>> > where ((t1.textid = i.intid) and (t1.textid = 12));
>> > -- Buggy query
>> > select distinct t1.textid a, i.intid b
>> > from t t1,
>> >  i i,
>> >  t vucjp
>> > where ((t1.textid = i.intid) and (t1.textid = 12) and (t1.textid =
>> > vucjp.textid) and (t1.textid = vucjp.textid));
>> >
>> > The results for the two queries should be the same, but the result
>for
>> the
>> > first one is 12|12 and for the second one is 12|12, 34|12.
>> I get 12|12 for both queries.
>>
>> 18:00:22.46>sqlite3
>> SQLite version 3.30.1 2019-10-10 20:19:45
>> Enter ".help" for usage hints.
>> Connected to a transient in-memory database.
>> Use ".open FILENAME" to reopen on a persistent database.
>> sqlite> CREATE TABLE t (
>>...>   textid TEXT
>>...> );
>> sqlite> INSERT INTO t
>>...> VALUES ('12');
>> sqlite> INSERT INTO t
>>...> VALUES ('34');
>> sqlite> CREATE TABLE i (
>>...>   intid INTEGER PRIMARY KEY
>>...> );
>> sqlite> INSERT INTO i
>>...> VALUES (12);
>> sqlite> INSERT INTO i
>>...> VALUES (34);
>> sqlite> CREATE TABLE e (
>>...>   x INTEGER PRIMARY KEY NOT NULL,
>>...>   y TEXT NOT NULL
>>...> );
>> sqlite> -- Original query
>> sqlite> select t1.textid a, i.intid b
>>...> from t t1,
>>...>  i i
>>...> where ((t1.textid = i.intid) and (t1.textid = 12));
>> 12|12
>> sqlite> -- Buggy query
>> sqlite> select distinct t1.textid a, i.intid b
>>...> from t t1,
>>...>  i i,
>>...>  t vucjp
>>...> where ((t1.textid = i.intid) and (t1.textid = 12) and
>(t1.textid =
>>...> vucjp.textid) and (t1.textid = vucjp.textid));
>> 12|12
>> sqlite>
>>
>> Maybe I am not following your steps, but as you can see above, I am
>> getting the same for both.
>>
>> josé
>> ___
>> sqlite-users mailing list
>> sqlite-users@mailinglists.sqlite.org
>> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>>
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to enforce a specific order of group_concat?

2020-03-02 Thread Keith Medcalf

On Monday, 2 March, 2020 09:20, Dominique Devienne  wrote:

>On Mon, Mar 2, 2020 at 5:09 PM Keith Medcalf  wrote:

>> select group_concat(value) from (select distinct value from test order by 
>> value);

>But is that guaranteed to be ordered correctly "forever" instead of by
>"happenstance" from current implementation details? 

>My point was that the Window Function version is ordered "by design", 
>and not an implementation detail (as I think
>the simpler version is).

>Your subquery returns rows in a given order too, but "who" says
>they'll be processed in that order?

>Tables are just "sets of rows" after all, and the relational model is
>about set-theory, no? order by in subquery therefore makes little to no
>sense in nested SQL (in theory...). --DD

Well, in theory an order by in a nested select means that the result of the 
operation is an ordered projection and not merely a set of rows.  For this 
particular case (a nested select with an order by and the outer query with an 
aggregate) the query will not be flattened (#16)

https://sqlite.org/optoverview.html#flattening

Consider that

create table t (x,y);
insert into t values (1,1),(2,2),(3,3),(4,1),(4,2),(4,3),(4,4);
select x,y from (select x, y from t order by y) order by x;

will do two order-by sorts to obtain the result even though the query could be 
(in this particular case) re-written as "select x, y from t order by x, y" 
because a query with an order-by in both the outer and nested query cannot be 
flattened (#11).
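A quick way to convince yourself that the nested form returns the same rows (as a set) as the single ORDER BY x, y, sketched with Python's sqlite3 module; note the outer sort orders on x alone, so the relative order of equal-x rows in the nested form is an implementation detail:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (x, y);
INSERT INTO t VALUES (1,1),(2,2),(3,3),(4,1),(4,2),(4,3),(4,4);
""")

# Two sorts: the inner ORDER BY y, then the outer ORDER BY x.
nested = con.execute(
    "SELECT x, y FROM (SELECT x, y FROM t ORDER BY y) ORDER BY x").fetchall()

# Single sort over both keys.
flat = con.execute("SELECT x, y FROM t ORDER BY x, y").fetchall()

print(sorted(nested) == flat)
```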

This is why putting an "order by" in a view will usually preclude query 
flattening because the view is not merely producing a "set of rows" it is 
producing an "ordered projection" and the ordering must be significant else it 
would not be there.

Of course in the case of the original statement:

select group_concat(distinct value) from test order by value;

The "order by value" might in fact cause the use of an index on value to scan, rather than the table test, if that results in less I/O than scanning the table.  However, the Query Planner believes that aggregate function results (including group_concat) are commutative, so it determines that the "order by value" clause is superfluous.  (Perhaps this is an error, since other functions such as sum(), total(), and avg() can also be non-commutative in certain pathological cases, with results that vary depending on the ordering of the data sent to them.  This is especially true of avg(), since it merely returns sum()/count() rather than a successive approximation to the mean; successive approximation still has pathological cases for non-sorted input, but they are fewer than with the sum()/count() method.)
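A concrete illustration of the pathological case: floating-point addition is not associative, so a sum (and hence an avg computed as sum()/count()) can depend on the order in which values arrive.  A minimal Python sketch:

```python
# Adding 1.0 to 1e16 is lost to rounding (the spacing between doubles
# near 1e16 is 2.0), so the running total depends on visit order.
xs = [1e16, 1.0, 1.0]

forward = 0.0
for v in xs:            # 1e16 first: both 1.0s are rounded away
    forward += v

backward = 0.0
for v in reversed(xs):  # small values first: they survive as 2.0
    backward += v

print(forward, backward)
```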

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.






Re: [sqlite] How to enforce a specific order of group_concat?

2020-03-02 Thread Keith Medcalf

You mean like:

select group_concat(value) over (order by value rows between unbounded 
preceding and unbounded following) from (select distinct value from test) limit 
1;
and
select group_concat(value) over (order by value desc rows between unbounded 
preceding and unbounded following) from (select distinct value from test) limit 
1;

which seems far more convoluted than just:

select group_concat(value) from (select distinct value from test order by 
value);
and
select group_concat(value) from (select distinct value from test order by value 
desc);
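Both formulations can be checked side by side from Python's sqlite3 module (a sketch; the table and data here are invented for the demo, and the window form needs SQLite 3.25 or later):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE test (value TEXT);
INSERT INTO test VALUES ('Alpha'), ('Beta'), ('Beta'), ('Alpha');
""")

# Window-function form: every row sees the whole ordered partition.
windowed = con.execute(
    "SELECT group_concat(value) OVER (ORDER BY value ROWS BETWEEN "
    "UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) "
    "FROM (SELECT DISTINCT value FROM test) LIMIT 1").fetchone()[0]

# Ordered-subquery form.
subquery = con.execute(
    "SELECT group_concat(value) "
    "FROM (SELECT DISTINCT value FROM test ORDER BY value)").fetchone()[0]

print(windowed, subquery)
```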


-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Dominique Devienne
>Sent: Monday, 2 March, 2020 08:02
>To: SQLite mailing list 
>Subject: Re: [sqlite] How to enforce a specific order of group_concat?
>
>On Sun, Mar 1, 2020 at 10:58 PM mailing lists 
>wrote:
>> Are there any other solutions / possibilities?
>
>I thought someone more knowledgeable than I about Window Functions [1]
>would answer,
>but since nobody mentioned them so far, I'll do it, as I believe this
>is the "SQL native" way
>to achieve what you want (modulo DISTINCT perhaps). Notably (from the
>doc):
>
>Every aggregate window function can also work as a ordinary aggregate
>function,
>simply by omitting the OVER and FILTER clauses. Furthermore, all of
>the built-in aggregate
>functions of SQLite can be used as an aggregate window function by
>adding an appropriate OVER clause
>
>[2] has an example with group_concat() and OVER (ORDER BY ...). I
>assume that's what you need,
>someone better at Window Functions then me (not difficult!) can
>confirm or not that. --DD
>
>[1] https://www.sqlite.org/windowfunctions.html
>[2] https://www.sqlite.org/windowfunctions.html#aggwinfunc
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users





Re: [sqlite] How to enforce a specific order of group_concat?

2020-03-01 Thread Keith Medcalf

On Sunday, 1 March, 2020 14:58, mailing lists  wrote:

>Assume I create the following table:

>CREATE TABLE Test (ID INTEGER PRIMARY KEY, Value TEXT);
>INSERT INTO Test (Value) VALUES('Alpha');
>INSERT INTO Test (Value) VALUES('Beta');
>INSERT INTO Test (Value) VALUES('Beta');
>INSERT INTO Test (Value) VALUES('Alpha');

>According to the documentation of group_concat the order is undefined,
>indeed:

>SELECT group_concat(DISTINCT Value) FROM Test ORDER BY Value ASC;
>SELECT group_concat(DISTINCT Value) FROM Test ORDER BY Value DESC;

>Both queries result in Alpha,Beta.

>Changing the queries to

>WITH Result(x) AS (SELECT DISTINCT Value FROM Test ORDER BY Value ASC)
>SELECT group_concat(x) FROM Result;
>WITH Result(x) AS (SELECT DISTINCT Value FROM Test ORDER BY Value DESC)
>SELECT group_concat(x) FROM Result;

>leads to the results Alpha,Beta, respectively Beta,Alpha.

>Is this a coincidence or is this guaranteed to work?

>Are there any other solutions / possibilities?

group_concat builds a "group" by concatenating the values sent to it in the 
order they are sent.  If you do not know this order then for all intents and 
purposes the order is "undefined" because it is defined as the order in which 
the query planner decides to visit the rows forming the group.  SQLite3 
believes that all aggregate functions are commutative and that the order in 
which rows are fed into them is immaterial to the result and there (presently) 
is no way to specify that this is not the case.

So in the rare case where the aggregate is not commutative and you depend on 
the presentation order, then you must specify it.  The only built-in aggregate 
that is not commutative is the group_concat function.  If you were to write 
another non-commutative aggregate function, let's say SHA1(...), that computed 
the SHA1 hash of the values fed into it, you would also have to control the 
presentation order or the result would be "undefined".

select group_concat(value) from (select distinct value from test order by value 
desc); will do that.  (rephrasing as a CTE makes no difference)

This works because the query as phrased cannot be flattened since the outer 
query contains an aggregate and the inner query contains an order by.

Moving the distinct does not alter the fact that the query cannot be flattened.

select group_concat(distinct value) from (select value from test order by value 
desc);

Whether the query planner will always decline to flatten a query where the outer 
query contains an aggregate and the inner query contains an order by is something 
on which I cannot comment, other than to say that it has not flattened such a 
query up to now.
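The two phrasings can be exercised with the original poster's data from Python's sqlite3 module (a sketch of my own; as discussed, this relies on the query not being flattened, which is current behaviour rather than a documented guarantee):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Test (ID INTEGER PRIMARY KEY, Value TEXT);
INSERT INTO Test (Value) VALUES ('Alpha'), ('Beta'), ('Beta'), ('Alpha');
""")

# Ascending presentation order.
asc = con.execute(
    "SELECT group_concat(Value) FROM "
    "(SELECT DISTINCT Value FROM Test ORDER BY Value)").fetchone()[0]

# Descending presentation order.
desc = con.execute(
    "SELECT group_concat(Value) FROM "
    "(SELECT DISTINCT Value FROM Test ORDER BY Value DESC)").fetchone()[0]

print(asc, desc)
```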

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.






Re: [sqlite] Select statement with ORDER BY specified by column value

2020-02-29 Thread Keith Medcalf

SELECT prop_value FROM Table1 WHERE obj_id=10 AND prop_key='key1' AND 
(prop_tag='ios' OR prop_tag='*') ORDER BY prop_tag == 'ios' DESC LIMIT 1;

You want to order by prop_tag == 'ios' in DESCENDING order.  That is, the true 
(1) before the false (0); the default ascending sort would put the false (0) 
before the true (1).  This will work because prop_tag cannot be null.  If it 
could be, you would have to deal with nulls explicitly: nulls sort as smaller 
than every other value, before the ascending or descending direction is even 
considered, unless you do something about it.
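Sketched end to end with Python's sqlite3 module (the two sample rows are mine, invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Table1 (id INTEGER PRIMARY KEY AUTOINCREMENT, obj_id INTEGER,
  prop_key TEXT, prop_value TEXT, prop_tag TEXT DEFAULT '*',
  UNIQUE(obj_id, prop_key, prop_tag));
INSERT INTO Table1 (obj_id, prop_key, prop_value, prop_tag)
  VALUES (10, 'key1', 'generic', '*'),
         (10, 'key1', 'ios-specific', 'ios');
""")

# The boolean expression is 1 for the 'ios' row; DESC puts it first.
row = con.execute(
    "SELECT prop_value FROM Table1 WHERE obj_id=10 AND prop_key='key1' "
    "AND (prop_tag='ios' OR prop_tag='*') "
    "ORDER BY prop_tag = 'ios' DESC LIMIT 1").fetchone()[0]

print(row)
```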

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Marco Bambini
>Sent: Saturday, 29 February, 2020 01:38
>To: SQLite mailing list 
>Subject: [sqlite] Select statement with ORDER BY specified by column
>value
>
>Hi all,
>
>Is there a way to specify an ORDER BY clause by column value?
>
>I have a table declared as:
>CREATE TABLE Table1 (id INTEGER PRIMARY KEY AUTOINCREMENT, obj_id
>INTEGER, prop_key TEXT, prop_value TEXT, prop_tag TEXT DEFAULT '*',
>UNIQUE(obj_id, prop_key, prop_tag))
>
>and a sample query:
>SELECT prop_value FROM Table1 WHERE obj_id=10 AND prop_key='key1' AND
>(prop_tag='ios' OR prop_tag='*') ORDER BY (prop_tag='ios') LIMIT 1;
>
>I would like to prioritise results based on the fact that the prop_tag
>column is 'ios'.
>
>Thanks.
>--
>Marco Bambini
>https://www.sqlabs.com
>
>
>
>
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users





Re: [sqlite] Intersecting multiple queries

2020-02-28 Thread Keith Medcalf

Of course, you could just as well do:

select * from data where (...) AND (...) AND (...) ;

and do away with all the extraneous stuff.

In other words, why would one want to do:

select * from data where uuid in (select uuid from data where twit == 1 
INTERSECT select uuid from data where twat == 1 INTERSECT select uuid from data 
where lastname like 'cricket%' INTERSECT select uuid from data where not 
firstname like 'jimmy%')

when one could simply forgo all the crap and do it directly:

select * from data where (twit == 1) and (twat == 1) and (lastname like 
'cricket%') and (not firstname like 'jimmy%');
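Both spellings are easy to compare from Python's sqlite3 module (a sketch; the column set and rows are invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE data (uuid TEXT PRIMARY KEY, twit INT, twat INT,
  firstname TEXT, lastname TEXT);
INSERT INTO data VALUES
  ('a', 1, 1, 'alice', 'cricketer'),
  ('b', 1, 1, 'jimmy', 'cricketer'),
  ('c', 1, 0, 'carol', 'cricketfan'),
  ('d', 0, 1, 'dave',  'cricketer');
""")

# The IN (... INTERSECT ...) formulation.
via_intersect = con.execute("""
  SELECT * FROM data WHERE uuid IN (
    SELECT uuid FROM data WHERE twit = 1
    INTERSECT SELECT uuid FROM data WHERE twat = 1
    INTERSECT SELECT uuid FROM data WHERE lastname LIKE 'cricket%'
    INTERSECT SELECT uuid FROM data WHERE NOT firstname LIKE 'jimmy%')
""").fetchall()

# The direct conjunction of the same predicates.
direct = con.execute(
    "SELECT * FROM data WHERE twit = 1 AND twat = 1 "
    "AND lastname LIKE 'cricket%' AND NOT firstname LIKE 'jimmy%'").fetchall()

print(via_intersect == direct)
```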




-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: Keith Medcalf 
>Sent: Friday, 28 February, 2020 17:37
>To: 'SQLite mailing list' 
>Subject: RE: [sqlite] Intersecting multiple queries
>
>
>select stuff from data where uuid in (select uuid from data where ...
>INTERSECT select uuid from data where ... INTERSECT select uuid from data
>where ... );
>
>--
>The fact that there's a Highway to Hell but only a Stairway to Heaven
>says a lot about anticipated traffic volume.
>
>>-Original Message-
>>From: sqlite-users  On
>>Behalf Of Hamish Allan
>>Sent: Friday, 28 February, 2020 16:02
>>To: SQLite mailing list 
>>Subject: [sqlite] Intersecting multiple queries
>>
>>Hi,
>>
>>I am building a list of UUIDs from multiple queries of the form:
>>
>>SELECT uuid FROM Data WHERE filter LIKE ?
>>
>>with a different bound parameter each time.
>>
>>In app-space code, I'm getting the results of these queries and
>>intersecting them, so that the final list contains only UUIDs returned
>by
>>all of the queries.
>>
>>I'm then populating a temporary table so that a further query can return
>>only rows matching those UUIDs.
>>
>>What I'm wondering is if there's a shortcut to avoid having to build the
>>UUID list in app code -- whether I can perform an intersection in a
>query
>>to build that temporary table without involving app-space code. Or even
>>avoid having to build the temporary table at all?
>>
>>Many thanks,
>>Hamish
>>___
>>sqlite-users mailing list
>>sqlite-users@mailinglists.sqlite.org
>>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users





Re: [sqlite] Intersecting multiple queries

2020-02-28 Thread Keith Medcalf

select stuff from data where uuid in (select uuid from data where ... INTERSECT 
select uuid from data where ... INTERSECT select uuid from data where ... 
);

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Hamish Allan
>Sent: Friday, 28 February, 2020 16:02
>To: SQLite mailing list 
>Subject: [sqlite] Intersecting multiple queries
>
>Hi,
>
>I am building a list of UUIDs from multiple queries of the form:
>
>SELECT uuid FROM Data WHERE filter LIKE ?
>
>with a different bound parameter each time.
>
>In app-space code, I'm getting the results of these queries and
>intersecting them, so that the final list contains only UUIDs returned by
>all of the queries.
>
>I'm then populating a temporary table so that a further query can return
>only rows matching those UUIDs.
>
>What I'm wondering is if there's a shortcut to avoid having to build the
>UUID list in app code -- whether I can perform an intersection in a query
>to build that temporary table without involving app-space code. Or even
>avoid having to build the temporary table at all?
>
>Many thanks,
>Hamish
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users





Re: [sqlite] After column add, what should be done to update the schema?

2020-02-27 Thread Keith Medcalf

Probably a bug.  SQLiteExpert does not even run on my computer.  It just farts 
in the wind and does not even bother to log or show an error message.  Maybe it 
tries to access some internal Microsoft Spying mechanism that I have disabled.

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Andy KU7T
>Sent: Thursday, 27 February, 2020 17:03
>To: SQLite mailing list 
>Subject: [sqlite] After column add, what should be done to update the
>schema?
>
>Hi,
>
>I use a simple ALTER TABLE ADD COLUMN statement. However, when I use the
>Sqlite Expert, the DDL is not reflected. Even though the column is
>created. I read somewhere that the DDL is really just a copy of the text
>when the table was created. Is there a way to refresh this?
>
>Thanks,
>Andy
>
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users





Re: [sqlite] Fwd: inserting new data only

2020-02-25 Thread Keith Medcalf

I keep forgetting that the min/max optimization is not applied at the same time 
as retrieving other data from a table, so if you actually want to optimize the 
generated plan, you need to use the following trigger:

create trigger data_insert before insert on data
begin
select raise(ABORT, 'Out of order insert')
 where julianday(new.key) <= coalesce((select julianday(max(key)) from 
data), -1);
select raise(IGNORE)
 where new.data IS (select data from data order by key desc limit 1);
end;

The query optimizer is smart enough to recognize that when max(key) is used by 
itself and an index is available on key, the index can be used to locate the row 
containing the max key (it is the last index entry, so there is no need to look 
any further).  However, if you select max(key), data from the table, that 
optimization is not applied and the whole table is scanned looking for the 
max(key) even though a suitable index exists; you have to spell out the index 
access yourself, ordering by key descending and taking only the first result.
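The single-trigger version above can be exercised from Python's sqlite3 module (harness and sample rows are mine; a repeat of the last value is silently dropped by RAISE(IGNORE), while an out-of-order key raises):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE data (key TEXT PRIMARY KEY, data INTEGER NOT NULL) WITHOUT ROWID;
CREATE TRIGGER data_insert BEFORE INSERT ON data
BEGIN
  SELECT RAISE(ABORT, 'Out of order insert')
   WHERE julianday(NEW.key) <=
         coalesce((SELECT julianday(max(key)) FROM data), -1);
  SELECT RAISE(IGNORE)
   WHERE NEW.data IS (SELECT data FROM data ORDER BY key DESC LIMIT 1);
END;
""")

# '10:37' repeats the last value (15) and is silently skipped.
for k, v in [('10:32', 12), ('10:35', 15), ('10:37', 15), ('10:39', 13)]:
    con.execute("INSERT INTO data VALUES (?, ?)", (k, v))

rows = con.execute("SELECT * FROM data").fetchall()
print(rows)

# An out-of-order key trips the ABORT branch.
err = None
try:
    con.execute("INSERT INTO data VALUES ('10:32', 99)")
except sqlite3.IntegrityError as exc:
    err = exc
print(err)
```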

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Keith Medcalf
>Sent: Tuesday, 25 February, 2020 14:44
>To: SQLite mailing list 
>Subject: Re: [sqlite] Fwd: inserting new data only
>
>
>If you are going to do it in all in one insert statement rather than
>using a before trigger to throw an error (ie, you want to silently ignore
>out-of-order inserts) then the following is slightly more efficient since
>the query planner appears to materialize the search so only does it once:
>
>with old(key, data)
>  as (
>  select coalesce(julianday(max(key)), -1),
> data
>from data
> ),
> new(key, data)
>  as (
>values (?, ?)
> )
>insert into data
> select new.key, new.data
>   from new, old
>  where new.data IS NOT old.data
>and julianday(new.key) > old.key;
>
>However, without the trigger the database will not enforce the
>monotonicity of the insert timestamps ...
>
>Note that you could do the whole thing in before trigger's which would
>mean you just use a regular old insert and the triggers do all the work,
>and then the database would entirely enforce its own integrity and rules
>... no matter who or what was trying to insert records ...
>
>create table data
>(
>keytext primary key,
>data   integer not null
>)
>without rowid;
>
>create trigger data_prohibit_oo_inserts before insert on data
>  when julianday(new.key) <= (select julianday(max(key)) from data)
>begin
>select raise(ABORT, 'Out of order insert');
>end;
>
>create trigger data_prohibit_duplicates before insert on data
>  when new.data IS (select data from (select max(key), data from data))
>begin
>select raise(IGNORE);
>end;
>
>
>-- insert into data values (?, ?);
>
>insert into data values ('10:32', 12);
>insert into data values ('10:35', 15);
>insert into data values ('10:37', 15);
>insert into data values ('10:39', 13);
>insert into data values ('10:43', 13);
>insert into data values ('10:46', 18);
>
>select * from data;
>10:32|12
>10:35|15
>10:39|13
>10:46|18
>
>insert into data values ('10:32', 12);
>Error: near line 33: Out of order insert
>insert into data values ('10:35', 15);
>Error: near line 34: Out of order insert
>insert into data values ('10:37', 15);
>Error: near line 35: Out of order insert
>insert into data values ('10:39', 13);
>Error: near line 36: Out of order insert
>insert into data values ('10:43', 13);
>Error: near line 37: Out of order insert
>insert into data values ('10:46', 18);
>
>select * from data;
>10:32|12
>10:35|15
>10:39|13
>10:46|18
>
>You could even do that with just one before trigger ...
>
>create table data
>(
>keytext primary key,
>data   integer not null
>)
>without rowid;
>
>create trigger data_insert before insert on data
>begin
>select raise(ABORT, 'Out of order insert')
> where julianday(new.key) <= coalesce((select julianday(max(key))
>from data), -1);
>select raise(IGNORE)
> where new.data IS (select data from (select max(key), data from
>data));
>end;
>
>-- insert into data values (?, ?);
>
>insert into data values ('10:32', 12);
>insert into data values ('10:35', 15);
>insert into data values ('10:37', 15);
>insert into data values ('10:39', 13);
>insert into data values ('10:43', 13);
>insert into data values ('10:46', 18);
>
>select * from data;
>10:32|12
>10:35|15
>10:39|13
>10:46|18
>
>insert into data values ('10:32', 12);
>Error: near line 28: Out of order insert

Re: [sqlite] Fwd: inserting new data only

2020-02-25 Thread Keith Medcalf

If you are going to do it all in one insert statement rather than using a 
before trigger to throw an error (ie, you want to silently ignore out-of-order 
inserts) then the following is slightly more efficient since the query planner 
appears to materialize the search so only does it once:

with old(key, data)
  as (
  select coalesce(julianday(max(key)), -1),
 data
from data
 ),
 new(key, data)
  as (
values (?, ?)
 )
insert into data
 select new.key, new.data
   from new, old
  where new.data IS NOT old.data
and julianday(new.key) > old.key;

However, without the trigger the database will not enforce the monotonicity of 
the insert timestamps ...

Note that you could do the whole thing in before triggers, which would mean you 
just use a regular old insert and the triggers do all the work, and then the 
database would entirely enforce its own integrity and rules ... no matter who 
or what was trying to insert records ...

create table data
(
keytext primary key,
data   integer not null
)
without rowid;

create trigger data_prohibit_oo_inserts before insert on data
  when julianday(new.key) <= (select julianday(max(key)) from data)
begin
select raise(ABORT, 'Out of order insert');
end;

create trigger data_prohibit_duplicates before insert on data
  when new.data IS (select data from (select max(key), data from data))
begin
select raise(IGNORE);
end;


-- insert into data values (?, ?);

insert into data values ('10:32', 12);
insert into data values ('10:35', 15);
insert into data values ('10:37', 15);
insert into data values ('10:39', 13);
insert into data values ('10:43', 13);
insert into data values ('10:46', 18);

select * from data;
10:32|12
10:35|15
10:39|13
10:46|18

insert into data values ('10:32', 12);
Error: near line 33: Out of order insert
insert into data values ('10:35', 15);
Error: near line 34: Out of order insert
insert into data values ('10:37', 15);
Error: near line 35: Out of order insert
insert into data values ('10:39', 13);
Error: near line 36: Out of order insert
insert into data values ('10:43', 13);
Error: near line 37: Out of order insert
insert into data values ('10:46', 18);

select * from data;
10:32|12
10:35|15
10:39|13
10:46|18

You could even do that with just one before trigger ...

create table data
(
keytext primary key,
data   integer not null
)
without rowid;

create trigger data_insert before insert on data
begin
select raise(ABORT, 'Out of order insert')
 where julianday(new.key) <= coalesce((select julianday(max(key)) from 
data), -1);
select raise(IGNORE)
 where new.data IS (select data from (select max(key), data from data));
end;

-- insert into data values (?, ?);

insert into data values ('10:32', 12);
insert into data values ('10:35', 15);
insert into data values ('10:37', 15);
insert into data values ('10:39', 13);
insert into data values ('10:43', 13);
insert into data values ('10:46', 18);

select * from data;
10:32|12
10:35|15
10:39|13
10:46|18

insert into data values ('10:32', 12);
Error: near line 28: Out of order insert
insert into data values ('10:35', 15);
Error: near line 29: Out of order insert
insert into data values ('10:37', 15);
Error: near line 30: Out of order insert
insert into data values ('10:39', 13);
Error: near line 31: Out of order insert
insert into data values ('10:43', 13);
Error: near line 32: Out of order insert
insert into data values ('10:46', 18);
Error: near line 33: Out of order insert

select * from data;
10:32|12
10:35|15
10:39|13
10:46|18

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Keith Medcalf
>Sent: Tuesday, 25 February, 2020 13:15
>To: SQLite mailing list 
>Subject: Re: [sqlite] Fwd: inserting new data only
>
>
>On Tuesday, 25 February, 2020 12:23, Przemek Klosowski
> wrote:
>
>>On Tue, Feb 25, 2020 at 1:18 PM Keith Medcalf 
>wrote:
>
>>> create table data
>>> (
>>> keytext primary key,
>>> data   integer not null
>>> )
>>> without rowid;
>>>
>>> -- insert into data select ?, ? as value where value IS NOT (select
>>data from (select max(key), data from data));
>>>..
>>> Constraints:
>>>
>>> (1) Will only work for appending data (new key > all keys in table)
>>> (2) Types of key and data are immaterial as long as you are only
>>inserting (appending) new keys.
>
>>Awesome---exactly what's needed.
>>The monotonicity of the time key variable is assured by how the data
>>is collected---but  is there a way to express that in sqlite?
>>create table data (
>>  key text primary key check
>(julianday(key) > julianday(select max(key) from data),
>  data integer not null);

Re: [sqlite] Fwd: inserting new data only

2020-02-25 Thread Keith Medcalf

On Tuesday, 25 February, 2020 12:23, Przemek Klosowski 
 wrote:

>On Tue, Feb 25, 2020 at 1:18 PM Keith Medcalf  wrote:

>> create table data
>> (
>> keytext primary key,
>> data   integer not null
>> )
>> without rowid;
>>
>> -- insert into data select ?, ? as value where value IS NOT (select
>data from (select max(key), data from data));
>>..
>> Constraints:
>>
>> (1) Will only work for appending data (new key > all keys in table)
>> (2) Types of key and data are immaterial as long as you are only
>inserting (appending) new keys.

>Awesome---exactly what's needed.
>The monotonicity of the time key variable is assured by how the data
>is collected---but  is there a way to express that in sqlite?
>create table data (
>  key text primary key check
>(julianday(key) > julianday(select max(key) from data),
>  data integer not null);

You cannot do this with a CHECK constraint since check constraints cannot 
execute select statements (check constraints should be table invariant -- 
meaning that they must return the same result no matter what data is in the 
table or other tables, so only act to validate the data on the current row).  

This would be a case for a before insert trigger to prohibit the insert before 
it is performed (an after trigger would also work, but that would fire after 
the row is already inserted and would work by doing a statement rollback to 
delete the inserted row, so you want to avoid the row insertion completely, and 
the way to do that is with a before trigger):

create trigger data_prohibit_oo_inserts before insert on data 
  when julianday(new.key) <= (select julianday(max(key)) from data)
begin
select raise(ABORT, 'Out of order insert');
end;

This means a lookup and check after the record insertion is computed, if a 
record is to be inserted, but the btree will already be in memory and will have 
already traversed to the last entry, so this will consume CPU only, and very 
little at that.

create table data
(
keytext primary key,
data   integer not null
)
without rowid;

create trigger data_prohibit_oo_inserts before insert on data
  when julianday(new.key) <= (select julianday(max(key)) from data)
begin
select raise(ABORT, 'Out of order insert');
end;


-- insert into data select ?, ? as value where value != (select value from 
(select max(key), value from data));

insert into data select '10:32', 12 as value where value IS NOT (select data 
from (select max(key), data from data));
insert into data select '10:35', 15 as value where value IS NOT (select data 
from (select max(key), data from data));
insert into data select '10:37', 15 as value where value IS NOT (select data 
from (select max(key), data from data));
insert into data select '10:39', 13 as value where value IS NOT (select data 
from (select max(key), data from data));
insert into data select '10:43', 13 as value where value IS NOT (select data 
from (select max(key), data from data));
insert into data select '10:46', 18 as value where value IS NOT (select data 
from (select max(key), data from data));

select * from data;
10:32|12
10:35|15
10:39|13
10:46|18

insert into data select '10:32', 12 as value where value IS NOT (select data 
from (select max(key), data from data));
Error: near line 27: Out of order insert
insert into data select '10:35', 15 as value where value IS NOT (select data 
from (select max(key), data from data));
Error: near line 28: Out of order insert
insert into data select '10:37', 15 as value where value IS NOT (select data 
from (select max(key), data from data));
Error: near line 29: Out of order insert
insert into data select '10:39', 13 as value where value IS NOT (select data 
from (select max(key), data from data));
Error: near line 30: Out of order insert
insert into data select '10:43', 13 as value where value IS NOT (select data 
from (select max(key), data from data));
Error: near line 31: Out of order insert
insert into data select '10:46', 18 as value where value IS NOT (select data 
from (select max(key), data from data));

select * from data;
10:32|12
10:35|15
10:39|13
10:46|18

You could also change the insert like this:

insert into data 
 select ? as key, ? as value 
  where value IS NOT (select data from (select max(key), data from data))
and (julianday(key) > coalesce((select julianday(max(key)) from 
data),-1));

but that would just silently ignore the out-of-order row rather than raising 
an error.  (You need the coalesce because "select julianday(max(key)) from 
data" can be NULL, and it must be non-NULL for the > expression to return a 
non-NULL result:  anything > NULL returns NULL, which is FALSE in a WHERE 
clause, and NOT does not help since "NOT (anything > NULL)" is still NULL.  
The trigger does not have this problem because a NULL WHEN result simply 
means the trigger does not fire.)
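The NULL-comparison behaviour described above is easy to demonstrate from any SQLite binding; here is a minimal sketch using Python's built-in sqlite3 module:

```python
import sqlite3

db = sqlite3.connect(":memory:")

# anything > NULL yields NULL, which a WHERE clause treats as false
gt_null = db.execute("SELECT 1 > NULL").fetchone()[0]

# NOT does not rescue it: NOT (anything > NULL) is still NULL
not_gt_null = db.execute("SELECT NOT (1 > NULL)").fetchone()[0]

# coalesce substitutes a sentinel, so the comparison is always non-NULL
fixed = db.execute("SELECT 1 > coalesce(NULL, -1)").fetchone()[0]
```

Both `gt_null` and `not_gt_null` come back as NULL (Python `None`), while the coalesced comparison evaluates to true.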

Re: [sqlite] Fwd: inserting new data only

2020-02-25 Thread Keith Medcalf

Note that this will work for discrete data from a sensor but will not properly 
historize continuous data.  That is, if what you are historizing is process 
data such as a temperature, this will not permit you to re-create the original 
engineering data.  For that you need to allow the last duplicate to be stored 
and also store the computed slope to prior with each append -- in that case 
triggers would be the only way to do it.
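As a rough illustration of the slope-to-prior idea (this is not code from the thread; the table and trigger names are invented), a trigger can compute and store the slope from the previous sample on each append:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE hist (key REAL PRIMARY KEY, val REAL, slope REAL) WITHOUT ROWID;

-- On each append, store the slope from the previous sample to this one,
-- so continuous (linear) data can later be reconstructed between samples.
CREATE TRIGGER hist_slope AFTER INSERT ON hist
BEGIN
    UPDATE hist SET slope =
        (SELECT (new.val - h.val) / (new.key - h.key)
           FROM hist AS h
          WHERE h.key < new.key
          ORDER BY h.key DESC
          LIMIT 1)
    WHERE key = new.key;
END;
""")

for k, v in [(0.0, 10.0), (1.0, 12.0), (3.0, 12.0)]:
    db.execute("INSERT INTO hist VALUES (?, ?, NULL)", (k, v))

rows = db.execute("SELECT key, val, slope FROM hist ORDER BY key").fetchall()
```

The first sample has no prior, so its slope stays NULL; each later row records the slope back to the sample before it.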

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: Keith Medcalf 
>Sent: Tuesday, 25 February, 2020 11:18
>To: 'SQLite mailing list' 
>Subject: RE: [sqlite] Fwd: inserting new data only
>
>
>create table data
>(
>keytext primary key,
>data   integer not null
>)
>without rowid;
>
>-- insert into data select ?, ? as value where value IS NOT (select data
>from (select max(key), data from data));
>
>insert into data select '10:32', 12 as value where value IS NOT (select
>data from (select max(key), data from data));
>insert into data select '10:35', 15 as value where value IS NOT (select
>data from (select max(key), data from data));
>insert into data select '10:37', 15 as value where value IS NOT (select
>data from (select max(key), data from data));
>insert into data select '10:39', 13 as value where value IS NOT (select
>data from (select max(key), data from data));
>insert into data select '10:43', 13 as value where value IS NOT (select
>data from (select max(key), data from data));
>insert into data select '10:46', 18 as value where value IS NOT (select
>data from (select max(key), data from data));
>
>select * from data;
>10:32|12
>10:35|15
>10:39|13
>10:46|18
>
>Constraints:
>
>(1) Will only work for appending data (new key > all keys in table)
>(2) Types of key and data are immaterial as long as you are only
>inserting (appending) new keys.
>
>--
>The fact that there's a Highway to Hell but only a Stairway to Heaven
>says a lot about anticipated traffic volume.
>
>>-Original Message-
>>From: sqlite-users  On
>>Behalf Of Przemek Klosowski
>>Sent: Tuesday, 25 February, 2020 10:02
>>To: SQLite mailing list 
>>Subject: [sqlite] Fwd: inserting new data only
>>
>>I am storing time series data arriving from a sensor into (time,value)
>>records, like so:
>>10:32  12
>>10:35  15
>>10:37  15
>>10:39  13
>>10:43  13
>>10:46  18
>>
>>and I want to avoid storing repetitive data, so that the database should
>>contain
>>10:32  12
>>10:35  15
>>10:39  13
>>10:46  18
>>where only the earliest time with the unchanging value is stored.
>>
>>I don't see how INSERT could be conditional on e.g.  value != (select
>>value from tbl order by time descending limit 1), so I thought I'd use
>>triggers. The only way I could think of was to delete the new
>>duplicate record after it has been inserted:
>>
>>create trigger cull after insert on tbl when
>> (select value-lead(value) over (order by time desc) from a limit 1) = 0
>>begin
>>   delete from a where time like new.time;
>>end;
>>
>>Is there a simpler way?
>>___
>>sqlite-users mailing list
>>sqlite-users@mailinglists.sqlite.org
>>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Fwd: inserting new data only

2020-02-25 Thread Keith Medcalf

create table data
(
keytext primary key,
data   integer not null
)
without rowid;

-- insert into data select ?, ? as value where value IS NOT (select data from 
(select max(key), data from data));

insert into data select '10:32', 12 as value where value IS NOT (select data 
from (select max(key), data from data));
insert into data select '10:35', 15 as value where value IS NOT (select data 
from (select max(key), data from data));
insert into data select '10:37', 15 as value where value IS NOT (select data 
from (select max(key), data from data));
insert into data select '10:39', 13 as value where value IS NOT (select data 
from (select max(key), data from data));
insert into data select '10:43', 13 as value where value IS NOT (select data 
from (select max(key), data from data));
insert into data select '10:46', 18 as value where value IS NOT (select data 
from (select max(key), data from data));

select * from data;
10:32|12
10:35|15
10:39|13
10:46|18

Constraints:

(1) Will only work for appending data (new key > all keys in table)
(2) Types of key and data are immaterial as long as you are only inserting 
(appending) new keys.

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Przemek Klosowski
>Sent: Tuesday, 25 February, 2020 10:02
>To: SQLite mailing list 
>Subject: [sqlite] Fwd: inserting new data only
>
>I am storing time series data arriving from a sensor into (time,value)
>records, like so:
>10:32  12
>10:35  15
>10:37  15
>10:39  13
>10:43  13
>10:46  18
>
>and I want to avoid storing repetitive data, so that the database should
>contain
>10:32  12
>10:35  15
>10:39  13
>10:46  18
>where only the earliest time with the unchanging value is stored.
>
>I don't see how INSERT could be conditional on e.g.  value != (select
>value from tbl order by time descending limit 1), so I thought I'd use
>triggers. The only way I could think of was to delete the new
>duplicate record after it has been inserted:
>
>create trigger cull after insert on tbl when
> (select value-lead(value) over (order by time desc) from a limit 1) = 0
>begin
>   delete from a where time like new.time;
>end;
>
>Is there a simpler way?
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sql update script. check for existing rows before inserting...

2020-02-23 Thread Keith Medcalf

IF is not an SQL statement.

IF is part of your host application programming language.  It may also be part 
of a proprietary vendor-specific extension to the SQL language that permits 
programmability, such as Sybase TRANSACT-SQL (licensed to Microsoft as 
Microsoft SQL Server to run on Microsoft OS/2, since IBM already had a 
relational database called IBM DB2 that ran on OS/2 and Microsoft needed 
something too, to keep up with the Joneses.  Later, when Microsoft OS/2 2.0 
New Technology got renamed Windows NT, Microsoft still needed a database 
server for it, so they made an arrangement with Sybase to take over the 
defunct version of Sybase SQL Server and call it Microsoft SQL Server, while 
Sybase agreed to "stay away from" the old deprecated version and leave that 
code line to Microsoft.  How many billions of dollars Microsoft paid Sybase 
for this purchase is undisclosed) or Oracle PL/SQL.

Some SQL database engines have proprietary procedural extensions to SQL.  
SQLite is not one of them.

The way to insert a record if it does not exist or ignore the fact that it does 
exist is to use the IGNORE conflict resolution method.  In order for this to 
work the table must have a declared unique key by which a "duplicate" can be 
detected.

The syntax is:

INSERT OR IGNORE INTO  () VALUES ();

This causes conflicts (such as unique key violations) to make the insert 
statement be ignored rather than fail.  Contrast this with your INSERT OR 
REPLACE, which deletes the conflicting records and then performs the insert 
-- an entirely different thing.
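A minimal sketch of the difference, using Python's sqlite3 module (the `antennas` table here is an invented stand-in for the poster's [Antennas] table):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE antennas (code INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO antennas VALUES (0, 'original')")

# OR IGNORE: a conflicting insert is silently skipped; the existing row survives
db.execute("INSERT OR IGNORE INTO antennas VALUES (0, 'ignored')")
after_ignore = db.execute("SELECT name FROM antennas WHERE code=0").fetchone()[0]

# OR REPLACE: the conflicting row is deleted, then the new row inserted
db.execute("INSERT OR REPLACE INTO antennas VALUES (0, 'replaced')")
after_replace = db.execute("SELECT name FROM antennas WHERE code=0").fetchone()[0]
```

After OR IGNORE the row still reads 'original'; after OR REPLACE it reads 'replaced'.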

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Andy KU7T
>Sent: Sunday, 23 February, 2020 21:17
>To: SQLite mailing list 
>Subject: [sqlite] Sql update script. check for existing rows before
>inserting...
>
>Hi,
>
>I would like to write a script that checks whether certain records
>already exist, and if not, insert them. If they do exist, it should be a
>no op.
>
>I am trying this:
>
>IF (SELECT COUNT(*) FROM [Antennas]) = 0
>BEGIN
>  /* Table data [Antennas] Record count: 16 */
>  INSERT OR REPLACE INTO [Antennas]([Code], [Antenna], [Bands], [Ports],
>[Offset], [Bidirectional]) VALUES(0, '', '', null, null, '0');
>  -- a few of those...
>END;
>
>However, I am getting a syntax error near IF. Any ideas where my error
>is?
>
>Thanks
>Andy
>
>Sent from Mail for
>Windows 10
>
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] unexpected sqlite_busy behaviour within transactions

2020-02-22 Thread Keith Medcalf

On Saturday, 22 February, 2020 09:26, Andy Bennett  
wrote:

>This other process has called "BEGIN IMMEDIATE TRANSACTION" and
>https://sqlite.org/rescode.html#busy says

>"The BEGIN IMMEDIATE command might itself return SQLITE_BUSY, but if it
>succeeds, then SQLite guarantees that no subsequent operations on the
>same database through the next COMMIT will return SQLITE_BUSY."

>As I understand it, this means that the only way this transaction will
>fail to commit is if the statements executed within it lead to some kind of
>constraint violation, i.e. it won't abort due to the actions of other
>processes.

An immediate transaction may indeed "fail to commit".

Any statement up to and including the final COMMIT may return SQLITE_BUSY in 
the event that it is unable to upgrade its RESERVED lock to an EXCLUSIVE lock 
so that it may modify the database contents (for locking modes other than WAL 
-- WAL precludes the possibility of failure to obtain the required lock upgrade 
to EXCLUSIVE) if another connection holds a SHARED lock.  This may indefinitely 
prevent the transaction from committing.  Statements prior to the COMMIT may 
require EXCLUSIVE access to the database if they "spill their cache pages", for 
example.

However, no statement before the commit will fail for want of a RESERVED lock.  
Statements before and including the COMMIT may fail (SQLITE_BUSY) for want of 
an EXCLUSIVE lock if they need to write to the database (COMMIT or spill pages) 
and the locking mode is not WAL.

Also note that in the original example, even though the INSERT returns 
SQLITE_BUSY immediately (because waiting would be pointless and might cause a 
deadlock), that does not mean that retrying the statement will not succeed.  
The original holder of the RESERVED lock may ROLLBACK its transaction, in 
which case the upgrade of the SHARED lock to RESERVED will succeed and the 
waiting transaction may be able to proceed without having to restart.
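The BEGIN IMMEDIATE contention and the retry-after-rollback point can be sketched with two connections to the same file (Python's sqlite3 module, with timeout=0 so busy conditions surface immediately instead of waiting):

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "busy.db")
a = sqlite3.connect(path, timeout=0, isolation_level=None)
b = sqlite3.connect(path, timeout=0, isolation_level=None)
a.execute("CREATE TABLE t(x)")

a.execute("BEGIN IMMEDIATE")      # a now holds the RESERVED lock
try:
    b.execute("BEGIN IMMEDIATE")  # b cannot obtain RESERVED -> SQLITE_BUSY
    got_busy = False
except sqlite3.OperationalError:  # surfaces as "database is locked"
    got_busy = True

a.execute("ROLLBACK")             # a gives up its lock ...
b.execute("BEGIN IMMEDIATE")      # ... and b's retry now succeeds
retried_ok = True
b.execute("ROLLBACK")
```

The second connection's first attempt fails with SQLITE_BUSY, but the identical statement succeeds once the first connection rolls back.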




___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Setting auto_vacuum=2 doesn't work after setting journal_mode=WAL

2020-02-21 Thread Keith Medcalf

On Friday, 21 February, 2020 19:36, Simon Slavin  wrote:

>On 22 Feb 2020, at 2:28am, Keith Medcalf  wrote:

>> When a database is to be created these commands must be given BEFORE
>any command which opens or creates the database:
>>
>> pragma auto_vacuum
>> pragma encoding
>> pragma page_size
>> pragma data_store_directory

>> The issuance (or preparation) of ANY OTHER COMMAND will cause a new
>> blank database to be created using the values of auto_vacuum, encoding,
>> and page_size in effect at the time that command is issued (prepared).

>The problem is, Jens points out, that this is not documented.  The
>documentation doesn't distinguish between those four PRAGMAs and other
>PRAGMAs which stop those four from working.

That is not entirely true.  All of them say that they change the format of a 
database and only work if the database does not yet exist, albeit in variously 
worded ways.  Perhaps the wording needs to be as clear as it is for the 
"encoding" pragma, which states plainly that an attempt to change the encoding 
of an existing database has no effect, and that the command must therefore be 
given before the database file is created in order to have effect.

In the case of the auto_vacuum and page_size pragmas, however, they DO have an 
effect on a currently existing open database in particular circumstances.

However if you want them to have effect for a newly created database you need 
to issue them before the database is created.

>I've previously suggested that the PRAGMAs should be divided into
>categories.  Perhaps this should be a new category: those which do not
>create a database but have to be done before anything that creates the
>database.

Well, that would be only one pragma, encoding.  Whether or not the same 
applies to any other pragma (page_size, auto_vacuum) depends on the intent of 
the issuer of the command.  If they are expected to affect a database which 
has not yet been created, then obviously they must be issued before the 
database is created.  If they are intended to affect the database after it is 
created, then they should be issued after the database is created, and are 
then subject to the limitations of the operation of those commands on already 
existing databases.

Perhaps pragma encoding, pragma auto_vacuum, and pragma page_size simply need 
to say that if one wants the change to apply to a newly created database, 
these commands must be given first, before any other command.

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.




___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Setting auto_vacuum=2 doesn't work after setting journal_mode=WAL

2020-02-21 Thread Keith Medcalf

There are four pragmas which affect the "organization" of a newly created 
database.  When a database is to be created these commands must be given 
BEFORE any command which opens or creates the database:

pragma auto_vacuum
pragma encoding
pragma page_size
pragma data_store_directory

The issuance (or preparation) of ANY OTHER COMMAND will cause a new blank 
database to be created using the values of auto_vacuum, encoding, and page_size 
in effect at the time that command is issued (prepared).  The default for 
auto_vacuum is 0 or none.  auto_vacuum can be changed freely between full (1) 
and incremental (2) if the database was created with a non-zero value of 
auto_vacuum (auto_vacuum was not none at the time of database creation).  
page_size can be changed for an existing non-wal mode database by changing the 
page_size and doing a vacuum.  encoding cannot be changed and defaults to utf8.

The reference to "create tables" in the database documentation is a euphemism 
for "database file created", since a database will always have at least one 
table in it -- the sqlite_master table -- once the database is created, even 
if no user tables are created.
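A quick sketch of the ordering requirement, using Python's sqlite3 module against a fresh file (the path and table name are arbitrary):

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "fresh.db")
db = sqlite3.connect(path, isolation_level=None)

# Issued before anything creates the database file, so they shape the new file
db.execute("PRAGMA page_size = 8192")
db.execute("PRAGMA auto_vacuum = 2")   # 2 = incremental

db.execute("CREATE TABLE t(x)")        # the database file is created here

page = db.execute("PRAGMA page_size").fetchone()[0]
av = db.execute("PRAGMA auto_vacuum").fetchone()[0]
```

Both pragmas take effect because no other command touched the database first; reordering either one after the CREATE TABLE would leave it at its default.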

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Jens Alfke
>Sent: Friday, 21 February, 2020 16:24
>To: SQLite mailing list 
>Subject: [sqlite] Setting auto_vacuum=2 doesn't work after setting
>journal_mode=WAL
>
>I just discovered that none of our databases are auto-vacuuming, even
>though we set auto_vacuum to 'incremental' when a database is created,
>and periodically call 'pragma incremental_vacuum'. If I go into the CLI,
>open a database and run "pragma auto_vacuum", it returns 0.
>
>After some experimentation I've discovered that (at least in 3.28)
>`PRAGMA auto_vacuum=incremental` has to be done as the very first thing
>after creating a database, even before `PRAGMA journal_mode=WAL`. I know
>it's documented that auto_vacuum has to be enabled before creating any
>tables, but the docs say nothing about ordering wrt other pragmas!
>
>To be precise, this is what we currently do after creating a new
>database, which does not work:
>
>sqlite3_exec(db, "PRAGMA journal_mode=WAL; "
> "PRAGMA auto_vacuum=incremental; "// ⟵ will
>have no effect
> "BEGIN; "
> "CREATE TABLE …….. ; "
> "PRAGMA user_version=302; "
> "END;");
>
>If I swap the first two statements, it does work:
>
>sqlite3_exec(db, "PRAGMA auto_vacuum=incremental; "// ⟵ will take
>effect
>  "PRAGMA journal_mode=WAL; "
>  "BEGIN; "
> "CREATE TABLE …….. ; "
> "PRAGMA user_version=302; "
> "END;");
>
>Is this expected? If so, the docs for auto_vacuum should be updated.
>
>—Jens
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Is randomblob(16) a good guid generation across multiple computers?

2020-02-20 Thread Keith Medcalf

On Thursday, 20 February, 2020 22:06, Andy KU7T  wrote:

>I admit I do not fully understand all the arguments. I am running on
>Windows. Are you saying the PRNG on Windows is not good enough to use
>randomblob(16) in SQLite? All I need is reasonable assurance that they
>are unique...

Yes, it is reasonably random.  To improve the entropy of the seed you should 
compile the amalgamation with -DSQLITE_WIN32_USE_UUID=1 and include RPCRT4.LIB 
in the link.

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.




___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Is randomblob(16) a good guid generation across multiple computers?

2020-02-20 Thread Keith Medcalf

On Thursday, 20 February, 2020 11:48, Richard Hipp  wrote:

>The author of that article, "Raymond", assumes that the random number
>generator in the SQL database engine is not cryptographically strong.

Actually, what "Raymond" is on about is the fact that the original definition 
of a GUID, according to Microsoft, was what is now called, in standard 
parlance, a Version 1 Variant 2 UUID, only without the Version and Variant 
identifiers.  It used the "clock and sequence" based on the current clock in 
huns (hundred-nanosecond intervals) and a sequence number to "break ties" in 
case you tried to have Windows generate more than one GUID per hun.  The last 
48 bits were the "Unique ID" of the computer as generated and stored in the 
registry by Microsoft when Windows was installed (with no generated-ID flag 
as provided in the current standard).

It was always stored in little-endian order exclusively, even on big-endian 
processors.  There was no randomness at all.  Not even the slightest.  Ever.  
It was exclusively a (more or less) sequential number.

Several years ago Microsoft decided to replace the version 1 GUID, which 
contained an identifier traceable back to the computer on which it was created, 
with a truly random Version 4 type UUID (though still without a version number 
and still without variant encoding, and still always in little endian format) 
-- apparently Microsoft software loves storing "GUIDs" thither and yon like in 
Word and Excel documents, and Microsoft's Legal Department determined that this 
was a "liability" because the Justice Department would be able to prove which 
computer was used to compose a document, spreadsheet, or email message with 
trivial effort, thus exposing the company and its executives to liability which 
could be avoided by simply using "random" GUIDs rather than "machine specific" 
GUIDs.

A new API was introduced to permit the "old fashioned sequential GUIDs linked 
inexorably to the computer" to be generated by those that wanted to still use 
them, but the default API changed internally to now providing version 4 UUIDs 
(though still without the standard UUID version and variant flags, and still in 
exclusively little endian byte order).

So it has nothing to do with randomness.  It has to do with the fact that a 
"GUID" contains two parts:  a time stamp in UTC and a sequence number, plus the 
(hopefully) unique ID of the computer generating the GUID.  It is not random.  
It is sequential.  And the "Global Uniqueness" part is determined solely by the 
hopefully "Global Uniqueness" of the machine identifier which created the GUID.

Only later did the "GUID" contain randomness by default though Windows was 
still capable of generating the old sequential GUIDs.

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.




___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Is randomblob(16) a good guid generation across multiple computers?

2020-02-20 Thread Keith Medcalf

randomblob(16) generates 16 random bytes.

randomblob(16) does not generate a valid UUID (it does not set the version and 
variant flags in the resulting 16 bytes of random data).  If you set the 
version to 4 and the variant to 1 then randomblob(16) does produce valid 
version 4 UUIDs with 122 bits of randomness.  The ext/misc/uuid.c extension 
does this, for example, generating valid version 4 variant 1 UUIDs.
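A sketch of the bit-twiddling involved, mirroring what ext/misc/uuid.c is described as doing (the function name here is invented):

```python
import os

def random_uuid4(raw=None):
    # Take 16 random bytes and stamp in the version/variant bits
    b = bytearray(raw if raw is not None else os.urandom(16))
    b[6] = (b[6] & 0x0F) | 0x40   # version 4 in the high nibble of byte 6
    b[8] = (b[8] & 0x3F) | 0x80   # variant 1 (10xxxxxx) in byte 8
    return bytes(b)

u = random_uuid4()
```

Six bits are sacrificed to the version and variant fields, leaving the 122 bits of randomness mentioned above.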

The only other useful type of UUID to generate would be version 1.  Version 1 
is a pseudo-sequential clock based value in which the last 48-bits are the MAC 
address of the computer (or some fixed identifier of 48-bits for the computer) 
with the variant set appropriately (thus chopping a couple of bits) and the MAC 
type (thus chopping another bit, to identify whether the origin is a "real MAC 
unicast address" or a "fake ID -- multicast MAC address").  The clock and 
sequence is merely the current clock count plus a sequence number for the 
generated UUID, less the 4-bit version, which is set to 1.

Microsoft does not generate valid UUIDs (either version 1 or version 4).  They 
do not have the version and variant set properly and are stored "little endian" 
rather than in network byte order.  If you pass a "standards based" UUID to a 
"Microsoft" renderer you will get a different result than if you pass the same 
UUID bytes to a standards compliant renderer.

Both version 1 and version 4 UUIDs are probabilisticly Universally Unique.  
Version 1 because the single-source generator uses a theoretically unique 
machine ID, and version 4 because hopefully the random 122-bits are in fact 
122-bits of entropy.

So really it boils down to a question of how you want these UUIDs to be 
represented.  There is the "Microsoft way" and the "Microsoft way" is 
incompatible with the "standard".  So if you choose the "standard" way, then 
you will have to forgo the "Microsoft way" and use only standard compliant 
handlers (and therefore will have standard compliant UUIDs on all platforms).  
Conversely, if you choose the "Microsoft way" then you will be limited to only 
ever being compliant with the "Microsoft way" and limited to Microsoft 
platforms.

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Andy KU7T
>Sent: Thursday, 20 February, 2020 11:12
>To: SQLite mailing list 
>Subject: [sqlite] Is randomblob(16) a good guid generation across
>multiple computers?
>
>Hi,
>I added a randomblob(16) to each record of a Sqlite table via a trigger
>with the goal of global uniqueness. Is that the correct approach or would
>it be better to pass Guid from .Net? I am using System.Data.Sqlite. The
>following article got me questioning the usage of randomblob:
>https://devblogs.microsoft.com/oldnewthing/20120523-00/?p=7553
>
>Thanks
>Andy
>
>Sent from my T-Mobile 4G LTE Device
>Get Outlook for Android
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] question about INTEGER PRIMARY KEY AUTOINCREMENT

2020-02-20 Thread Keith Medcalf

On Wednesday, 19 February, 2020 21:24, ethan he  wrote:

>There is a SQLITE DATABASE has “MeslocallD”(INTEGER PRIMARY KEY
>AUTOINCREMENT),
>Is that possible to delete the data but still keep the MeslocallD
>consistence?

Assuming that by "consistence" you mean the high-water mark for inserted rowid 
then the answer is yes.  The high water mark rowid used for a table is stored, 
for each autoincrement rowid table, in a special table called sqlite_sequence.

If you delete all the data from the table, its sqlite_sequence entry will not be 
changed.  However, if you drop the table, the entry for that table will be 
removed from sqlite_sequence.  If you will be dropping the table and need to 
"remember" the high-water mark, you can select it from the sqlite_sequence 
table and re-insert it after re-creating the table.

sqlite> create table x(x integer primary key autoincrement);
sqlite> insert into x values (1);
sqlite> select * from sqlite_sequence;
x|1
sqlite> drop table x;
sqlite> select * from sqlite_sequence;
sqlite> create table x (x integer primary key autoincrement);
sqlite> select * from sqlite_sequence;
sqlite> insert into sqlite_sequence values ('x', 1);
sqlite> insert into x values (null);
sqlite> select * from x;
2
sqlite> select * from sqlite_sequence;
x|2

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.




___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Why do I only get one record?

2020-02-19 Thread Keith Medcalf

1)  In the first two queries, why do you have a condition on the LHS table in 
the LEFT JOIN conditions?
2)  In the last query, why do you have a condition on the RHS table of the 
LEFT JOIN in the WHERE clause?

These would seem to indicate that you are using a LEFT JOIN when you really do 
not want a LEFT JOIN (maybe you are a LEFT lover?) ... it is a common ailment.

Also, you are only getting one row because you only asked for one row.  If you 
request an aggregate and specify no GROUP BY then you can only ever get one row 
as a result -- the one aggregate row.

If you want more than one row you need to specify by what you want the results 
grouped in the GROUP BY clause.

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Jose Isaias Cabrera
>Sent: Wednesday, 19 February, 2020 12:46
>To: SQLite mailing list 
>Subject: [sqlite] Why do I only get one record?
>
>
>Greetings!
>
>Newbie here again... ;-)
>
>Please take a look at this query,
>
>sqlite> SELECT
>   ...> a.ProjID, a.Start_date, a.Finish_Date, a.Target_Go_Live_Date,
>   ...> a.BL_Start, a.BL_Finish, sum(b.AnnualDossier) as Dossier,
>   ...> sum(b.Actuals) as ABTActual, sum(b.EAC) as ABTEAC
>   ...> FROM Project_List AS a
>   ...> LEFT JOIN ABT_Budget AS b ON a.ProjID = b.ProjID
>   ...>   AND b.InsertDate = (SELECT MAX(InsertDate) FROM ABT_Budget
>WHERE ProjID = b.ProjID)
>   ...> AND
>   ...> a.InsertDate =
>   ...> (SELECT MAX(InsertDate) FROM Project_List WHERE ProjID =
>a.ProjID)
>   ...> WHERE a.ProjID IN
>   ...> (
>   ...> 'PR018284',
>   ...> 'PR015544'
>   ...> )
>   ...> ORDER BY a.ProjID;
>PR015544|2019-01-01|2020-01-01||
>sqlite>
>
>Why do I only get one row?  Because if I do this other query,
>
>sqlite> SELECT
>   ...> a.ProjID, a.Start_date, a.Finish_Date, a.Target_Go_Live_Date,
>   ...> a.BL_Start, a.BL_Finish, sum(b.AnnualDossier) as Dossier,
>   ...> sum(b.Actuals) as ABTActual, sum(b.EAC) as ABTEAC
>   ...> FROM Project_List AS a
>   ...> LEFT JOIN ABT_Budget AS b ON a.ProjID = b.ProjID
>   ...>   AND b.InsertDate = (SELECT MAX(InsertDate) FROM ABT_Budget
>WHERE ProjID = b.ProjID)
>   ...> AND
>   ...> a.InsertDate =
>   ...> (SELECT MAX(InsertDate) FROM Project_List WHERE ProjID =
>a.ProjID)
>   ...> WHERE a.ProjID IN
>   ...> (
>   ...> 'PR018284'
>   ...> )
>   ...> ORDER BY a.ProjID;
>PR018284|2020-01-01|2020-03-01||
>sqlite>
>
>That project exists.  And if I do this other query,
>sqlite> SELECT
>   ...> a.ProjID, a.Start_date, a.Finish_Date, a.Target_Go_Live_Date,
>   ...> a.BL_Start, a.BL_Finish, sum(b.AnnualDossier) as Dossier,
>   ...> sum(b.Actuals) as ABTActual, sum(b.EAC) as ABTEAC
>   ...> FROM Project_List AS a
>   ...> LEFT JOIN ABT_Budget AS b ON a.ProjID = b.ProjID
>   ...>   AND b.InsertDate = (SELECT MAX(InsertDate) FROM ABT_Budget
>WHERE ProjID = b.ProjID)
>   ...> WHERE
>   ...> a.InsertDate =
>   ...> (SELECT MAX(InsertDate) FROM Project_List WHERE ProjID =
>a.ProjID)
>   ...> AND a.ProjID = b.ProjID
>   ...> ;
>PR013109|2017-11-13|2019-04-30|2018-10-14|2017-11-13|2019-01-
>31|76605061.443927|76125541.48|149733051.57
>sqlite>
>
>I still only get one.  It looks like I am setting a limit, but that is
>not true.  Any help would be greatly appreciated.  Thanks.
>
>josé
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] last_insert_rowid() returns every growing list

2020-02-18 Thread Keith Medcalf

On Monday, 17 February, 2020 17:20, The Tick  wrote:

>I'm running tcl 8.6.8 on win7x64. I built the latest sqlite Tcl package
>with the 3310100 source using mingw gcc under msys2.

>Everything seems to work but I ran into a strange result with
>last_insert_rowid().

>The following example returns an ever-growing list of rowid's:

>-8x-
>package require sqlite3
>console show
>
>#file delete -force test.db
>sqlite3 sql test.db
>#sql eval {create table test (id integer primary key, number integer,
>data blob)}
>set a [clock seconds]
>set b [expr {[clock seconds] % 100}]
>set c [string repeat "\u3456" 25]
>sql eval { insert into test (id, number, data) values( $a, $b, $c ) }
># The next statement returns a list... but shouldn't it only have one
>element?
>set oid [sql eval { select last_insert_rowid() from test }]
>puts "\[[llength $oid]\] $oid"
>-8x-

>If the "file delete" and "create table" are uncommented, I only get a
>single rowid the first time of course.

>I am a complete sql novice so it's likely that it's a user error but I
>thought I'd ask.

last_insert_rowid() is an SQL Function that returns the last inserted rowid on 
the connection.  You have asked for that value to be returned as many times as 
there are rows in the table test.  If there is one row in the table test you 
get the last_insert_rowid() value returned once.  If there are 47583 rows in 
table test you get that value returned for each of the 47583 rows.

select last_insert_rowid();
is to
select 1;

as

select last_insert_rowid() from test;
is to
select 1 from test;
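
The difference is easy to see in a quick sketch (using Python's built-in sqlite3 module purely for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test (id INTEGER PRIMARY KEY, number INTEGER)")
for n in range(3):
    con.execute("INSERT INTO test (number) VALUES (?)", (n,))

# Without a FROM clause: exactly one row, holding the last inserted rowid.
one = con.execute("SELECT last_insert_rowid()").fetchall()

# With "FROM test": the same value repeated once per row in the table.
many = con.execute("SELECT last_insert_rowid() FROM test").fetchall()
```

So the fix for the Tcl script is simply to drop the `from test` clause.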


-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume. 





Re: [sqlite] Is it possible to dump a sqlite db that has an associated -wal file?

2020-02-16 Thread Keith Medcalf

On Sunday, 16 February, 2020 10:25, Richard Hipp  wrote:

>> Why the database can not be read by another sqlite3 session when the
>> corresponding -wal file exists? Thanks.

>Because Firefox uses "PRAGMA locking_mode=EXCLUSIVE;"

Perhaps on some platforms, but Firefox 73.0.0 on Windows 10 18636.657 does not 
-- or if it does, then it doesn't work properly as I can still read/write the 
various database files while Firefox is running.

However, when I open a database and set locking_mode=exclusive, then other 
connections to the same database cannot access it.
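
A minimal sketch of that behaviour (Python's sqlite3 module against a scratch file; note the exclusive lock is only actually acquired on the first read or write after the pragma is set):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

con1 = sqlite3.connect(path)
con1.execute("PRAGMA locking_mode=EXCLUSIVE")
con1.execute("CREATE TABLE t (x)")  # first write takes the lock and keeps it
con1.commit()

# A second connection now cannot read the database at all.
con2 = sqlite3.connect(path, timeout=0.1)
try:
    con2.execute("SELECT * FROM t")
    locked = False
except sqlite3.OperationalError:  # "database is locked"
    locked = True
```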







Re: [sqlite] Is it possible to dump a sqlite db that has an associated -wal file?

2020-02-15 Thread Keith Medcalf

On Saturday, 15 February, 2020 19:27, Peng Yu  wrote:

>I am trying to see what tables are available in sqlite_master from
>firefox cookies sqlite3 fire.

>~/Library/Application
>Support/Firefox/Profiles/jaseom4q.default-1480119569722/cookies.sqlite

>But the error message says "Error: database is locked".

>I see a cookies.sqlite-wal file in the same directory.

>Is there a way to inspect this db file without closing firefox?

Does it work when you close Firefox?  If it works when Firefox is closed but 
not when Firefox is open, then the answer is probably no.  If you still get an 
error message even when Firefox is closed, then the version of SQLite3 you are 
using is probably too old to read the database.

I can, but then I am on Windows and using the latest mostest up-to-date version 
of the SQLite3 command line tool.  

There are rumours on the various message boards that Firefox opens the 
databases with exclusive access to prevent fiddling.  Not having looked at the 
relevant Firefox source code, I have no clue whether or not this is accurate.  
However the rumoured exclusive access seems to "go away" when an up-to-date 
version of SQLite3 is used to look at the databases.  Plus, of course, if the 
access is supposed to be exclusive then it is broken on Windows.
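
For what it's worth, the presence of a -wal file does not by itself prevent concurrent reads.  A sketch (Python's sqlite3 module) showing a second connection reading a WAL-mode database while the writer still has it open:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "wal.db")

writer = sqlite3.connect(path)
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("CREATE TABLE cookies (name TEXT)")
writer.execute("INSERT INTO cookies VALUES ('session')")
writer.commit()

# The -wal file now exists alongside the database ...
wal_present = os.path.exists(path + "-wal")

# ... yet a second connection can still read the data.
reader = sqlite3.connect(path)
rows = reader.execute("SELECT name FROM cookies").fetchall()
```

So the "database is locked" error in the original post must come from something other than the mere existence of the -wal file.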






Re: [sqlite] Searching by Guid index in table (.NET)? I can write and read, but not search by the Guid

2020-02-15 Thread Keith Medcalf

While that is nice, it is likely completely irrelevant.  The issue appears to 
be the flamboyant conversion of data being performed by dotSnot (.NET).

The fact that data can be stored in several different formats inside the 
database, and then converted to what dotSnot calls a "GUID" is all very nice 
and interesting, but not useful in the least (at least not if you want the 
database engine to do any work, rather than doing all joining and filtering and 
ordering in the dotSnot application).

What you need to know is how the thing that dotSnot calls a GUID is presented 
to the database.  If dotSnot sends a GUID to the database as a 16-byte binary 
blob, then you better store your GUIDs in the database as 16-byte binary blobs 
if you expect to be able to find them.  Similarly, if dotSnot sends a GUID to 
the database as a text string formatted with brockets and dashes and lower case 
hex digits, then you better store your GUIDs in the database in that format if 
you ever expect to be able to find them.

In other words, it is irrelevant that the conversion from database to dotSnot 
will interpret 400 different types of things as a dotSnot GUID.  What is 
important is how dotSnot sends what it calls a GUID to the database, and that 
it can do so reversibly.
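
The format-matching point can be sketched like this (Python's uuid module standing in for the .NET side):

```python
import sqlite3
import uuid

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (guid)")

g = uuid.uuid4()
con.execute("INSERT INTO t VALUES (?)", (g.bytes,))  # stored as a 16-byte blob

# Searching with the same representation finds the row ...
hit = con.execute("SELECT count(*) FROM t WHERE guid = ?",
                  (g.bytes,)).fetchone()[0]

# ... searching with the text form of the very same GUID finds nothing.
miss = con.execute("SELECT count(*) FROM t WHERE guid = ?",
                   (str(g),)).fetchone()[0]
```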


>-Original Message-
>From: sqlite-users  On
>Behalf Of Simon Slavin
>Sent: Saturday, 15 February, 2020 13:56
>To: SQLite mailing list 
>Subject: Re: [sqlite] Searching by Guid index in table (.NET)? I can
>write and read, but not search by the Guid
>
>On 15 Feb 2020, at 8:12pm, J Decker  wrote:
>
>> memcmp is harder than strcmp?
>> and with blob I suppose you don't have 'if UTF8 or UTF16 do
>> different things' so should compare slightly faster?
>
>It's not cut-and-dried, but you can take things into consideration.
>
>Once data is in a database it usually just sits there.  You're mostly
>interested in input and output.
>
>Think of the number of times you're going to do conversions, and what
>formats you're converting from and to.  How much processing is involved
>in each conversion.





Re: [sqlite] multithreaded app with in-memory database and transactions

2020-02-13 Thread Keith Medcalf

On Thursday, 13 February, 2020 17:58, Jim Dodgen  wrote:

>I have often wondered what the performance difference is between /dev/shm
>and :memory: databases

Theoretically a :memory: database is faster than a /dev/shm stored database.  A 
:memory: database is purely in memory and has no extra connection cache (and no 
transactions to speak of).  A database in /dev/shm is just like an on-disk 
database, and the connection has its own memory cache, the difference being 
that there will never be a wait to flush the OS cache to disk (the file exists 
only in the OS file cache).  Using a /dev/shm file will still require file 
read/write because it is a file as far as the application is concerned.

A file on /dev/shm will release all its space back to the OS free pool when the 
file is deleted; if the application closes the file without deleting it, the 
file persists until reboot.  A :memory: database will release its memory back to the OS
when it is detached/closed.  A database opened on a RAMDISK will behave like a 
database on /dev/shm except that deleting the file will not release the space 
allocated for the RAMDISK back to the OS since the RAMDISK usually permanently 
allocates virtual storage for its contents when created.
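
A sketch of the two options (Python's sqlite3 module; the /dev/shm path is Linux-specific, so this falls back to an ordinary temporary directory elsewhere):

```python
import os
import sqlite3
import tempfile

# A pure in-memory database: no file anywhere, gone when the connection closes.
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE t (x)")

# An ordinary on-disk database whose "disk" is the RAM-backed tmpfs on Linux.
base = "/dev/shm" if os.path.isdir("/dev/shm") else None
path = os.path.join(tempfile.mkdtemp(dir=base), "demo.db")

shm = sqlite3.connect(path)
shm.execute("CREATE TABLE t (x)")
shm.commit()

file_exists = os.path.exists(path)  # it really is a file to the application
shm.close()
os.remove(path)                     # deleting it releases the space to the OS
```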







Re: [sqlite] multithreaded app with in-memory database and transactions

2020-02-13 Thread Keith Medcalf

On Thursday, 13 February, 2020 17:06, Jim Dodgen  wrote:

>I have placed databases on/in /dev/shm and shared them across both
>threads and processes.

Yeah, /dev/shm is a pre-existing tmpfs filesystem, separate from the one 
mounted on /tmp.  I keep forgetting about that one ...






Re: [sqlite] multithreaded app with in-memory database and transactions

2020-02-13 Thread Keith Medcalf

Correct.  "memory" databases can only be shared between connections in the same 
process, and then only by the sharedcache method.  In effect, a "memory" 
database is nothing more than a cache, and sharing it between connections means 
sharing the cache.  cache=private uses a separate cache for the connection and 
therefore (by definition) a different "memory" database.
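
With a URI filename the sharedcache method looks like this from Python (two connections in the same process seeing one named in-memory database):

```python
import sqlite3

uri = "file:memdb1?mode=memory&cache=shared"

con1 = sqlite3.connect(uri, uri=True)
con2 = sqlite3.connect(uri, uri=True)  # same process, same named memory db

con1.execute("CREATE TABLE t (x)")
con1.execute("INSERT INTO t VALUES (42)")
con1.commit()

rows = con2.execute("SELECT x FROM t").fetchall()  # sees con1's data
```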

You could, of course, use an "on disk" database where the database resides on a 
memory resident filesystem.  On Linux you would simply mount another filesystem 
using tmpfs and put your "on disk" database there.  Delete the "file" off the 
tmpfs filesystem when you are done with it.  (Or just use /tmp if it is mounted 
using tmpfs which it usually is.  Just beware that everyone has access to /tmp 
and if you mount your own tmpfs at a different location you can control the 
access permissions).

On Windows the closest thing I know of that can create an autosizing ramdisk is 
http://winramtech.atwebpages.com/RAMDriv/ramdriv.htm though I have not tried it.

For other OS's, Google is your friend!


>-Original Message-
>From: sqlite-users  On
>Behalf Of Wim Hoekman
>Sent: Thursday, 13 February, 2020 11:44
>To: sqlite-users@mailinglists.sqlite.org
>Subject: [sqlite] multithreaded app with in-memory database and
>transactions
>
>I have an app which is multithreaded. Sometimes during lengty inserts a
>different thread (which only reads the database) sees part of the
>updated data.
>
>This would be solved by using transactions. However, during the
>transaction the "reading" thread gets a 'database table is locked' error.
>
>In my case, transactions work fine when using on-disk database file, but
>not when using an in-memory database.
>
>In my code, I open the database twice, so I have two separate database
>connections: one for updating, one for reading.
>
>I've tried several ways of opening the in-memory database:
>
>1) "file:memdb?cache=shared&mode=memory",
>SQLITE_OPEN_READWRITE|SQLITE_OPEN_URI
>   does not work, 'database table is locked' error
>
>2) "file:memdb?cache=private&mode=memory",
>SQLITE_OPEN_READWRITE|SQLITE_OPEN_URI
>   does not work, "different" databases
>   data added via one db connection is not visible when querying via the
>other db connection
>
>3) "file:filedb?cache=shared", SQLITE_OPEN_READWRITE|SQLITE_OPEN_URI
>   does not work, 'database table is locked' error
>
>4) "file:filedb?cache=private", SQLITE_OPEN_READWRITE|SQLITE_OPEN_URI
>   works, but is not in memory
>
>5) "file:filedb", SQLITE_OPEN_READWRITE|SQLITE_OPEN_URI
>does not work, 'database table is locked' error
>
>   (because I called sqlite3_enable_shared_cache( 1 )! )
>
>6) "file::memory:?cache=shared", SQLITE_OPEN_READWRITE|SQLITE_OPEN_URI
>   same behaviour as 1)
>
>7) "file::memory:?cache=private", SQLITE_OPEN_READWRITE|SQLITE_OPEN_URI
>   same behaviour as 2)
>
>It seems I cannot get 2 database connections to the same in-memory
>database without using a shared cache.
>And having a shared cache causes 'database table is locked' errors.
>
>Did I miss something, or are multithreaded transactions with an in-memory
>database not supported?
>Any hints or tips to get this working are appreciated.
>
>Regards,
>
>Wim.
>
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users





Re: [sqlite] .timer explanation anywhere

2020-02-13 Thread Keith Medcalf

On Windows the GetProcessTimes Win32 API is used to get the user and kernel 
(sys) times for the current process since getrusage only exists on unix-like 
platforms.  In all cases the precision and accuracy are limited by the 
underlying OS timer accuracy.  

The VFS call to get the current time is limited to millisecond resolution, 
notwithstanding that the precision and accuracy of the underlying OS call may 
differ from 1 millisecond.






Re: [sqlite] .timer explanation anywhere

2020-02-13 Thread Keith Medcalf

According to the code in shell.c the .timer on/off sets a flag that tells 
whether you want timer data printed or not, and then for each statement:
 
if .timer is turned on
  save the current wall clock and getrusage times (usr and sys times)
execute the statement.
if .timer is turned on
  get the new wall clock and getrusage times
  display the difference between the new and old values


sort of like you would know how long X() took (in wall clock elapsed seconds) 
if you did:

begin = time();
X();
finish = time();
elapsed = finish - begin;

Just that .timer saves and reports three values provided by the underlying 
Operating System, not just one.  "real" is the current time (in seconds) 
reported by the underlying OS, and user/sys are the underlying times in seconds 
reported by the Operating System getrusage call for user/sys CPU usage times 
for the current process.
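
The same three quantities can be sampled from any program; a portable sketch in Python (perf_counter for the wall clock, process_time for the combined user+sys CPU time, standing in for getrusage):

```python
import time

begin_real = time.perf_counter()  # wall-clock time
begin_cpu = time.process_time()   # user + sys CPU time for this process

total = sum(i * i for i in range(200_000))  # the "statement" being timed

real = time.perf_counter() - begin_real  # corresponds to .timer's "real"
cpu = time.process_time() - begin_cpu    # corresponds to "user" plus "sys"
```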


>-Original Message-
>From: sqlite-users  On
>Behalf Of Jose Isaias Cabrera
>Sent: Thursday, 13 February, 2020 06:48
>To: SQLite mailing list 
>Subject: [sqlite] .timer explanation anywhere
>
>
>Greetings.
>
>I was searching on sqlite.org for [sqlite command line tool .timer
>explanation] and found nothing. I also searched on the internet and found
>an old thread[1] of when .timer had just two entries:
>
>CPU Time: user 880.710398 sys 353.260288
>
>And, although, there is some good information there, I would like for us
>to have a solid answer somewhere. :-)  Maybe even explain it on the site
>[2], or better yet, have an option on .timer (on|off|?) to explain each
>piece of the output. One-liners will suffice.
>
>I also found this other interesting post [3], which I think is not
>totally correct, but I will let you guys explain why it is true.  Or, at
>least, if it has some truth in it.
>
>Thanks for your support.
>
>josé
>
>[1] http://sqlite.1065341.n5.nabble.com/timer-on-in-the-shell-tool-
>td79626.html
>[2]
>https://www.sqlite.org/cli.html#special_commands_to_sqlite3_dot_commands_
>[3] https://stackoverflow.com/questions/40329106/how-to-measure-the-
>execution-time-of-each-sql-statement-query-in-sqlite





Re: [sqlite] JSON_EACH + recursive query = unexpected performance degradation

2020-02-12 Thread Keith Medcalf

The easiest way is to phrase the query such that the table is internally 
materialized.  In query1.sql the easiest way to do that is to change the:

WITH
-- build an in-memory table with parent-child relations
-- from 3 json arrays _which have the same size_
"objects_in_memory" AS (
    SELECT "id"."value" AS "id",

to read like this:

WITH
-- build an in-memory table with parent-child relations
-- from 3 json arrays _which have the same size_
"objects_in_memory" AS (
    SELECT DISTINCT "id"."value" AS "id",


>-Original Message-
>From: sqlite-users  On
>Behalf Of  ?
>Sent: Monday, 10 February, 2020 23:28
>To: sqlite-users@mailinglists.sqlite.org
>Subject: [sqlite] JSON_EACH + recursive query = unexpected performance
>degradation
>
>I have a dataset of about 300 rows which have parent-child relations.
>Due to factors unrelated to the issue I build the rows by zipping JSON
>arrays with values from each column.
>Then I run a simplest recursive query on it to get the whole tree(ends
>up being ~4 levels).
>
>Problem: the query takes 3000 ms (3 seconds) on my machine to complete.
>If I create a real table with the SAME structure, insert the SAME data
>into it, and run the SAME query, get the SAME result back, it takes
>10-15 ms (200-300 TIMES faster).
>
>
>I attached both queries(don't require schema to run)






Re: [sqlite] Please increase the default for SQLITE_MAX_VARIABLE_NUMBER

2020-02-10 Thread Keith Medcalf

On Monday, 10 February, 2020 14:36, Simon Slavin  wrote:

>Does this problem affect unnumbered indexes too ?  In other words if I
>have

>(?,?,?,?,?)

>and bind to the fifth one using the index do I have the same problems as
>having

>(?1,?2,?3,?4,?5)

>and bind to the fifth one using its number ?

What particular problem are you referring to?  When you use a "parameter" in an 
SQL statement, then during prepare time the total number of unique parameters 
(and the highest used index) is recorded.  An array attached to the "statement" 
must be allocated with enough entries to allow all these parameters to be used.  
So if you use a parameter "?437" then an array large enough to hold 437 
parameters will be allocated.

In addition, whenever a "named parameter" is used (?N, :name, @name, $name) 
then the parameter name is entered into yet another data structure which 
records which "index position" corresponds to that name.  This is so that the 
sqlite3_bind_parameter_name and sqlite3_bind_parameter_index can map between 
the name and the number.

So if you use
(?,?,?,?,?)
then an array of 5 parameters will be required, and no name<->index mapping 
VList will be created.

However, if you use:
(?1,?2,?3,?4,?5)
then an array of 5 parameters will be required, and a name<->index mapping 
VList will be created for all 5 of the named parameters.

If you use:
(?,?,?,?5,?4)
then an array of 5 parameters will be required, and a name<->index mapping 
VList will be created for the 2 named parameters (?5 and ?4).

If you were to use this:
(?,:t,?5,?2)
then an array of 5 parameters will be required (the max index used is 5, even 
though the parameter at index 4 is not referenced) and the name<->index mapping 
VList will contain ":t"<->2, "?5"<->5. The "?2" name will not be recorded since 
parameter index 2 already has a name.

Perhaps in the case of ?N parameters the extra step of recording the name could 
be skipped since all "not otherwise named" ?N parameters will always have name 
?N and index N.  But at the moment the name is recorded in the VList for ?N 
parameters if no name is registered for the parameter at index N.  There is 
probably a reason for this though I don't know what it is.

The generated VDBE program will always refer to parameters by their index and 
not by their names.  The name<->index VList table is kept attached to the 
statement for use by the sqlite3_bind_parameter_name and 
sqlite3_bind_parameter_index API functions.
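
That name-to-index mapping is what lets a binding interface accept names at all.  Python's sqlite3 wrapper, for instance, drives the same statement either way (it resolves :x and :y to parameter indexes through that table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a, b)")

# Positional: values bound by index only; no name<->index VList is needed.
con.execute("INSERT INTO t VALUES (?, ?)", (1, 2))

# Named: the library maps :x and :y to parameter indexes 1 and 2.
con.execute("INSERT INTO t VALUES (:x, :y)", {"x": 3, "y": 4})

rows = con.execute("SELECT a, b FROM t ORDER BY a").fetchall()
```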






Re: [sqlite] How to group this?

2020-02-09 Thread Keith Medcalf

That's good, but this will not screw up the pairing for a later userid/date if 
an entry is AWOL.

WITH systolic
  AS (
      select userid,
             date,
             rank() over (partition by userid, date order by id) as rank,
             reading
        from pressure
       where term == 'Systolic'
     ),
     diastolic
  AS (
      select userid,
             date,
             rank() over (partition by userid, date order by id) as rank,
             reading
        from pressure
       where term == 'Diastolic'
     )
SELECT d.userid,
       d.date,
       s.reading || '/' || d.reading AS Reading
  FROM systolic s
  JOIN diastolic d
    ON d.userid == s.userid
   AND d.date == s.date
   AND d.rank == s.rank
;
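
Run against the sample data from this thread, the rank-based pairing produces exactly the expected readings (sketch using Python's bundled sqlite3; window functions require SQLite 3.25 or newer):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pressure "
            "(id INTEGER PRIMARY KEY, userid, date, term, reading)")
con.executemany("INSERT INTO pressure VALUES (?,?,?,?,?)", [
    (701559, 1308, '15/Mar/2013', 'Systolic', 127),
    (701568, 1308, '15/Mar/2013', 'Diastolic', 81),
    (701562, 1308, '27/Jun/2013', 'Systolic', 132),
    (701563, 1308, '27/Jun/2013', 'Systolic', 141),
    (701564, 1308, '27/Jun/2013', 'Systolic', 143),
    (701571, 1308, '27/Jun/2013', 'Diastolic', 82),
    (701572, 1308, '27/Jun/2013', 'Diastolic', 85),
    (701573, 1308, '27/Jun/2013', 'Diastolic', 94),
])

sql = """
WITH systolic AS (
    SELECT userid, date,
           rank() OVER (PARTITION BY userid, date ORDER BY id) AS rank,
           reading
      FROM pressure WHERE term == 'Systolic'
), diastolic AS (
    SELECT userid, date,
           rank() OVER (PARTITION BY userid, date ORDER BY id) AS rank,
           reading
      FROM pressure WHERE term == 'Diastolic'
)
SELECT s.reading || '/' || d.reading
  FROM systolic s JOIN diastolic d
    ON d.userid == s.userid AND d.date == s.date AND d.rank == s.rank
 ORDER BY s.date, s.rank
"""
pairs = [r[0] for r in con.execute(sql)]
```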




>-Original Message-
>From: sqlite-users  On
>Behalf Of no...@null.net
>Sent: Sunday, 9 February, 2020 18:26
>To: SQLite mailing list 
>Subject: Re: [sqlite] How to group this?
>
>On Mon Feb 10, 2020 at 12:24:33AM +, Bart Smissaert wrote:
>> I should get:
>>
>> 127/81
>> 132/82
>> 141/85
>> 143/94
>>
>> What should be the SQL to group like this?
>
>Here is one way that appears to generate the correct result.
>
>CREATE TABLE pressure(
>id INTEGER PRIMARY KEY,
>userid INTEGER NOT NULL,
>date TEXT NOT NULL,
>term text NOT NULL,
>reading INTEGER NOT NULL
>);
>
>INSERT INTO
>pressure(userid,date,term,reading,id)
>VALUES
>(1308,'15/Mar/2013','Systolic',127,701559),
>(1308,'15/Mar/2013','Diastolic',81,701568),
>(1308,'27/Jun/2013','Systolic',132,701562),
>(1308,'27/Jun/2013','Systolic',141,701563),
>(1308,'27/Jun/2013','Systolic',143,701564),
>(1308,'27/Jun/2013','Diastolic',82,701571),
>(1308,'27/Jun/2013','Diastolic',85,701572),
>(1308,'27/Jun/2013','Diastolic',94,701573),
>(278975701,'08/Mar/2018','Systolic',136,1583551),
>(278975701,'08/Mar/2018','Diastolic',99,1583591),
>(278975701,'04/Apr/2018','Systolic',119,1583552),
>(278975701,'04/Apr/2018','Systolic',124,1583553),
>(278975701,'04/Apr/2018','Systolic',130,1583554),
>(278975701,'04/Apr/2018','Diastolic',74,1583592),
>(278975701,'04/Apr/2018','Diastolic',75,1583593),
>(278975701,'04/Apr/2018','Diastolic',85,1583594);
>
>WITH
>systolic AS (
>SELECT
>row_number() OVER (order by id) AS row,
>*
>FROM
>pressure
>WHERE
>term='Systolic'
>),
>diastolic AS (
>SELECT
>row_number() OVER (order by id) AS row,
>*
>FROM
>pressure
>WHERE
>term='Diastolic'
>)
>SELECT
>d.userid AS User,
>d.date AS Date,
>s.reading || '/' || d.reading AS Reading
>FROM
>systolic s
>LEFT JOIN
>diastolic d
>ON
>d.row=s.row
>ORDER BY
>d.userid,
>d.date,
>d.id
>;
>
>I'm sure others will produce more elegant solutions - I was just using
>your question as an exercise to learn about window functions. The query
>plan of the above looks a little scary though:
>
> id   parent  notused  detail
> 20   0CO-ROUTINE 1
> 92   0CO-ROUTINE 4
> 12   9   0SCAN TABLE pressure
> 26   2   0SCAN SUBQUERY 4
> 72   0   0MATERIALIZE 2
> 79   72  0CO-ROUTINE 5
> 82   79  0SCAN TABLE pressure
> 97   72  0SCAN SUBQUERY 5
> 144  0   0SCAN SUBQUERY 1 AS s
> 159  0   0SEARCH SUBQUERY 2 AS d USING AUTOMATIC
>COVERING INDEX
>   (row=?)
> 180  0   0USE TEMP B-TREE FOR ORDER BY
>
>Presumably if you index the date/userid and filter the CTE tables it
>gets a little better.
>
>--
>Mark Lawrence





Re: [sqlite] How to group this?

2020-02-09 Thread Keith Medcalf
  select id,
         entry_date,
         max(case when term == 'Systolic' then reading end) as Systolic,
         max(case when term == 'Diastolic' then reading end) as Diastolic
    from the_table
   group by id, entry_date
;

should be

  select id,
         entry_date,
         max(case when term == 'Systolic' then reading end)
         || '/' ||
         max(case when term == 'Diastolic' then reading end) as Pressure
    from the_table
   group by id, entry_date
;
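
A condensed check of the corrected query (Python's sqlite3; `the_table` is the placeholder table name from the snippet, and only one reading pair per date is used here, since the GROUP BY collapses multiple pairs on the same date into the max of each term):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE the_table (id, entry_date, term, reading)")
con.executemany("INSERT INTO the_table VALUES (?,?,?,?)", [
    (1308, '15/Mar/2013', 'Systolic', 127),
    (1308, '15/Mar/2013', 'Diastolic', 81),
    (1308, '27/Jun/2013', 'Systolic', 143),
    (1308, '27/Jun/2013', 'Diastolic', 94),
])

rows = con.execute("""
    SELECT id, entry_date,
           max(CASE WHEN term == 'Systolic' THEN reading END)
           || '/' ||
           max(CASE WHEN term == 'Diastolic' THEN reading END) AS pressure
      FROM the_table
     GROUP BY id, entry_date
     ORDER BY entry_date
""").fetchall()
```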


>-Original Message-
>From: sqlite-users  On
>Behalf Of Keith Medcalf
>Sent: Sunday, 9 February, 2020 19:17
>To: SQLite mailing list 
>Subject: Re: [sqlite] How to group this?
>
>  select id,
> entry_date,
> max(case when term == 'Systolic' then reading end) as Systolic,
> max(case when term == 'Diastolic' then reading end) as Diastolic
>from the_table
>group by id, entry_date
>;
>
>If you want to make sure you have both terms for a given id/entry_date
>then you need to make it a subquery so you can check for null.
>
>select id,
>   entry_date,
>   systolic || '/' || diastolic as pressure
>  from (
>  select id,
> entry_date,
> max(case when term == 'Systolic' then reading end) as
>Systolic,
> max(case when term == 'Diastolic' then reading end) as
>Diastolic
>from the_table
>group by id, entry_date
>   )
>  where systolic is not null
>and diastolic is not null
>;
>
>Note that (id, entry_date, term) needs to be unique ... or you will just
>get the combination of the max(systolic)/max(diastolic) for the results
>for a given id/entry_date.
>
>--
>The fact that there's a Highway to Hell but only a Stairway to Heaven
>says a lot about anticipated traffic volume.
>
>>-Original Message-
>>From: sqlite-users  On
>>Behalf Of Bart Smissaert
>>Sent: Sunday, 9 February, 2020 17:25
>>To: General Discussion of SQLite Database >us...@mailinglists.sqlite.org>
>>Subject: [sqlite] How to group this?
>>
>>ID ENTRY_DATE TERM NUMERIC_VALUE ROWID
>>
>>1308 15/Mar/2013 Systolic 127 701559
>>1308 15/Mar/2013 Diastolic 81 701568
>>1308 27/Jun/2013 Systolic 132 701562
>>1308 27/Jun/2013 Systolic 141 701563
>>1308 27/Jun/2013 Systolic 143 701564
>>1308 27/Jun/2013 Diastolic 82 701571
>>1308 27/Jun/2013 Diastolic 85 701572
>>1308 27/Jun/2013 Diastolic 94 701573
>>278975701 08/Mar/2018 Systolic 136 1583551
>>278975701 08/Mar/2018 Diastolic 99 1583591
>>278975701 04/Apr/2018 Systolic 119 1583552
>>278975701 04/Apr/2018 Systolic 124 1583553
>>278975701 04/Apr/2018 Systolic 130 1583554
>>278975701 04/Apr/2018 Diastolic 74 1583592
>>278975701 04/Apr/2018 Diastolic 75 1583593
>>278975701 04/Apr/2018 Diastolic 85 1583594
>>
>>These are systolic and diastolic blood pressures for 2 people with the
>>ID's
>>1308 and 278975701, ordered by ID asc, ENTRY_DATE asc, ROWID asc.
>>Systolic and diastolic values are a pair and should be grouped in one
>>row.
>>This is no problem if there is only one pair for one date, but sometimes
>>there are multiple pairs per date.
>>The pairing should be based on the rowid if there are multiple pairs by
>>date, so for ID 1308
>>I should get:
>>
>>127/81
>>132/82
>>141/85
>>143/94
>>
>>What should be the SQL to group like this?
>>
>>RBS
>
>
>





Re: [sqlite] How to group this?

2020-02-09 Thread Keith Medcalf
  select id,
         entry_date,
         max(case when term == 'Systolic' then reading end) as Systolic,
         max(case when term == 'Diastolic' then reading end) as Diastolic
    from the_table
   group by id, entry_date
;

If you want to make sure you have both terms for a given id/entry_date then you 
need to make it a subquery so you can check for null.

select id,
       entry_date,
       systolic || '/' || diastolic as pressure
  from (
        select id,
               entry_date,
               max(case when term == 'Systolic' then reading end) as Systolic,
               max(case when term == 'Diastolic' then reading end) as Diastolic
          from the_table
         group by id, entry_date
       )
 where systolic is not null
   and diastolic is not null
;

Note that (id, entry_date, term) needs to be unique ... or you will just get 
the combination of the max(systolic)/max(diastolic) for the results for a given 
id/entry_date.


>-Original Message-
>From: sqlite-users  On
>Behalf Of Bart Smissaert
>Sent: Sunday, 9 February, 2020 17:25
>To: General Discussion of SQLite Database us...@mailinglists.sqlite.org>
>Subject: [sqlite] How to group this?
>
>ID ENTRY_DATE TERM NUMERIC_VALUE ROWID
>
>1308 15/Mar/2013 Systolic 127 701559
>1308 15/Mar/2013 Diastolic 81 701568
>1308 27/Jun/2013 Systolic 132 701562
>1308 27/Jun/2013 Systolic 141 701563
>1308 27/Jun/2013 Systolic 143 701564
>1308 27/Jun/2013 Diastolic 82 701571
>1308 27/Jun/2013 Diastolic 85 701572
>1308 27/Jun/2013 Diastolic 94 701573
>278975701 08/Mar/2018 Systolic 136 1583551
>278975701 08/Mar/2018 Diastolic 99 1583591
>278975701 04/Apr/2018 Systolic 119 1583552
>278975701 04/Apr/2018 Systolic 124 1583553
>278975701 04/Apr/2018 Systolic 130 1583554
>278975701 04/Apr/2018 Diastolic 74 1583592
>278975701 04/Apr/2018 Diastolic 75 1583593
>278975701 04/Apr/2018 Diastolic 85 1583594
>
>These are systolic and diastolic blood pressures for 2 people with the
>ID's
>1308 and 278975701, ordered by ID asc, ENTRY_DATE asc, ROWID asc.
>Systolic and diastolic values are a pair and should be grouped in one
>row.
>This is no problem if there is only one pair for one date, but sometimes
>there are multiple pairs per date.
>The pairing should be based on the rowid if there are multiple pairs by
>date, so for ID 1308
>I should get:
>
>127/81
>132/82
>141/85
>143/94
>
>What should be the SQL to group like this?
>
>RBS





[sqlite] vsv module documentation

2020-02-08 Thread Keith Medcalf

Simon,

I hope you don't mind me sending this to you directly, but what do you think of 
the following as the VSV documentation:

/*
** 2020-02-08 modified by Keith Medcalf who also disclaims all copyright
** on the modifications and hereby places this code in the public domain
**
** This file contains the implementation of an SQLite virtual table for
** reading VSV (Variably Separated Values), which are like CSV files,
** but subtly different.  VSV supports a number of extensions to the
** CSV format as well as more processing options.
**
**
** Usage:
**
**  create virtual table temp.vsv using vsv(...);
**  select * from vsv;
**
** The parameters to the vsv module (the vsv(...) part) are as follows:
**
**  filename=STRING the filename, passed to the Operating System
**  data=STRING alternative data
**  schema=STRING alternate schema to use
**  columns=N   number of columns parsed from the VSV file
**  header=BOOL whether or not a header row is present
**  skip=N  number of leading data rows to skip
**  rsep=STRING record separator
**  fsep=STRING field separator
**  validatetext=BOOL   validate UTF-8 encoding of text fields
**  affinity=AFFINITY   affinity to apply to each returned value
**
**
** Defaults:
**
**  filename / data nothing.  You must provide one or the other
**  it is an error to provide both or neither
**  schema  nothing.  If not provided then one will be
**  generated for you from the header, or if no
**  header is available then autogenerated using
**  field names manufactured as cX where X is the
**  column number
**  columns nothing.  If not specified then the number of
**  columns is determined by counting the fields
**  in the first record of the VSV file (which
**  will be the header row if header is specified),
**  the number of columns is not parsed from the
**  schema even if one is provided
**  header=no   no header row in the VSV file
**  skip=0  do not skip any data rows in the VSV file
**  fsep=','default field separator is a comma
**  rsep='\n'   default record separator is a newline
**  validatetext=no do not validate text field encoding
**  affinity=none   do not apply affinity to each returned value
**
**
** Parameter types:
**
**  STRING  means a quoted string
**  N   means a whole number not containing a sign
**  BOOLmeans something that evaluates as true or false
**  it is case insensitive
**  yes, no, true, false, 1, 0
**  AFFINITYmeans an SQLite3 type specification
**  it is case insensitive
**  none, blob, text, integer, real, numeric
**
** STRING means a quoted string.  The quote character may be either
** a single quote or a double quote.  Two quote characters in a row
** will be replaced with a single quote character.  STRINGS do not
** need to be quoted if it is obvious where they begin and end
** (that is, they do not contain a comma).  Leading and trailing
** spaces will be trimmed from unquoted strings.
**
**filename =./this/filename.here, ...
**filename =./this/filename.here , ...
**filename = ./this/filename.here, ...
**filename = ./this/filename.here , ...
**filename = './this/filename.here', ...
**filename = "./this/filename.here", ...
**
**  are all equivalent.
**
** BOOL defaults to true so the following specifications are all the
** same:
**
**  header = true
**  header = yes
**  header = 1
**  header
**
**
** Specific Parameters:
**
** The platform/compiler/OS fopen call is responsible for interpreting
** the filename.  It may contain anything recognized by the OS.
**
** The separator is a string containing exactly one character, or a valid
** escape sequence.  Recognized escape sequences are:
**
**  \t  horizontal tab, ascii character 9 (0x09)
**  \n  linefeed, ascii character 10 (0x0a)
**  \v  vertical tab, ascii character 11 (0x0b)
**  \f  form feed, ascii character 12 (0x0c)
**  \xhhspecific byte where hh is hexadecimal
**
** For the affinity setting, the following processing is applied to
** each value returned by the VSV virtual table:
**
**  noneno affinity is applied, all fields will be
**  returned as text just like in the original
**  csv module, embedded nulls will terminate
**  the text.  if validatetext is in effect then
**  an error will be thrown if the field does
**  not contain validly encoded text or contains
**  embedded nulls
**
**  blob

Re: [sqlite] [EXTERNAL] Re: Patch: VTable Column Affinity Question and Change Request (Withdrawn)

2020-02-07 Thread Keith Medcalf

I don't think that patch to apply affinities is a good idea: it will usually be 
mostly useless and will negatively impact performance, since one can and should 
assume that the actual author of the VTable knows what they are doing.  
Currently SQLite3 does not enforce declared VTable column affinities, and it 
should stay that way.  That said, although the definitions of the column 
affinities are stored in the internal schema, it is not trivial to access them 
from a non-builtin extension.

Instead I have once again changed the vsv.c VTable plugin so that it does some 
more useful things.

The below parameters work exactly as they did in the csv.c extension with 
regards to setting the functionality with the single exception that now if you 
specify all of header, schema and columns, the header row will still be 
skipped.  In csv.c if you specified both schema and columns then the header row 
was not skipped even if you specified it existed.

filename=FILENAME is the csv filename in quotes.
data=DATA provides the data inline.
header=BOOL   will "parse" the first row to get the column names unless a 
SCHEMA is specified.
schema=STRING allows the specification of the schema to use
columns=N number of columns in VTable


New parameters:


skip=N 

will skip the specified number of rows at the start of the CSV file, exclusive 
of the header row (ie, skip data rows). If the header, via the header option, 
is specified to exist then it will always be skipped. 


fsep=SEP and rsep=SEP

allow specifying the particular separators to be used.  The defaults are 
fsep=',' and rsep='\n' so the default is the same as the original.  The 
separator can be any single character, or it can be a supported escape code (\t 
\v \n \f for horizontal-tab, vertical-tab, newline and form-feed), or it can be 
a hexadecimal coded byte as in '\x1f'.  Multibyte separators cannot be 
specified.


affinity=AFF 

where the affinity can be NONE, BLOB, TEXT, INTEGER, REAL, NUMERIC with the 
default being NONE.  NONE does nothing different from the existing csv.c VTable 
(unless you also specify validatetext).  However, if you specify an affinity it 
applies to *every* returned value and makes the following changes:
 - BLOB returns all data as BLOB
 - TEXT returns all data as TEXT
 - INTEGER returns anything that looks like an integer as an INTEGER otherwise 
as TEXT
 - REAL returns anything that looks like a number as a REAL (double) otherwise 
as TEXT
 - NUMERIC returns anything that looks like a number as either an INTEGER or 
REAL otherwise as TEXT
   - if the field "looks like an integer" then an INTEGER is returned
   - if the field "looks like a number" then a REAL is returned unless it can 
be converted to an integer
"looks like an integer" effectively means passing the regex
"^ *[+-]?\d+ *$"
"looks like a number" effectively means passing the regex
"^ *[+-]?(\d+\.?\d*|\d*\.?\d+)([eE][+-]?\d+)? *$"
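Under those two rules, the affinity handling described above can be sketched in Python (the real vsv.c does this detection with C library routines, not regexes, so edge cases may differ):

```python
import re

# Sketch of the detection rules described above; the actual vsv.c uses
# C library parsing rather than regexes, so behaviour may differ at the edges.
INTEGER_RE = re.compile(r"^ *[+-]?\d+ *$")
NUMBER_RE = re.compile(r"^ *[+-]?(\d+\.?\d*|\d*\.?\d+)([eE][+-]?\d+)? *$")

def apply_affinity(field, affinity):
    """Return the value a given affinity setting would yield for a text field."""
    if affinity == "integer" and INTEGER_RE.match(field):
        return int(field)
    if affinity == "real" and NUMBER_RE.match(field):
        return float(field)
    if affinity == "numeric":
        if INTEGER_RE.match(field):
            return int(field)
        if NUMBER_RE.match(field):
            f = float(field)
            return int(f) if f.is_integer() else f
    return field  # otherwise the field is returned as TEXT

print(apply_affinity(" +7 ", "integer"))   # 7
print(apply_affinity("1e2", "numeric"))    # 100
print(apply_affinity("3 or 4", "numeric")) # '3 or 4'
```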

Processing is limited by the runtime C library so the detection is not quite so 
good as the affinities built into the SQLite3 core.  This means that something 
that looks like an integer may fail to load as an integer because it is too big 
(918273745985736365575984857636253857564363 looks like an integer and the 
SQLite3 core will recognize this and load it as a double.  The C library is not 
so smart and will probably simply return MAX_INT).

Processing is also limited by the compiler.  Some compilers (wave at Microsoft) 
do not treat long double as being more precise than a regular double (ie, 10 or 
16 bytes rather than 8) and treat that simply as a plain old double (some 
compilers, like Intel, need a compilation option to enable long doubles being 
actually longer than a regular double).  It is detected if this is the case 
(sizeof(long double)==sizeof(double)) so that only "integers" that can be 
entirely contained in the mantissa are converted to integer, and others will 
remain as double.
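The mantissa limit can be illustrated with Python, whose float is a C double (assuming IEEE-754 binary64 with a 53-bit mantissa):

```python
import sys

# A C double has a 53-bit mantissa (IEEE-754 binary64), so whole numbers
# up to 2**53 round-trip exactly; above that, adjacent integers collapse.
assert sys.float_info.mant_dig == 53
assert float(2**53) == float(2**53 + 1)      # precision already lost
assert float(2**53 - 1) != float(2**53 - 2)  # still exact below the limit

# The value from the text is far beyond a signed 64-bit integer, so a
# C strtoll-style parse would overflow even though it "looks like an integer".
assert 918273745985736365575984857636253857564363 > 2**63 - 1
print("only integers that fit the mantissa convert losslessly")
```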


validatetext=BOOL

will cause TEXT fields to be validated as containing a valid UTF-8 coding 
sequence (no content check is performed, only the validity of the encoding is 
checked).  If this is turned on together with any type of affinity (ie, other 
than none) then improperly encoded UTF8 text will be returned as a BLOB.  If no 
affinity is in effect (ie, affinity=none) then an error will be thrown rather 
than letting a text field contain invalid data.  This means that fields 
containing embedded nulls (rather than encoded 0 bytes) will not be able to be 
stored as TEXT and will either be BLOB type or throw an invalid encoding error.
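The check amounts to "decodes as UTF-8 and contains no embedded NUL byte"; a hypothetical helper (not the actual vsv.c code) makes the rule concrete:

```python
def is_valid_text(field: bytes) -> bool:
    """True if the field is validly UTF-8 encoded and free of embedded NULs."""
    if b"\x00" in field:          # embedded null byte: not valid TEXT
        return False
    try:
        field.decode("utf-8")     # raises on an invalid encoding sequence
        return True
    except UnicodeDecodeError:
        return False

print(is_valid_text("héllo".encode("utf-8")))  # True
print(is_valid_text(b"\xff\xfe"))              # False: invalid UTF-8
print(is_valid_text(b"ab\x00cd"))              # False: embedded NUL
```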


nulls=BOOL

when enabled, empty fields will be returned as NULL rather than as empty 
strings.  An empty field means a field with no content (the separators are 
side-by-each).  Explicitly quoted empty strings are not changed.  That is, the 
middle column in a,,b is NULL, but the middle column in a,"",b is not.
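The distinction can be sketched with a minimal splitter (illustrative only: single-character separator, no escaped-quote handling):

```python
def parse_fields(record, fsep=","):
    """Split a record, remembering whether each field was quoted."""
    fields = []
    for raw in record.split(fsep):
        quoted = len(raw) >= 2 and raw[0] == '"' and raw[-1] == '"'
        fields.append((raw[1:-1] if quoted else raw, quoted))
    return fields

def apply_nulls(record):
    # nulls=on: an unquoted empty field becomes NULL; "" stays an empty string
    return [None if value == "" and not quoted else value
            for value, quoted in parse_fields(record)]

print(apply_nulls('a,,b'))    # ['a', None, 'b']
print(apply_nulls('a,"",b'))  # ['a', '', 'b']
```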

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

Re: [sqlite] VTable Column Affinity Question and Change Request

2020-02-05 Thread Keith Medcalf

On Wednesday, 5 February, 2020 18:10, Jens Alfke :

>> On Feb 5, 2020, at 9:58 AM, Keith Medcalf  wrote:

>> It seems that "column affinities" are not respected in Virtual Table
>> implementations -- that is the value that is returned is the datatype
>> provided by the vtab_cursor sqlite3_result_* function and the "column
>> affinity" from the vtab declaration is not applied.

> The vtab implementation is responsible for generating the CREATE TABLE
> statement and passing it to sqlite3_declare_vtab(). It’s also responsible
> for returning column values. So I think the assumption is that it’s up to
> the implementation to be self-consistent, i.e. returning column values
> that match the declaration.

That would make perfect sense except that the documentation for the 
sqlite3_declare_vtab function specifically states that only the column name and 
type affinity are used, and that other things (ie, constraints, defaults, etc) 
that may be supplied in a vtab declaration are ignored.  Why go to all the 
bother of parsing the affinity and allowing it to be supplied if it is not used?

> I haven’t used the CSV vtable. It looks as though its CREATE VIRTUAL
> TABLE statement takes a ‘schema’ parameter containing the SQL table
> declaration, that it then passes straight through to
> sqlite3_declare_vtab(). It probably doesn’t parse that declaration or
> figure out from it what the declared types of the columns are.

Actually it does, and this is documented for the sqlite3_declare_vtab function 
-- and in fact the column names and affinities are parsed and stored in the 
internal Table schema.  The column names are used but the executed VDBE program 
does not "apply affinity".

>In other words this looks like a limitation of the CSV implementation,
>which is perhaps unusual in that it is not in control of the table schema
>it declares.

Yes it is, but easily fixed.  As far as I can tell it should not affect vtabs 
that are part of the SQLite3 distribution, but I do not know if there are 
third-party virtual tables that specify column affinities in the vtab 
declarations rather than just use blob (or no affinity).

I would expect that most vtab writers write consistent code and this would 
not be an issue at all.  The CSV vtab is kind of a special case where it is 
returning arbitrary external data over which it has no control so the 
application of affinity is probably worthwhile.  In cases where it does not 
matter what the actual preferred value affinity is, the vtab can still be 
declared as blob affinity (ie, no affinity applied).

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume. 





Re: [sqlite] Patch: VTable Column Affinity Question and Change Request

2020-02-05 Thread Keith Medcalf

Patch to Fix Column Affinity not applied to Virtual Columns.

In expr.c function sqlite3ExprCodeGetColumnOfTable
At or about line 3555 where the OP_Column or OP_VColumn opcode is added to the 
VDBE program, and the default code is generated, make this:

sqlite3VdbeAddOp3(v, op, iTabCur, x, regOut);
sqlite3ColumnDefault(v, pTab, iCol, regOut);

look like this:

sqlite3VdbeAddOp3(v, op, iTabCur, x, regOut);
if ( IsVirtual(pTab) && pTab->aCol[iCol].affinity > SQLITE_AFF_BLOB)
  sqlite3VdbeAddOp4(v, OP_Affinity, regOut, 1, 0, 
&(pTab->aCol[iCol].affinity), 1);
sqlite3ColumnDefault(v, pTab, iCol, regOut);

Of course, it may be that the writer of the VTable should know what they are 
doing and generate a VTable definition that is consistent with how their cursor 
methods return data, however ... this will omit the OP_Affinity if no column 
type was specified when the VTable was defined and most of the VTable 
declarations in the existing code that I looked at do not specify column 
affinities in the declarations.

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Keith Medcalf
>Sent: Wednesday, 5 February, 2020 10:58
>To: SQLite Users (sqlite-users@mailinglists.sqlite.org) us...@mailinglists.sqlite.org>
>Subject: [sqlite] VTable Column Affinity Question and Change Request
>
>
>It seems that "column affinities" are not respected in Virtual Table
>implementations -- that is the value that is returned is the datatype
>provided by the vtab_cursor sqlite3_result_* function and the "column
>affinity" from the vtab declaration is not applied.  In effect the column
>affinity specified in the vtab declaration seems to be ignored (or
>treated as none/blob) no matter what the declaration.
>
>Somehow, I don't think this was always the case but I could be wrong.  In
>any case, what is the point in specifying the column affinity in the vtab
>declaration if it is just going to be ignored?
>
>Example, using the current tip of trunk and the ext\misc\csv.c extension
>with the following input file:
>
>a,b,c,d
>1,2,3,4
>2,3 or 4,4,5
>3,4,5,6
>4,5,6,7
>
>SQLite version 3.32.0 2020-02-05 16:13:24
>Enter ".help" for usage hints.
>Connected to a transient in-memory database.
>Use ".open FILENAME" to reopen on a persistent database.
>sqlite> create virtual table t using csv(filename='t.csv', header=on);
>sqlite> .mode col
>sqlite> .head on
>
>sqlite> select typeof(a), a, typeof(b), b, typeof(c), c, typeof(d), d
>from t;
>
>typeof(a)   a       typeof(b)   b           typeof(c)   c       typeof(d)   d
>----------  ------  ----------  ----------  ----------  ------  ----------  ------
>text        1       text        2           text        3       text        4
>text        2       text        3 or 4      text        4       text        5
>text        3       text        4           text        5       text        6
>text        4       text        5           text        6       text        7
>
>sqlite> pragma table_xinfo(t);
>
>cid  name  type     aff      coll  notnull  dflt_value  pk  rowid  autoinc  hidden
>---  ----  -------  -------  ----  -------  ----------  --  -----  -------  ------
>-1         INTEGER                 0                    1   1      0        1
>0    a     TEXT     TEXT           0                    0   0      0        0
>1    b     TEXT     TEXT           0                    0   0      0        0
>2    c     TEXT     TEXT           0                    0   0      0        0
>3    d     TEXT     TEXT           0                    0   0      0        0
>
>sqlite> drop table t;
>sqlite> create virtual table t using csv(filename='t.csv', header=off,
>schema='create table t(a numeric, b numeric, c numeric, d numeric)');
>
>sqlite> select typeof(a), a, typeof(b), b, typeof(c), c, typeof(d), d
>from t;
>
>typeof(a)   a       typeof(b)   b           typeof(c)   c       typeof(d)   d
>----------  ------  ----------  ----------  ----------  ------  ----------  ------
>text        a       text        b           text        c       text        d
>text        1       text        2           text        3       text        4
>text        2       text        3 or 4  te

[sqlite] VTable Column Affinity Question and Change Request

2020-02-05 Thread Keith Medcalf

It seems that "column affinities" are not respected in Virtual Table 
implementations -- that is the value that is returned is the datatype provided 
by the vtab_cursor sqlite3_result_* function and the "column affinity" from 
the vtab declaration is not applied.  In effect the column affinity specified 
in the vtab declaration seems to be ignored (or treated as none/blob) no matter 
what the declaration.

Somehow, I don't think this was always the case but I could be wrong.  In any 
case, what is the point in specifying the column affinity in the vtab 
declaration if it is just going to be ignored?

Example, using the current tip of trunk and the ext\misc\csv.c extension with 
the following input file:

a,b,c,d
1,2,3,4
2,3 or 4,4,5
3,4,5,6
4,5,6,7

SQLite version 3.32.0 2020-02-05 16:13:24
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> create virtual table t using csv(filename='t.csv', header=on);
sqlite> .mode col
sqlite> .head on

sqlite> select typeof(a), a, typeof(b), b, typeof(c), c, typeof(d), d from t;

typeof(a)   a       typeof(b)   b           typeof(c)   c       typeof(d)   d
----------  ------  ----------  ----------  ----------  ------  ----------  ------
text        1       text        2           text        3       text        4
text        2       text        3 or 4      text        4       text        5
text        3       text        4           text        5       text        6
text        4       text        5           text        6       text        7

sqlite> pragma table_xinfo(t);

cid  name  type     aff      coll  notnull  dflt_value  pk  rowid  autoinc  hidden
---  ----  -------  -------  ----  -------  ----------  --  -----  -------  ------
-1         INTEGER                 0                    1   1      0        1
0    a     TEXT     TEXT           0                    0   0      0        0
1    b     TEXT     TEXT           0                    0   0      0        0
2    c     TEXT     TEXT           0                    0   0      0        0
3    d     TEXT     TEXT           0                    0   0      0        0

sqlite> drop table t;
sqlite> create virtual table t using csv(filename='t.csv', header=off, 
schema='create table t(a numeric, b numeric, c numeric, d numeric)');

sqlite> select typeof(a), a, typeof(b), b, typeof(c), c, typeof(d), d from t;

typeof(a)   a       typeof(b)   b           typeof(c)   c       typeof(d)   d
----------  ------  ----------  ----------  ----------  ------  ----------  ------
text        a       text        b           text        c       text        d
text        1       text        2           text        3       text        4
text        2       text        3 or 4      text        4       text        5
text        3       text        4           text        5       text        6
text        4       text        5           text        6       text        7

sqlite> pragma table_xinfo(t);

cid  name  type     aff      coll  notnull  dflt_value  pk  rowid  autoinc  hidden
---  ----  -------  -------  ----  -------  ----------  --  -----  -------  ------
-1         INTEGER                 0                    1   1      0        1
0    a     numeric  NUMERIC        0                    0   0      0        0
1    b     numeric  NUMERIC        0                    0   0      0        0
2    c     numeric  NUMERIC        0                    0   0      0        0
3    d     numeric  NUMERIC        0                    0   0      0        0

(note that the pragma table_xinfo is my slightly modified version that shows 
some additional information from the schema object)

If I put the rows generated by the virtual table into a similarly declared temp 
table, I get the expected result:

sqlite> create temporary table u(a numeric, b numeric, c numeric, d numeric);
sqlite> insert into u select * from t;

sqlite> select typeof(a), a, typeof(b), b, typeof(c), c, typeof(d), d from u;

typeof(a)   a       typeof(b)   b   typeof(c)   c   typeof(d)   d
----------  ------  ----------  --  ----------  --  ----------  --
text        a       text   

Re: [sqlite] csv extension/feature requests: new options for field separator different from default ', ' and skip=N to skip first N lines

2020-02-05 Thread Keith Medcalf

On Tuesday, 4 February, 2020 22:31, Keith Medcalf  wrote:

The vsv.c (source) file line counting is now verified and I have added a skip= 
parameter.  Adding skip was far easier than variable separators ...

Same location, file updated:  http://www.dessus.com/files/vsv.c
The complete collection:  http://www.dessus.com/files/sqlite3extensions.zip

Using tbartilde.csv containing:
a|b|c|d~1|2|3|4~2|"3|5"|4|5~3|4|5|6~4|5|6|7

SQLite version 3.32.0 2020-02-05 05:21:57
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> create virtual table t using vsv(filename='tbartilde.csv', header=on, 
fsep='|', rsep='~');
sqlite> .mode col
sqlite> .head on
sqlite> select * from t;
a   b   c   d
--  --  --  --
1   2   3   4
2   3|5 4   5
3   4   5   6
4   5   6   7

sqlite> drop table t;
sqlite> create virtual table t using vsv(filename='tbartilde.csv', header=on, 
fsep='|', rsep='~', skip=3);
sqlite> select * from t;
a   b   c   d
--  --  --  --
4   5   6   7

sqlite> drop table t;
sqlite> create virtual table t using vsv(filename='tbartilde.csv', header=on, 
fsep='|', rsep='~', skip=4);
sqlite> select * from t;
sqlite> drop table t;
sqlite> create virtual table t using vsv(filename='tbartilde.csv', header=on, 
fsep='|', rsep='~', skip=5);
Error: premature end of file during skip
sqlite>



>On Tuesday, 4 February, 2020 05:19, Robert M. Münch 
> wrote:
>
>>On 3 Dec 2019, at 16:10, Jannick wrote:
>
>>> would it be possible to add to the csv extension the following
>>> parameter options (with syntax along the lines of):
>
>>> - sep=';': field separator character (different from default ',')
>>> - skip=N: skip the first N lines
>
>>> Both options would be very helpful for information of fixed format
>>> downloaded from information providers.
>
>>Hi, those would be very useful enhancements.
>
>I did some of the changes and you can get the resulting extension called
>VSV (Variably Separated Values) from:
>
>http://www.dessus.com/files/vsv.c
>
>I added the facility to specify the field and record separator
>characters.  I did not add skip, though I may look at adding that too,
>but one can simply use the OFFSET in SQL to ignore some rows at the
>beginning of the file.  I made some other changes also which makes this
>non-compliant with the RFC.
>
>Data between the "field separator" markers can consist of any arbitrary
>string of bytes that DOES NOT include the field or record separator
>bytes.
>Data between the double-quotes can consist of any arbitrary string of
>bytes except that double-quotes must be escaped by doubling them.
>
>The added parameters are fsep=SEPARATOR and rsep=SEPARATOR for the field
>and record separators respectively.
>
>SEPARATOR is a single quoted string that may be in the following formats:
>
>'x'where x is any arbitrary byte and will be used as the separator
>character.
>'\x'   for standard escape codes (tab = \t, vtab = \v, Formfeed = \f,
>Newline = \n).
>'\xhh' where hh is the hexadecimal code for the byte to use.
>
>defaults if not specified are fsep=',' and rsep='\n'
>
>so to read the following file:
>
>a|b|c|d~1|2|3|4~2|"3|5"|4|5~3|4|5|6~4|5|6|7
>
>you can use the following commands:
>
>SQLite version 3.32.0 2020-02-05 02:43:27
>Enter ".help" for usage hints.
>Connected to a transient in-memory database.
>Use ".open FILENAME" to reopen on a persistent database.
>sqlite> create virtual table tbartilde using
>vsv(filename='tbartilde.csv', header=on, fsep='|', rsep='~');
>sqlite> .mode col
>sqlite> .head on
>sqlite> select * from tbartilde;
>a   b   c   d
>--  --  --  --
>1   2   3   4
>2   3|5 4   5
>3   4   5   6
>4   5   6   7
>
>Might still have some line counting errors and haven't figured out how to
>implement skip yet ...

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume. 





Re: [sqlite] csv extension/feature requests: new options for field separator different from default ', ' and skip=N to skip first N lines

2020-02-04 Thread Keith Medcalf

On Tuesday, 4 February, 2020 05:19, Robert M. Münch 
 wrote:

>On 3 Dec 2019, at 16:10, Jannick wrote:

>> would it be possible to add to the csv extension the following
>> parameter options (with syntax along the lines of):

>> - sep=';': field separator character (different from default ',')
>> - skip=N: skip the first N lines

>> Both options would be very helpful for information of fixed format
>> downloaded from information providers.

>Hi, those would be very useful enhancements.

I did some of the changes and you can get the resulting extension called VSV 
(Variably Separated Values) from:

http://www.dessus.com/files/vsv.c

I added the facility to specify the field and record separator characters.  I 
did not add skip, though I may look at adding that too, but one can simply use 
the OFFSET in SQL to ignore some rows at the beginning of the file.  I made 
some other changes also which makes this non-compliant with the RFC.

Data between the "field separator" markers can consist of any arbitrary string 
of bytes that DOES NOT include the field or record separator bytes.
Data between the double-quotes can consist of any arbitrary string of bytes 
except that double-quotes must be escaped by doubling them.

The added parameters are fsep=SEPARATOR and rsep=SEPARATOR for the field and 
record separators respectively.

SEPARATOR is a single quoted string that may be in the following formats:

'x'where x is any arbitrary byte and will be used as the separator 
character.
'\x'   for standard escape codes (tab = \t, vtab = \v, Formfeed = \f, Newline = 
\n).
'\xhh' where hh is the hexadecimal code for the byte to use.

defaults if not specified are fsep=',' and rsep='\n'
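A sketch of decoding those three SEPARATOR forms (hypothetical Python helper; vsv.c does the equivalent in C):

```python
# Standard escape codes recognized for separators, per the list above.
ESCAPES = {r"\t": "\t", r"\v": "\v", r"\f": "\f", r"\n": "\n"}

def decode_separator(spec):
    """Decode 'x', a standard escape, or a '\\xhh' hexadecimal byte."""
    if spec in ESCAPES:
        return ESCAPES[spec]
    if spec.startswith(r"\x") and len(spec) == 4:
        return chr(int(spec[2:], 16))   # e.g. r'\x1f' -> unit separator
    if len(spec) == 1:
        return spec                     # any arbitrary single byte
    raise ValueError("separator must be a single character: %r" % spec)

print(repr(decode_separator("|")))      # '|'
print(repr(decode_separator(r"\t")))    # '\t'
print(repr(decode_separator(r"\x1f")))  # '\x1f'
```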

so to read the following file:

a|b|c|d~1|2|3|4~2|"3|5"|4|5~3|4|5|6~4|5|6|7

you can use the following commands:

SQLite version 3.32.0 2020-02-05 02:43:27
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> create virtual table tbartilde using vsv(filename='tbartilde.csv', 
header=on, fsep='|', rsep='~');
sqlite> .mode col
sqlite> .head on
sqlite> select * from tbartilde;
a   b   c   d
--  --  --  --
1   2   3   4
2   3|5 4   5
3   4   5   6
4   5   6   7

Might still have some line counting errors and haven't figured out how to 
implement skip yet ...

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.





Re: [sqlite] Please increase the default for SQLITE_MAX_VARIABLE_NUMBER

2020-02-04 Thread Keith Medcalf

On Tuesday, 4 February, 2020 17:23, J. King  wrote:

>Not everyone has access to carrays and intarrays, either, such as PHP
>users like myself.

Then you should probably be creating a temporary table and using that:

begin immediate;
create temporary table inlist(x primary key) without rowid;
insert into temp.inlist values (?); -- for each value you want in your IN list, 
one after each
commit;
select ... from ... where x IN (select x from temp.inlist);
drop table temp.inlist;


In Python one would do something like:

bloodybiglist = [...]
cursor.execute('begin immediate;')
cursor.execute('create temporary table inlist(x primary key) without rowid;')
cursor.executemany('insert into temp.inlist values (?);', [(x,) for x in bloodybiglist])
cursor.execute('commit;')
cursor.execute('select ... from ... where x in (select x from temp.inlist);')
cursor.execute('drop table temp.inlist;')

In other interface wrappers you would still have to bind the parameters one at 
a time.  Cannot you prepare a statement in PHP (eg, the insert statement) and 
loop though binding the parameters and executing the prepared statement one 
binding parameter by each?

eg:

stmt = prepare('insert into temp.inlist values (?);')
for item in bloodybiglist:
  stmt.bind(1, item)
  stmt.execute()
  stmt.reset()
  
Which is what executemany does behind the curtain ...
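Putting the whole recipe together as a runnable sketch with the standard sqlite3 module (table and column names are made up for illustration; autocommit mode is used so the explicit BEGIN works):

```python
import sqlite3

db = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
db.execute("create table data(x integer primary key, payload text)")
db.executemany("insert into data values (?, ?)",
               [(i, "row%d" % i) for i in range(10)])

bloodybiglist = [2, 3, 5, 7]  # stands in for a list too big to bind inline
db.execute("begin immediate")
db.execute("create temporary table inlist(x primary key) without rowid")
db.executemany("insert into temp.inlist values (?)",
               [(x,) for x in bloodybiglist])
db.execute("commit")
rows = db.execute("select payload from data "
                  "where x in (select x from temp.inlist) order by x").fetchall()
db.execute("drop table temp.inlist")
print(rows)  # [('row2',), ('row3',), ('row5',), ('row7',)]
```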

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.





Re: [sqlite] Please increase the default for SQLITE_MAX_VARIABLE_NUMBER

2020-02-04 Thread Keith Medcalf

On Tuesday, 4 February, 2020 12:14, Deon Brewis  wrote:

>WHERE x IN (?1,?2,?3,?4...,?1000 )

That would be a really silly construct to use.  Why are you bothering to name 
all the parameters?  Anonymous parameters are merely an array of pointers to 
values.  When you give the parameters names then a linkage between the "name" 
and the "position" needs to be kept, as well as a hash table so that the "name" 
can be looked up.  When you refer to duplicate anonymous parameters you have to 
use a name for the one of them that is not the next anonymous parameter in 
line, but sheesh, naming them all?  Why?

>And the IN clause is filled by a list or array that's held inside the
>calling application memory rather than in SQLITE.

The VDBE program still needs an array of pointers for the whole used parameter range 
(from 1 to the highest parameter used).  That means that if you use something 
like:

select ?, ?100;

then the VDBE program will allocate an array to hold 100 parameter pointers 
(assuming that number of parameters were allowed).  And bind parameters are 
only stored in application memory if they are TEXT or BLOB type and you make 
the bind call providing a de-allocator (ie, not SQLITE_TRANSIENT) and even then 
if and only if no conversions need to be performed (for example from you 
external encoding to the internal database encoding).

>The alternate to this is to create a virtual table wrapper over the
>internal datasets of the app. Which is of course better, but harder. (We
>need an STL for SQLite. SqliteTL?).

see the carray extension ...

>PS: Doesn't SQLITE internally order an IN list and do a join across it?
>It seems to perform better than I would expect from a flat array.

Sort of.  When you do an IN (?,?,?,?) or IN (value, value, value ...) the 
values or parameters are loaded one after each into a without rowid table 
(effectively, that looks like "create temporary table temptable (variable 
primary key variable) without rowid") and then the table.variable IN 
(...list...) is treated as a "table JOIN temptable ON table.variable == 
temptable.variable".  NULLS in the IN list are silently discarded of course 
since they cannot be used with IN (which is defined as == not IS for each in 
turn).

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.






Re: [sqlite] unsafe use of virtual table

2020-02-04 Thread Keith Medcalf

This is part of the trusted schema.  

Virtual Tables and Functions can be labeled as DIRECT_ONLY, INNOCUOUS, or 
unlabeled.  

INNOCUOUS virtual tables and functions can be used anywhere they are allowed 
including in the schema and views and indexes and so forth (provided that they 
would otherwise be permitted in that location).  DIRECT_ONLY virtual tables and 
functions can only be used from top-level SQL, ever.

Unlabeled virtual tables and views depend on whether or not the schema is 
trusted.  If the schema containing those things (in views, the schema 
definitions, etc) is untrusted, then those unlabeled virtual tables and 
functions are treated as DIRECT_ONLY.  If the schema is trusted, then there are 
no restrictions on the use of unlabeled virtual tables and functions.  The TEMP 
schema is always trusted since it must have always been created by the 
application/user and cannot have been a "crafted part" of the database.

The dbstat virtual table is DIRECT_ONLY meaning that since 3.30.0 it can only 
be used in top-level (directly issued) SQL and not in a view, even a view 
created in the temp database.

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of tom-sql...@pettymail.com
>Sent: Tuesday, 4 February, 2020 14:35
>To: sqlite-users@mailinglists.sqlite.org
>Subject: [sqlite] unsafe use of virtual table
>
>Hi,
>
>I have noticed a change between 3.30 and 3.31.1 and searched for more
>info on "unsafe use of virtual table" on sqlite.org but could not find
>anything relevant.
>
>In 3.30:
>
>SQLite version 3.30.0 2019-10-04 15:03:17
>Enter ".help" for usage hints.
>Connected to a transient in-memory database.
>Use ".open FILENAME" to reopen on a persistent database.
>sqlite> create view somestats as select name, pgoffset from dbstat;
>sqlite> select * from somestats;
>sqlite_master|0
>
>
>But in 3.31.1:
>
>SQLite version 3.31.1 2020-01-27 19:55:54
>Enter ".help" for usage hints.
>Connected to a transient in-memory database.
>Use ".open FILENAME" to reopen on a persistent database.
>sqlite> create view somestats as select name, pgoffset from dbstat;
>sqlite> select * from somestats;
>Error: unsafe use of virtual table "dbstat"
>
>Could someone point me to where I can find more info on unsafe use of
>virtual tables?
>
>Thanks
>Tom
>





Re: [sqlite] Is there something like PATH to search for extension libraries?

2020-01-31 Thread Keith Medcalf

On Friday, 31 January, 2020 21:15, Peng Yu  wrote:

>I have to specify either a full path (either relative or absolution)
>to use .load. But it would be more convenient if there is something
>like PATH (may be named as something like SQLITE3_LIB_PATH) to search
>for library files. Is it available in sqlite3? Thanks.

>https://www.sqlite.org/loadext.html

This is an Operating System matter, not an SQLite3 issue.

On Windows loadext (.load) uses the standard search path.  I would imagine that 
Linux uses whatever the standard .so file search path is.

See http://man7.org/linux/man-pages/man8/ld.so.8.html
and https://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html

Basically, put the library in a standard location and run ldconfig, or else try 
setting LD_LIBRARY_PATH.

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.






Re: [sqlite] log() in sqlite3

2020-01-31 Thread Keith Medcalf

On Friday, 31 January, 2020 17:59, Peng Yu  wrote:

>How to use extension-functions.c? It means that I have to compile it?

Yes.  Either as a loadable extension or as core builtin functions extending the 
amalgamation.

>How to use it with python?

db.load_extension(path) 
for each connection db into which you want to load the extension (after first 
calling db.enable_load_extension(True)).
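As a hedged sketch (the extension path in the comment is an assumption, not from the original message), loading a compiled extension through Python's bundled sqlite3 module looks like this:

```python
import sqlite3

db = sqlite3.connect(':memory:')

# Extension loading is disabled by default in the sqlite3 wrapper and must
# be switched on per connection; some Python builds omit the feature entirely.
if hasattr(db, 'enable_load_extension'):
    db.enable_load_extension(True)
    # db.load_extension('./extension-functions')  # hypothetical path to the
    #                                             # compiled shared library
    db.enable_load_extension(False)  # good hygiene: turn it back off
```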

>For python, create_function should be a better solution? Thanks.

Mayhaps yes, mayhaps no.  

Depends on your definition of "better".  If you mean "simpler" then the answer 
is yes.  If you mean many orders of magnitude slower than the C version, then 
the answer is also yes.

>>> import sqlite3
>>> db=sqlite3.connect(':memory:')
>>> def log(*arg):
... from math import log
... return log(*arg)
...
>>> db.create_function('log', -1, log)
>>> import math
>>> db.execute('select log(1000)').fetchone()
(6.907755278982137,)
>>> math.log(1000)
6.907755278982137
>>> db.execute('select log(1000,10)').fetchone()
(2.9999999999999996,)
>>> math.log(1000,10)
2.9999999999999996
>>> db.execute('select log(1000,2)').fetchone()
(9.965784284662087,)
>>> math.log(1000,2)
9.965784284662087

sqlite3 also does not let you set a function as deterministic, while APSW does, 
so functions defined in Python using the sqlite3 wrapper are somewhat limited in 
where you can use them.  APSW also lets you write virtual tables and VFSes in 
Python, should you wish.  I don't think sqlite3 can do that.

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume. 





Re: [sqlite] Is there a way to yield an error when opening a nonexistent dbfile?

2020-01-31 Thread Keith Medcalf

You could use the second method (opening the file by handle) if you do not want 
your code to be portable.

Yes, APSW is far superior to sqlite3.  It does not have any "magic" and wraps 
SQLite3 into Python so that it works like SQLite3 works, so the interface works 
as documented for the equivalent "C" code on the sqlite.org website (or on the 
APSW documentation website at https://rogerbinns.github.io/apsw/ and the 
documentation is MUCH more complete than the paltry stuff available for the 
pysqlite2/sqlite3 wrapper).  

The author of APSW also happens to be on this mailing list, and APSW is 
generally updated shortly after a new version of SQLite3 is released (or you 
can compile it yourself if you wish).  If you find a bug it is also highly 
likely that it will be fixed (unlike the pysqlite2/sqlite3 wrapper in Python 
which has not been updated in years and is 20-odd versions behind its parent 
project code).  APSW can be found here:  https://github.com/rogerbinns/apsw

With APSW you simply pass the open flags you want to use on the Connection 
constructor just like you would do in C.

About the only things that APSW does not have that sqlite3 does are the Row 
object and the data adapters and converters.
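The same effect is available from the bundled sqlite3 module via a URI filename (a sketch; with APSW one would instead pass flags such as SQLITE_OPEN_READONLY to the Connection constructor):

```python
import sqlite3

# mode=ro opens the database read-only and refuses to create a missing file,
# so connect() raises instead of silently creating an empty database.
try:
    db = sqlite3.connect('file:/no/such/dir/missing.db?mode=ro', uri=True)
    opened = True
except sqlite3.OperationalError:
    opened = False
print(opened)  # False
```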

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Peng Yu
>Sent: Friday, 31 January, 2020 17:49
>To: SQLite mailing list 
>Subject: Re: [sqlite] Is there a way to yield an error when opening a
>nonexistent dbfile?
>
>What is the recommended way to do so in python?
>
>I see the following two ways.
>
>```
>db = sqlite3.connect('file:/path/to/database?mode=ro', uri=True)
>
>fd = os.open(filename, os.O_RDONLY)
>c = sqlite3.connect('/dev/fd/%d' % fd)
>os.close(fd)
>```
>
>https://stackoverflow.com/questions/10205744/opening-sqlite3-database-
>from-python-in-read-only-mode
>
>But can the first have problems when the filename is actually
>something like 'file:database?mode=ro'?
>
>For my applications, I only use Unix-like platforms. So the second way may
>be better?
>
>Any other better solutions with the python sqlite3 library?
>
>The stackoverflow link also mentions APSW. It seems that it is quite
>updated. Is this a good replacement for the python sqlite3 module in
>general?
>
>Thanks.
>
>> $ sqlite3 -readonly testx.db
>> SQLite version 3.31.1 2020-01-27 19:55:54
>> Enter ".help" for usage hints.
>> sqlite> .tables
>> Error: unable to open database "testx.db": unable to open database file
>
>--
>Regards,
>Peng
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users





Re: [sqlite] Check constrain execution timing change? (Now a bug)

2020-01-31 Thread Keith Medcalf

On Friday, 31 January, 2020 14:39, Simon Slavin  wrote:

>On 31 Jan 2020, at 9:27pm, Keith Medcalf  wrote:

>> You are however correct that this is an "application consistency"
>problem more than an SQLite problem and it is a lot of change for little
>actual benefit.

>How about this ?

>A program (possibly part of the SQLite precompiled suite, possibly from a
>third party) goes through any SQLite database as an integrity checker.
>One of its jobs is to check that column constraints are not violated.
>This cannot possible be wrong.  A SQLite database with violated CHECK
>clauses is, by definition, corrupt.

pragma integrity_check and pragma quick_check already do this.

>However, because the checks are performed on the values input, not the
>values stored, some data in the database does violate a CHECK constraint.

>Can this happen, given the behaviour Keith identified ?  If so, I would
>say that something is wrong.

Yes.  It is possible to "craft" a table and a check constraint such that the 
INSERT passes the constraint yet the integrity check fails:

sqlite> create table x(x text not null check (typeof(x) == 'integer'));
sqlite> insert into x values (1);
sqlite> select x, typeof(x) from x;
1|text
sqlite> pragma integrity_check;
CHECK constraint failed in x

That would elevate this to the status of a bug since it should be impossible to 
do this.

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume. 





Re: [sqlite] Check constrain execution timing change?

2020-01-31 Thread Keith Medcalf

On Friday, 31 January, 2020 13:58, Richard Hipp  wrote:

>On 1/31/20, Keith Medcalf  wrote:

>> The check should occur AFTER defaults and
>> column affinity is applied before the data record is stored

>Why do you think this?  Is it documented somewhere?  I ask because
>your test case gives the same answer (doing the first insert but
>failing the other two) for every version of SQLite I checked from
>trunk going back to 3.1.0 (2005-01-21).  Hence, for backwards
>compatibility, even it is documented to do something differently, I
>should probably change the documentation rather than the behavior.

You are absolutely correct in that this is the way SQLite has worked forever, 
and is the way it is documented (or not, I am not sure).  However, this means 
that the result of a CHECK/NOT NULL constraint differs between an INSERT and 
when run on existing data (via pragma integrity_check).  This means that, for 
example, the construct

CHECK (typeof(x) in ('integer', 'real'))

enforces different table invariants between UPDATE/INSERT and when the data is 
already in the table since in the case of INSERT/UPDATE it checks the type of 
the value input to the UPDATE/INSERT rather than the value actually stored and 
prohibits the INSERT/UPDATE of values that would pass the table checks later.

You are however correct that this is an "application consistency" problem more 
than an SQLite problem and it is a lot of change for little actual benefit.

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.






Re: [sqlite] Check constrain execution timing change?

2020-01-31 Thread Keith Medcalf
>collect input data row
>apply column affinity
>fire before triggers
>apply defaults, generated always, rowid etc.
>apply column affinity to above columns
>run constraints
>store actual row
>fire after triggers

Actually, with generated columns it is a bit more complicated.  I think:

collect input data row
run generated column logic including not allowing specification of values for 
generated columns
apply column affinity
fire before triggers
apply defaults, rowid etc.
apply column affinity to defaults, rowid, etc. changed above
run constraints
store actual row
fire after triggers

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.







[sqlite] Check constrain execution timing change?

2020-01-31 Thread Keith Medcalf

I believe that when inserting a row into a table the CHECK constraints (which 
includes any NOT NULL constraint) are checked at the wrong time, or at least 
with the wrong data.  The check should occur AFTER defaults and column affinity 
are applied, before the data record is stored, meaning that the constraints 
should apply to the row as actually stored.  Consider:

create table x (x integer default ('1') check (typeof(x) == 'integer'));
insert into x values (1);
insert into x values ('1');
insert into x default values;

The last two statements raise constraint errors, even though the values that 
end up in the database would in fact pass the constraint:

create table x (x integer default ('1'));
insert into x values (1);
insert into x values ('1');
insert into x default values;
select x, typeof(x) from x;

1|integer
1|integer
1|integer

Similarly for the NOT NULL column constraint.  It should apply to the data 
actually stored, not to the contents of the INSERT, since the value may be 
modified before it is actually stored (by the workings of the PRIMARY KEY on a 
ROWID table, or a DEFAULT clause, for example).

Presently, the NOT NULL column constraint is applied to the input data, and not 
to the row actually stored:

sqlite> create table x(x integer not null default (1));
sqlite> insert into x values (NULL);
Error: NOT NULL constraint failed: x.x

which also raises a NOT NULL constraint error even though the value stored will 
not be NULL and therefore passes the constraint.
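The NOT NULL case is easy to reproduce from Python's sqlite3 module (a minimal sketch of the same table):

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('create table x(x integer not null default (1))')

# An explicit NULL is rejected even though the DEFAULT would store 1:
try:
    db.execute('insert into x values (NULL)')
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

db.execute('insert into x default values')   # the default path stores 1
print(rejected, db.execute('select x from x').fetchone())  # True (1,)
```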

I do not know exactly where before triggers fire, but they should probably fire 
directly on the input data after affinity is applied and before the check 
constraints run.  You could then use before triggers to limit or require 
specifying NULL inputs on an insert even if those NULL values would be changed 
to a default or computed value afterwards.

ie, the processing for inserting a record should be:

collect input data row
apply column affinity
fire before triggers
apply defaults, generated always, rowid etc.
apply column affinity to above columns
run constraints
store actual row
fire after triggers

I don't know if this would constitute a breaking change, but I don't think so 
...

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.






Re: [sqlite] Generated columns and COLLATE in the AS parens

2020-01-31 Thread Keith Medcalf
 generated column—that’s very unique to generated
>columns.
>  E.g. specifying a DEFAULT clause does not lift the requirement to
>explicitly state the type of the column.
>
>- If a generated column doesn’t have an explicit data type mentioned, the
>the data type of the result of the expression is used.
>  Whatever data type that is, whatever properties that type has.
>  If it happens to be a character string type, it will also have a
>“collation” property.
>
>However, the declared type of the expression (including its properties)
>is only relevant if the column definition doesn’t explicitly state a data
>type.
>
>These are two different cases in standard SQL:
>
>  C1 TEXT GENERATED ALWAYS AS (…)
>  C2  GENERATED ALWAYS AS (…)
>
>The type of C1 is TEXT, including all its default properties.
>The type of C2 is the type of the result of the expression, including all
>its properties.
>
>The “what’s new” paper for SQL:2003 mentions that case explicitly.
>
>http://sigmodrecord.org/publications/sigmodRecord/0403/E.JimAndrew-
>standard.pdf
>
>The example on the last page:
>
>> CREATE TABLE EMPLOYEES (
>>  EMP_ID INTEGER,
>>  SALARY DECIMAL(7,2),
>>  BONUS DECIMAL(7,2),
>>  TOTAL_COMP GENERATED ALWAYS AS (SALARY + BONUS)
>>  )
>>
>> TOTAL_COMP is a generated column of the EMPLOYEES table. The data type
>of the TOTAL_COMP is the data type of the expression (SALARY_BONUS).
>
>I think that’s a typo and should read (SALARY + BONUS).
>
>> Users may optionally specify a data type for a generated column, in
>which case the specified data type must match with the data type of the
>associated expression.
>
>The last part is not 100% in line with the current standard. The types
>don’t need to match, they need to be assignable. I don’t know if this is
>just a simplification for this paper or if SQL:2003 really had that
>requirement.
>
>Further comments to your statements below…
>
>> On 30 Jan 2020, at 23:06, Keith Medcalf  wrote:
>>
>>
>> On: Wednesday, 29 January, 2020 06:45, Markus Winand
> wrote:
>>
>>> I think there might be a glitch in the way SQLite 3.31.x derives the
>>> collation information from the expression of a generated column.
>>
>>> In particular, COLLATE inside the AS parens seems to be ignored, but
>it
>>> is honoured after the parens:
>>
>> Carrying the COLLATE from an expression into the column definition is
>incorrect.  The definition of a generated column is:
>>
>>  <name> <type affinity> [GENERATED ALWAYS AS (<expr>)]
>[COLLATE <collation>] [<constraint> ...]
>>
>> so a COLLATE included as part of the expression applies to the
>expression, and not to the column.  Except for the ordering of <name>,
>which must be first, followed by the type, which must come second, the
>other bits can appear in pretty much any ordering your heart desires (it
>is a Dim Sum of clauses).
>>
>> If the <type> of the column is not specified then its "type affinity" is
>BLOB (None).
>
>This is where the SQL standard is different: if there is no type, the
>column gets the type of the expression (ISO/IEC 9075-2, 11.4 SR13c)
>
>> If no COLLATE is specified for the column, then the default collation
>BINARY applies.
>
>Again, the SQL standard is different: If there is no COLLATE clause
>**and** not data type specified, the collation is inherited via the data
>type of the result of the expression.
>
>If the column definition explicitly states a data type, that type comes
>with a collation, which takes precedence of the type of the expression.
>
>> If no NULL constraint is specified then NULL values are permitted.
>
>If not NOT NULL constraint….
>
>> If no DEFAULT expression is specified then the default value is NULL.
>>
>> Why would you think that the COLLATE applied to the evaluation of an
>expression would "carry out" to the column itself?
>
>Because this is what is written in the SQL standard. And it make sense
>when collations are a property of character string types, which they are
>in the SQL standard.
>
>> Especially in light of the fact that the type/collation of the
>expression in a "DEFAULT (<expr>)" does not carry out to the column
>definition?
>
>Because you cannot use DEFAULT without specifying a type name at the very
>same time (in standard SQL).
>
>11.4, SR 11) If  is omitted, then either 
>or  shall be specified.
>
>If it would be possible to skip the data type when a DEFAULT clause is
>present, I guess the same rules as for generated columns would apply
>(take the type, including all its properties) of the expressions result.
>
>>
>> The collation se

Re: [sqlite] Default isolation_level for sqlite3.connect?

2020-01-30 Thread Keith Medcalf

At any given instant in time a connection can either (a) have a transaction in 
progress or (b) have no transaction in progress.  An SQL statement cannot be 
executed EXCEPT inside of a transaction.  "autocommit" means that the SQLite3 
database engine (not the sqlite3 wrapper) will start the necessary transaction 
when a statement commences executing and commit (end the transaction) 
automatically when the statement finishes execution.

You may issue one of the various BEGIN or SAVEPOINT commands to the SQLite3 
database engine (not the sqlite3 wrapper) to START a transaction.  If a 
transaction is in process when a statement begins execution then that statement 
will not start a transaction, nor will the statements completion autocommit the 
transaction.  You must do this yourself with the COMMIT / ROLLBACK / RELEASE 
commands.

The isolation_level setting of the sqlite3 wrapper for a connection tells the 
sqlite3 wrapper (not the SQLite3 engine) whether or not it should execute 
commands to BEGIN transactions magically or not, and what type of magical BEGIN 
commands to emit.  Setting the isolation_level to None disables the crystal 
ball in the sqlite3 wrapper so that it *never* issues BEGIN commands of any 
type (by itself) and only executes the commands you tell it to execute when you 
tell it to execute them, placing you in control of the transaction and 
autocommit mode of the SQLite3 library and not the sqlite3 wrapper.

The current version of pysqlite2 (2.8.2 and later) uses the following method to 
determine whether to automagically start a transaction:

1) The isolation_level is not None
2) The statement is not a DDL statement (the definition of DDL being that the 
first token on the line is "create", "drop", or "reindex")
3) The underlying SQLite3 engine connection is in autocommit mode (not in a 
transaction)
4) The underlying SQLite3 engine indicates the statement is not readonly.

If all the above conditions are met then the sqlite3 wrapper issues the 
applicable BEGIN command before executing the requested SQL statement.  This 
means that some DDL statements (eg ALTER) are treated as DML not DDL.

However, the version pysqlite2 (sqlite3) distributed with even the most recent 
3.8.1 version of Python still uses the old (2.6.0 version of pysqlite2) 
unreliable crystal ball which uses the following:

1)  The isolation_level is not None
2)  The statement is a DML statement (first token is "insert", "update", 
"delete", "replace")
3)  The underlying SQLite3 connection is in autocommit mode.

If all the above conditions are met then the sqlite3 wrapper issues the 
applicable BEGIN command before executing the requested SQL statement.

This means that it is possible for some statements (example WITH ... INSERT) to 
not be recognized as performing updates and therefore not starting a 
transaction.
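Either crystal ball produces the same observable result for plain DDL vs. DML; a sketch using the connection's in_transaction property:

```python
import sqlite3

db = sqlite3.connect(':memory:')            # default isolation_level ''
db.execute('create table t(x)')             # DDL: no implicit BEGIN issued
ddl_in_txn = db.in_transaction              # False: still in autocommit

db.execute('insert into t values (1)')      # DML: wrapper issues BEGIN first
dml_in_txn = db.in_transaction              # True: a transaction is now open
db.commit()
print(ddl_in_txn, dml_in_txn)  # False True
```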

In all cases you can still issue the various BEGIN and SAVEPOINT commands 
manually.

Since you cannot always be sure when the sqlite3/pysqlite2 wrapper will 
automagically start transactions (especially since the magic can change at any 
time from version to version), you are better off setting isolation_level=None 
and doing transaction management yourself.
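With isolation_level=None, all BEGIN/COMMIT traffic is explicit; a minimal sketch:

```python
import sqlite3

# isolation_level=None: the wrapper never issues BEGIN on its own, so the
# connection stays in SQLite's autocommit mode until we start a transaction.
db = sqlite3.connect(':memory:', isolation_level=None)
db.execute('create table t(x)')
assert not db.in_transaction        # autocommit: DDL already committed

db.execute('BEGIN IMMEDIATE')       # manual transaction control
db.execute('insert into t values (1)')
db.execute('COMMIT')
print(db.execute('select count(*) from t').fetchone())  # (1,)
```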

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Peng Yu
>Sent: Thursday, 30 January, 2020 18:14
>To: SQLite mailing list 
>Subject: Re: [sqlite] Default isolation_level for sqlite3.connect?
>
>I still have a hard time to understand what the difference is
>according to the python manual. It keeps saying see somewhere else in
>the python manual. But I don't see where it explains the differences
>between the differences comprehensively and clearly.
>
>https://www.sqlite.org/lang_transaction.html
>
>On the above page, I only see
>
>- BEGIN
>- BEGIN DEFERRED
>- BEGIN IMMEDIATE
>- BEGIN EXCLUSIVE
>
>, where BEGIN is just BEGIN DEFERRED.
>
>But what is isolation_level = None?
>
>https://docs.python.org/3.8/library/sqlite3.html#sqlite3-controlling-
>transactions
>
>"The underlying sqlite3 library operates in autocommit mode by
>default, but the Python sqlite3 module by default does not."
>
>According to the above, since isolation_level="" is the default in
>python, so it is not autocommit and it is BEGIN.
>
>But what is autocommit mode? The following doesn't provide a
>definition. Is there a definition on sqlite.com website?
>
>https://www.sqlite.org/c3ref/get_autocommit.html
>
>> The default is an empty string (ie, '').  It can take the value None,
>'',
>> 'DEFERRED', 'IMMEDIATE', 'EXCLUSIVE' and the value (if not None) is
>appended
>> to the BEGIN when a magical transaction is started, and if None, then
>you
>> are indicating that you will be using manual transaction control
>>
>> isolation_level   command
>> None  None
>> '' (default)  BEGIN
>> 'DEFERRED'BEGIN DEFERRED
>> 'IMMEDIATE'   BEGIN IMMEDIATE

Re: [sqlite] Generated columns and COLLATE in the AS parens

2020-01-30 Thread Keith Medcalf

On: Wednesday, 29 January, 2020 06:45, Markus Winand  
wrote:

>I think there might be a glitch in the way SQLite 3.31.x derives the
>collation information from the expression of a generated column.

>In particular, COLLATE inside the AS parens seems to be ignored, but it
>is honoured after the parens:

Carrying the COLLATE from an expression into the column definition is 
incorrect.  The definition of a generated column is:

 <name> <type affinity> [GENERATED ALWAYS AS (<expr>)] [COLLATE 
<collation>] [<constraint> ...]

so a COLLATE included as part of the expression applies to the expression, and 
not to the column.  Except for the ordering of <name>, which must be first, 
followed by the type, which must come second, the other bits can appear in 
pretty much any ordering your heart desires (it is a Dim Sum of clauses).  

If the <type> of the column is not specified then its "type affinity" is 
BLOB (None).
If no COLLATE is specified for the column, then the default collation BINARY 
applies.
If no NULL constraint is specified then NULL values are permitted.
If no DEFAULT expression is specified then the default value is NULL.

Why would you think that the COLLATE applied to the evaluation of an expression 
would "carry out" to the column itself?  Especially in light of the fact that 
the type/collation of the expression in a "DEFAULT (<expr>)" does not 
carry out to the column definition?

The collation sequence applies to "comparison" operations of TEXT values, and 
not to anything else (it is ignored for all other value types).  It is an 
attribute of a column (like column affinity) and not an attribute of a value.

In other words,

x AS (y == 'yahoo' collate nocase)

applies the collation nocase to the evaluation of the == comparison operation.  
The result of evaluation of the expression is a data value with a concrete type 
(integer) and no associated collation.  This value then has whatever column 
affinity is specified applied to it and inherits the collation of the 
containing column.

x AS (y collate nocase)

is the same thing as

x AS (y)

since there is no "comparison" operation in the expression to which the 
collation can be applied (it is a useless attribute to the expression).  The 
result of the expression is a value of the concrete type of y with no 
associated collation.  This value then has whatever column affinity is 
specified applied to it and inherits the collation of the containing column.

ie, "values" do not have a collation associated with them, column definitions 
and comparison operators on text have collation sequences associated with them.

>SQLite version 3.31.1 2020-01-27 19:55:54
>Enter ".help" for usage hints.
>Connected to a transient in-memory database.
>Use ".open FILENAME" to reopen on a persistent database.
>sqlite> CREATE TABLE tmp (str VARCHAR(255), str_nc1 GENERATED ALWAYS AS
>(str COLLATE NOCASE), str_nc2 GENERATED ALWAYS AS (str) COLLATE NOCASE);
>sqlite> INSERT INTO tmp VALUES ('a'), ('A'), ('b'), ('B');
>sqlite> SELECT * FROM tmp ORDER BY str;
>A|A|A
>B|B|B
>a|a|a
>b|b|b
>sqlite> SELECT * FROM tmp ORDER BY str_nc1;
>A|A|A
>B|B|B
>a|a|a
>b|b|b
>sqlite> SELECT * FROM tmp ORDER BY str_nc2;
>a|a|a
>A|A|A
>b|b|b
>B|B|B
>sqlite> SELECT * FROM tmp ORDER BY (str COLLATE NOCASE);
>a|a|a
>A|A|A
>b|b|b
>B|B|B
>sqlite>

>I believe the "ORDER BY str_nc1” should yield the same row order as the
>last two queries.

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.





Re: [sqlite] "Standard SQL" ?

2020-01-30 Thread Keith Medcalf

On Thursday, 30 January, 2020 12:20, Simon Slavin  wrote:

>I would appreciate your help.  Reading a technical article today, I came
>across a casual reference to "Standard SQL" as if it was a well-known
>thing.  This worried me since I've never heard the term and I'm meant to
>know about such things.

I would say that "Standard SQL" is that variant of SQL, the "minimal common 
subset" dialect, that is understood by every single implementation claiming to 
be "SQL compliant" that has ever existed until the present.  In other words, it 
is the dialect of SQL that will be understood by *any* software claiming to be 
"SQL compliant".

>It doesn't seem to refer to the official standard for SQL, which is huge
>and contains a plethora of features implemented once or never.  The
>author seemed to think it was a sort of 'core SQL' – features identically
>implemented by all, or most, of the well-known SQL engines.

There is a core SQL.  It is not defined.  Most SQL (particularly DDL) is 
"implementation specific".  About the only things that comprise "standard SQL" 
are the DML constructs:

INSERT INTO <table> (<columns>) VALUES (<values>);
SELECT <expressions> FROM <tables> [WHERE <condition>] [GROUP BY <expressions> [HAVING <condition>]] [ORDER BY <expressions>]
DELETE FROM <table> [WHERE <condition>]
UPDATE <table> SET <column> = <value> [, <column> = <value>]... [WHERE 
<condition>]
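All four of those DML forms run unchanged on SQLite; in the sketch below the table and column names are illustrative, not from the original message:

```python
import sqlite3

db = sqlite3.connect(':memory:')
# The CREATE TABLE is the vendor-specific part; the four statements
# below it are the portable DML core being described.
db.execute('create table emp (id integer, name text)')
db.execute("insert into emp (id, name) values (1, 'alice')")
db.execute("update emp set name = 'bob' where id = 1")
rows = db.execute('select name from emp where id = 1 order by name').fetchall()
db.execute('delete from emp where id = 1')
print(rows)  # [('bob',)]
```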

>The one possibility I can think of is SQL:1999.  This is the first
>version which has features marked as 'mandatory' or 'optional'.  A full
>implementation of all mandatory features could, I suppose, be called
>"Standard SQL", but I've never heard of that term being used for that.

This is very Johnny-come-lately.  I think the first standard was SQL-86 but even 
that was chock full of vendor implementation specifics and light on being 
"standard".  Subsequent versions of the SQL standard simply became more an 
exercise in documenting vendor implementation specifics than in defining a 
standard.  Most standards suffer from this problem.

>Have any of you been using this term for a meaning other than "Fully
>conforming to SQL:2019 (or whatever version you think current) ?  Do you
>have documentation somewhere ?  Or are my suspicions correct and there's
>no such thing ?

Yes.  "Standard SQL" is that subset of SQL that is understood by every 
implementation ever without making changes.  It is a very small subset of the 
language.  Any "standard SQL" statement will operate the same way on every "SQL 
compliant" bit of software that ever existed or will exist in the future.

No.  It comes from experience using more than one SQL language implementation.

No.  There is most assuredly a "standard SQL" that is understood by every 
implementation of SQL that ever existed or ever will exist.  That it is not 
documented is entirely beside the point.

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume. 





Re: [sqlite] Default isolation_level for sqlite3.connect?

2020-01-30 Thread Keith Medcalf

The default is an empty string (ie, '').  It can take the value None, '', 
'DEFERRED', 'IMMEDIATE', 'EXCLUSIVE' and the value (if not None) is appended to 
the BEGIN when a magical transaction is started, and if None, then you are 
indicating that you will be using manual transaction control

isolation_level   command
None  None
'' (default)  BEGIN
'DEFERRED'BEGIN DEFERRED
'IMMEDIATE'   BEGIN IMMEDIATE
'EXCLUSIVE'   BEGIN EXCLUSIVE

You can set the isolation_level in the connect call (the default is '') or 
read/change it with the .isolation_level property of a connection object.

>>> import sqlite3
>>> db = sqlite3.connect(':memory:')
>>> db.isolation_level
''
>>> db.isolation_level = None
>>> db.isolation_level
>>>
>>> db = sqlite3.connect(':memory:', isolation_level=None)
>>> db.isolation_level
>>>

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Peng Yu
>Sent: Thursday, 30 January, 2020 09:55
>To: SQLite mailing list 
>Subject: Re: [sqlite] Default isolation_level for sqlite3.connect?
>
>On 1/30/20, Keith Medcalf  wrote:
>>
>> The isolation_level specifies the default suffix to put after 'BEGIN'
>when
>> beginning a transaction.  Inside the library the following is used when
>the
>> magic wants to start a transaction:
>>
>> if isolation_level is not None:
>>.execute('BEGIN %s' % isolation_level)
>>
>> This is so that you can set isolation_level to 'IMMEDIATE' so that all
>> magically initiated transactions are BEGIN IMMEDIATE rather than the
>default
>> BEGIN DEFERRED
>
>Are you saying the default isolation_level is equivalent to IMMEDIATE?
>Or you are not answering my specific question of what is the default
>of isolation_level? I think the default paremeter value of
>isolation_level of the function sqlite3.connect() is just None. It is
>not the case?
>
>--
>Regards,
>Peng
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users





Re: [sqlite] Default isolation_level for sqlite3.connect?

2020-01-30 Thread Keith Medcalf

The isolation_level specifies the default suffix to put after 'BEGIN' when 
beginning a transaction.  Inside the library the following is used when the 
magic wants to start a transaction:

if isolation_level is not None:
   .execute('BEGIN %s' % isolation_level)

This is so that you can set isolation_level to 'IMMEDIATE' so that all 
magically initiated transactions are BEGIN IMMEDIATE rather than the default 
BEGIN DEFERRED

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Peng Yu
>Sent: Thursday, 30 January, 2020 08:59
>To: SQLite mailing list 
>Subject: [sqlite] Default isolation_level for sqlite3.connect?
>
>Hi,
>
>I don't see what the default isolation_level is here. Is it None? Thanks.
>
>https://docs.python.org/3/library/sqlite3.html#module-functions-and-
>constants
>sqlite3.connect(database[, timeout, detect_types, isolation_level,
>check_same_thread, factory, cached_statements, uri])¶
>
>--
>Regards,
>Peng
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users





Re: [sqlite] Is it necessary to encode() for file names in sqlar format?

2020-01-30 Thread Keith Medcalf

Yes.  If it is bytes type then the data stored by the database will be a BLOB, 
not TEXT ...
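
A quick demonstration of that type mapping (a sketch using the standard
sqlite3 module; the table and values are illustrative only):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(name)")
con.execute("INSERT INTO t VALUES (?)", ("a.txt",))    # str   -> stored as TEXT
con.execute("INSERT INTO t VALUES (?)", (b"a.txt",))   # bytes -> stored as BLOB
types = [row[0] for row in con.execute("SELECT typeof(name) FROM t ORDER BY rowid")]
assert types == ["text", "blob"]
```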

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Peng Yu
>Sent: Thursday, 30 January, 2020 02:24
>To: SQLite mailing list 
>Subject: Re: [sqlite] Is it necessary to encode() for file names in sqlar
>format?
>
>So to confirm: in Python 3, the str type should be used for name? Thanks.
>
>On Thu, Jan 30, 2020 at 12:58 AM Keith Medcalf 
>wrote:
>
>>
>> sys.argv is a list of unicode text strings.  There is no need to
>> specifically encode or decode it so long as sys.getdefaultencoding()
>> returns 'utf-8'.  If your version of Python is so old that it returns
>> something else then you need to modify site.py and have it set the
>default
>> encoding to 'utf-8' otherwise you may end up with MBCS or some other
>> invalid text encoding in your database text fields.
>>
>> --
>> The fact that there's a Highway to Hell but only a Stairway to Heaven
>says
>> a lot about anticipated traffic volume.
>>
>> >-Original Message-
>> >From: sqlite-users  On
>> >Behalf Of Peng Yu
>> >Sent: Wednesday, 29 January, 2020 22:57
>> >To: SQLite mailing list 
>> >Subject: [sqlite] Is it necessary to encode() for file names in sqlar
>> >format?
>> >
>> >I use the following python3 code to create an sqlar file. Is it
>> >necessary to call sys.argv[2].encode('utf-8') in the execute() line?
>> >In other words, do the native sqlar tools interpret the name column
>> >as an encoded or a non-encoded value? Thanks.
>> >
>> >import sys
>> >import sqlite3
>> >conn=sqlite3.connect(sys.argv[1])
>> >c=conn.cursor()
>> >c.execute('''
>> >CREATE TABLE IF NOT EXISTS sqlar(
>> >name TEXT PRIMARY KEY
>> >, mode INT
>> >, mtime INT
>> >, sz INT
>> >, data BLOB)
>> >''')
>> >
>> >import zlib
>> >data = sys.stdin.buffer.read()
>> >c.execute('REPLACE INTO sqlar VALUES(?, ?, ?, ?, ?)', [sys.argv[2], 0,
>> >0, len(data), zlib.compress(data)])
>> >conn.commit()
>> >
>> >--
>> >Regards,
>> >Peng
>> >___
>> >sqlite-users mailing list
>> >sqlite-users@mailinglists.sqlite.org
>> >http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>>
>>
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@mailinglists.sqlite.org
>> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>>
>--
>Regards,
>Peng
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users





Re: [sqlite] Is it necessary to encode() for file names in sqlar format?

2020-01-29 Thread Keith Medcalf

sys.argv is a list of unicode text strings.  There is no need to specifically 
encode or decode it so long as sys.getdefaultencoding() returns 'utf-8'.  If 
your version of Python is so old that it returns something else then you need 
to modify site.py and have it set the default encoding to 'utf-8' otherwise you 
may end up with MBCS or some other invalid text encoding in your database text 
fields.
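
On any Python 3 the check below passes, so no explicit encode/decode step is
needed before binding str values as TEXT parameters:

```python
import sys

# Python 3 always reports UTF-8 as the default text encoding, so str values
# bound as parameters are stored as UTF-8 TEXT without manual encoding.
assert sys.getdefaultencoding() == "utf-8"
```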

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Peng Yu
>Sent: Wednesday, 29 January, 2020 22:57
>To: SQLite mailing list 
>Subject: [sqlite] Is it necessary to encode() for file names in sqlar
>format?
>
>I use the following python3 code to create an sqlar file. Is it
>necessary to call sys.argv[2].encode('utf-8') in the execute() line?
>In other words, do the native sqlar tools interpret the name column as
>an encoded or a non-encoded value? Thanks.
>
>import sys
>import sqlite3
>conn=sqlite3.connect(sys.argv[1])
>c=conn.cursor()
>c.execute('''
>CREATE TABLE IF NOT EXISTS sqlar(
>name TEXT PRIMARY KEY
>, mode INT
>, mtime INT
>, sz INT
>, data BLOB)
>''')
>
>import zlib
>data = sys.stdin.buffer.read()
>c.execute('REPLACE INTO sqlar VALUES(?, ?, ?, ?, ?)', [sys.argv[2], 0,
>0, len(data), zlib.compress(data)])
>conn.commit()
>
>--
>Regards,
>Peng
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users





Re: [sqlite] Does .commit() ensure the .execute()'s and .executemany()'s called before are run atomically?

2020-01-29 Thread Keith Medcalf

On Wednesday, 29 January, 2020 22:45, Peng Yu  wrote:

>In python sqlite3 program, if I call .execute() multiple times then
>call .commit(). Does it ensure that all the sqlite3 commands specified
>by execute()'s either all take effect or none effect?

Mayhaps yes, mayhaps no.  .commit() is merely syntactic sugar for 
.execute('COMMIT')

Basically, the method does the following:

if (sqlite3_get_autocommit(connection) == 0) {
   sqlite3_stmt* stmt = 0;
   sqlite3_prepare(connection, "COMMIT", -1, &stmt, NULL);
   if (stmt) {
  sqlite3_step(stmt);
  sqlite3_finalize(stmt);
   }
}

That is, if there is a transaction in progress on the connection, it does a 
commit.  Mutatis mutandis for .rollback() doing a 'ROLLBACK' rather than commit.

It makes sure that all changes made in the current transaction are committed to 
the database.  If you are using "magical mode" then hopefully a transaction was 
started when you updated something; however, this is not guaranteed (such is the 
nature of magic).  The "magical mode" of the sqlite3 wrapper decides when to 
BEGIN and COMMIT transactions on its own.  Sometimes the magician is a little 
daffy, though, and gets it wrong, so it may begin/commit/rollback whenever the 
magician feels like it, which may or may not be when you want.  

If you want to ensure that transactions BEGIN and COMMIT/ROLLBACK when *you* 
want them to then use explicit .execute('BEGIN') (or .execute('BEGIN 
IMMEDIATE') to start an immediate transaction) command to start a transaction 
on a connection opened with isolation_level=None.  You can still use .commit() 
for .execute('COMMIT') and .rollback() for .execute('ROLLBACK') if you want.  
The connection has a property in_transaction that lets you test whether a 
transaction is in progress (True) or the database is in autocommit (False) mode.
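
A sketch of that explicit style (standard sqlite3 module, behavior as
described above; table name t is arbitrary):

```python
import sqlite3

# Open with isolation_level=None so the wrapper never starts a transaction
# on its own; BEGIN/COMMIT happen only when we issue them.
con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE t(x)")
assert con.in_transaction is False        # autocommit mode

con.execute("BEGIN IMMEDIATE")            # transaction starts when *we* say so
con.execute("INSERT INTO t VALUES (1)")
assert con.in_transaction is True
con.commit()                              # equivalent to con.execute('COMMIT')
assert con.in_transaction is False
```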

>In other words, if any error occurs while running the sqlite3 commands
>specified in execute(), what happens?

That depends on the nature of the error.  If you put in a bad SQL statement 
(got a syntax error) then that statement did nothing.  Otherwise it depends on 
the conflict resolution method in effect for the statement causing the error, 
which is usually ABORT (by default) and it just ABORTs the current statement.
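
The ABORT behavior can be seen directly (a sketch; the UNIQUE column stands in
for any constraint that the failing statement violates):

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE t(x UNIQUE)")
con.execute("BEGIN")
con.execute("INSERT INTO t VALUES (1)")
try:
    con.execute("INSERT INTO t VALUES (1)")   # UNIQUE violation -> ABORT
except sqlite3.IntegrityError:
    pass
assert con.in_transaction is True             # only the failing statement aborted
con.execute("COMMIT")
assert con.execute("SELECT count(*) FROM t").fetchone()[0] == 1
```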

>Also, is .executemany() the same as calling .execute() many times?

Yes.  x.executemany(sql, stuff) is syntactic sugar for

for e in stuff:
    x.execute(sql, e)
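
As a quick check of that equivalence (a sketch using the standard sqlite3
module; the two forms insert the same rows):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(x)")
rows = [(1,), (2,), (3,)]

# executemany is, in effect, a loop over execute with the same statement.
con.executemany("INSERT INTO t VALUES (?)", rows)
for r in rows:
    con.execute("INSERT INTO t VALUES (?)", r)

assert con.execute("SELECT count(*) FROM t").fetchone()[0] == 6
```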

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.






Re: [sqlite] single table data collapse with constraints

2020-01-29 Thread Keith Medcalf

Excuse the top posting.  This perhaps:

create table srcdata
(
CLS1    text not null,
CLS2    integer not null,
START   integer not null,
END     integer not null
);
insert into srcdata values ('ABC1',100,0,1);
insert into srcdata values ('ABC1',100,1,1);
insert into srcdata values ('ABC1',100,1,3);
insert into srcdata values ('ABC1',100,1,3);
insert into srcdata values ('ABC1',100,3,4);
insert into srcdata values ('ABC1',100,4,3);
insert into srcdata values ('ABC1',100,5,6);
insert into srcdata values ('ABC1',100,6,20);
insert into srcdata values ('ABC1',100,6,20);
insert into srcdata values ('ABC1',500,4,19);
insert into srcdata values ('ABC1',500,19,4);
insert into srcdata values ('ABC2',300,4,4);

sqlite> select * from srcdata;
CLS1  CLS2  START  END
----  ----  -----  ---
ABC1  100   0      1
ABC1  100   1      1
ABC1  100   1      3
ABC1  100   1      3
ABC1  100   3      4
ABC1  100   4      3
ABC1  100   5      6
ABC1  100   6      20
ABC1  100   6      20
ABC1  500   4      19
ABC1  500   19     4
ABC2  300   4      4

with src (cls1, cls2, start, end)
  as (
  select distinct cls1, cls2, min(start, end), max(start, end)
from srcdata
 ),
 c (cls1, cls2, start, end)
  as (
 select cls1, cls2, start, end
   from src
  union
 select c.cls1, c.cls2, s.start, c.end
   from src as s, c
  where c.cls1 == s.cls1
and c.cls2 == s.cls2
and c.start == s.end
and s.start != c.start
 ),
 u (cls1, cls2, start, end)
  as (
  select cls1, cls2, start, end
from c as o
   where not exists (
 select *
   from c
  where cls1 == o.cls1
and cls2 == o.cls2
and (   (
 o.start between start + 1 and end
 and o.start between start and end - 1
)
 or (
 o.end between start + 1 and end
 and o.end between start and end - 1
)
)
)
 )
select *
  from u
order by cls1, cls2, start, end
;

cls1  cls2  start  end
----  ----  -----  ---
ABC1  100   0      4
ABC1  100   5      20
ABC1  500   4      19
ABC2  300   4      4

So in the CTE:
 "src" makes sure that start < end and removes duplicate rows
 "c" finds and adds coalesced range rows
 "u" finds the rows in c where there is no containing row

If you have any significant amount of data, the following procedure is probably 
many times faster, since the various bits of the single-CTE version are not 
materialized (though if you can figure out how to force materialization, the 
CTE version will do the same thing):

drop table if exists temp.src;
drop table if exists temp.coal;

create temporary table src
(
cls1    text not null,
cls2    text not null,
start   integer not null,
end     integer not null
);
insert into temp.src
select distinct cls1, cls2, min(start, end) as start, max(start, end) as end
  from srcdata
;
create index temp.idx_src on src (cls1, cls2, end);

create temporary table coal
(
cls1    text not null,
cls2    text not null,
start   integer not null,
end     integer not null
);
insert into temp.coal
with c
  as (
 select cls1, cls2, start, end
   from temp.src
  union
 select c.cls1, c.cls2, s.start, c.end
   from temp.src as s, c
  where c.cls1 == s.cls1
and c.cls2 == s.cls2
and c.start == s.end
and s.start != c.start
 )
select cls1, cls2, start, end
  from c
;
create index temp.idx_coal on coal (cls1, cls2);

  select cls1, cls2, start, end
from temp.coal as o
   where not exists (
 select 1
   from temp.coal as c
  where cls1 == o.cls1
and cls2 == o.cls2
and (   (
 o.start between start + 1 and end
 and o.start between start and end - 1
)
 or (
 o.end between start + 1 and end
 and o.end between start and end - 1
)
)

Re: [sqlite] Is multiple-execute-one-commit slower than multiple single-execute-single-commit?

2020-01-29 Thread Keith Medcalf

On Tuesday, 28 January, 2020 23:42, Peng Yu  wrote:

>I have two python programs using sqlite3. They function the same,
>except for the following.

I presume this means you are using the standard (as in included with the 
standard Python distribution) sqlite3 module?  There are other ways to use 
SQLite3 (the database) from Python, the sqlite3 (pysqlite2) wrapper being only 
one of them.  What are the statements being executed?  Are you using "magical" 
mode for the wrapper (what is the isolation_level set to on the sqlite3.connect 
call).

>In the first, execute() is called in batches and then commit() is
>called following them. In the second, commit() is called after each
>execute(). It seems that the second case is faster (I cannot separate
>my code into a self-contained test case to show here).

>This is counterintuitive. I thought the first should be faster.

>Is it expected that the 2nd case should be slightly faster?

The first case (batching multiple inserts in a single transaction) should be 
faster.  On the below test disk I/O rate exceeded 100 MB/s for wal mode.

-//- insertspeed.py -//-
import sqlite3
import sys
import time

db = sqlite3.connect('test.db')

db.executescript('create table if not exists x(x)')

records = 1000000

print(sys.version)
print('sqlite3 wrapper version', sqlite3.version, 'using library version', 
sqlite3.sqlite_version)
print()
for mode in ['wal', 'delete']:
print('Database mode', mode)
print('  ')
print('  Records    Batch  Seconds')
print('  ')
for batchsize in [1000000, 100000, 10000, 1000, 100, 10, 1]:
db.executescript('pragma journal_mode=delete')
db.executescript('delete from x')
db.executescript('vacuum')
db.execute('pragma journal_mode=%s' % mode)
t = time.time()
for i in range(records):
db.execute('insert into x values (?)', (i,))
if i % batchsize == 0:
db.commit()
db.commit()
print('%8d %8d %8.3f' % (records, batchsize, time.time() - t))
print('  ')

-//--//-

3.8.1 (tags/v3.8.1:1b293b6, Dec 18 2019, 23:11:46) [MSC v.1916 64 bit (AMD64)]
sqlite3 wrapper version 2.6.0 using library version 3.31.1

Database mode wal
  
  Records    Batch  Seconds
  
 1000000  1000000    2.255
 1000000   100000    2.348
 1000000    10000    2.344
 1000000     1000    2.453
 1000000      100    3.355
 1000000       10    8.856
 1000000        1   64.167
  
Database mode delete
  
  Records    Batch  Seconds
  
 1000000  1000000    2.215
 1000000   100000    2.460
 1000000    10000    3.320
 1000000     1000   12.099
 1000000      100   96.896

I couldn't wait for the smaller batches to complete in journal_mode delete.  I 
would expect them to take a looong time ...

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume. 





Re: [sqlite] importing data to a table that has generated-columns

2020-01-27 Thread Keith Medcalf

On Monday, 27 January, 2020 10:31, James K. Lowden  
wrote:

>On Sun, 26 Jan 2020 12:01:32 -0700
>"Keith Medcalf"  wrote:

>> Now that the table exists, use "SELECT * FROM " to determine
>> the number of columns in the table (which will include computed
>> always columns, if any).
>...
>> Otherwise, Richard *may* make some changes to the .import logic which
>> *might* appear in the next release version.

>I imagine it's already being considered: if pragma table_info included
>a column with the SQL for generated columns (and NULL) otherwise, the
>shell could use that instead of SELECT * to determine the number of
>insertable columns.

That data is already in the schema structures, as is the assigned affinity:

SQLite version 3.31.0 2020-01-27 17:01:49
Enter ".help" for usage hints.
sqlite> .schema details
CREATE TABLE Details
(
ApplianceID integer not null references Appliances(ApplianceID) on 
delete cascade,
SrcIntID    integer not null references Interfaces(InterfaceID) on 
delete cascade,
DstIntID    integer not null references Interfaces(InterfaceID) on 
delete cascade,
Transport   text not null collate nocase,
SrcHostID   integer not null references Hosts(HostID) on delete cascade,
SrcPort integer not null,
DstHostID   integer not null references Hosts(HostID) on delete cascade,
DstPort integer not null,
Action  integer not null,
Count   integer not null,
FileID  integer not null references Files(FileID) on delete cascade
);
sqlite> .mode col
sqlite> .head on
sqlite> pragma table_xinfo(details);
cid  name        type     aff      coll    notnull  dflt_value  pk  rowid  autoinc  hidden
---  ----------  -------  -------  ------  -------  ----------  --  -----  -------  ------
-1                        INTEGER          0                    1   1      0        1
0    ApplianceI  integer  INTEGER          1                    0   0      0        0
1    SrcIntID    integer  INTEGER          1                    0   0      0        0
2    DstIntID    integer  INTEGER          1                    0   0      0        0
3    Transport   text     TEXT     nocase  1                    0   0      0        0
4    SrcHostID   integer  INTEGER          1                    0   0      0        0
5    SrcPort     integer  INTEGER          1                    0   0      0        0
6    DstHostID   integer  INTEGER          1                    0   0      0        0
7    DstPort     integer  INTEGER          1                    0   0      0        0
8    Action      integer  INTEGER          1                    0   0      0        0
9    Count       integer  INTEGER          1                    0   0      0        0
10   FileID      integer  INTEGER          1                    0   0      0        0
sqlite> create table x(id integer primary key, a datetime not null, c as (id + 
1), d as (id + 2) stored, e as (id + 3));
sqlite> pragma table_xinfo(x);
cid  name  type      aff      coll  notnull  dflt_value  pk  rowid  autoinc  hidden
---  ----  --------  -------  ----  -------  ----------  --  -----  -------  ------
0    id    integer   INTEGER        0                    1   1      0        0
1    a     datetime  NUMERIC        1                    0   0      0        0
2    c               BLOB           0                    0   0      0        2
3    d               BLOB           0                    0   0      0        3
4    e               BLOB           0                    0   0      0        2

This is my "modified" table_info pragma that returns somewhat more information 
for each column (all of which comes from the in-memory schema).  The "hidden" 
value is 0 for visible columns, 1 for "hidden" columns, 2 for computed virtual 
columns and 3 for computed stored columns.  I also added a database_info 
pragma that returns a list of objects in all attached schemas so that it is 
easier to define useful info

Re: [sqlite] [EXTERNAL] Re: Row length in SQLITE

2020-01-27 Thread Keith Medcalf

Except that should be for the header bytes only.  It is somewhat inaccurate 
because IEEE doubles may be stored as varints, and the values 0 and 1 may be 
stored as just the header code 8 or 9 without storing an actual varint (if the 
schema version is 4 or more, which cannot be read from within an extension, 
though I suppose the context points to the connection, which points to the 
schema, which somewhere along the way holds the schema version, even if those 
pointers are supposed to be opaque).  Plus, of course, that size is the size of 
the header plus the size of the varint storing the size of the header, all as a 
varint ...

However, assuming schema format 4, the following is more accurate, though if 
the column has no affinity then not all of the optimizations are applied.

SQLITE_PRIVATE void _recSizeFunc(sqlite3_context *context, int argc, 
sqlite3_value **argv)
{
sqlite_int64 hdrsize = 0;
sqlite_int64 datsize = 0;
sqlite_int64 sz;
double v;
int i;

    for (i=0; i<argc; i++)
    {
        switch (sqlite3_value_type(argv[i]))
        {
            case SQLITE_NULL:
                hdrsize += 1;
                break;
            case SQLITE_INTEGER:
                hdrsize += 1;
                if ((sqlite3_value_int64(argv[i]) < 0) || (sqlite3_value_int64(argv[i]) > 1))
                    datsize += _varIntSize_(sqlite3_value_int64(argv[i]));
                break;
case SQLITE_FLOAT:
hdrsize += 1;
v = sqlite3_value_double(argv[i]);
if ((fabs(v) <= 140737488355327.0) && (trunc(v) == v))
{
if ((v < 0) || (v > 1))
datsize += _varIntSize_((sqlite_int64)v);
}
else
datsize += 8;
break;
case SQLITE_TEXT:
sqlite3_value_blob(argv[i]);
sz = sqlite3_value_bytes(argv[i]);
hdrsize += _varIntSize_(sz*2+12);
datsize += sz;
break;
case SQLITE_BLOB:
sz = sqlite3_value_bytes(argv[i]);
hdrsize += _varIntSize_(sz*2+13);
datsize += sz;
break;
}
}
sqlite3_result_int64(context, _varIntSize_(hdrsize +_varIntSize_(hdrsize)) 
+ hdrsize + datsize);
}


-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Hick Gunter
>Sent: Monday, 27 January, 2020 05:32
>To: SQLite mailing list 
>Subject: Re: [sqlite] [EXTERNAL] Re: Row length in SQLITE
>
>You are missing
>
>maxsize += _varIntSize_(maxsize)
>
>fort he size varint at the begin oft he header just before the return
>
>-Ursprüngliche Nachricht-
>Von: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
>Im Auftrag von Keith Medcalf
>Gesendet: Montag, 27. Januar 2020 12:43
>An: SQLite mailing list 
>Betreff: [EXTERNAL] Re: [sqlite] Row length in SQLITE
>
>
>Here is a wee bit of C code that you can compile as a plugin that will
>give you the row size (well, it may be bigger than the actual record size
>by a few bytes but it is pretty close) ...
>
>works properly for utf-16 encoded databases as well.
>
>-//- sqlsize.c -//-
>#include "sqlite3ext.h"
>SQLITE_EXTENSION_INIT1
>
>#ifndef SQLITE_PRIVATE
>#define SQLITE_PRIVATE static
>#endif
>
>static inline sqlite_int64 _varIntSize_(sqlite_int64 v) {
>sqlite_int64 uu;
>
>if (v<0)
>uu = ~v;
>else
>uu = v;
>if (uu <= 127 )
>return 1;
>else if (uu <= 32767)
>return 2;
>else if (uu <= 8388607)
>return 3;
>else if (uu <= 2147483647)
>return 4;
>else if (uu <= 140737488355327LL)
>return 6;
>else return 8;
>}
>
>SQLITE_PRIVATE void _recSizeFunc(sqlite3_context *context, int argc,
>sqlite3_value **argv) {
>sqlite_int64 maxsize = 0;
>sqlite_int64 sz;
>int i;
>
>for (i=0; i<argc; i++)
>{
>switch (sqlite3_value_type(argv[i]))
>{
>case SQLITE_NULL:
>maxsize += 1;
>break;
>case SQLITE_INTEGER:
>maxsize += _varIntSize_(sqlite3_value_int64(argv[i])) +
>1;
>break;
>case SQLITE_FLOAT:
>maxsize += 9;
>break;
>case SQLITE_TEXT:
>sqlite3_value_blob(argv[i]);
>sz = sqlite3_value_bytes(argv[i]);
>maxsize += sz + _varIntSize_(sz*2+12);
>break;
>case SQLITE_BLOB:
>sz = sqlite3_value_bytes(argv[i]);
>maxsize += sz + _varIntSize_(sz*2+13);
>break;
>}
>}
>sqlite3_result_int64(context, maxsize); }
>
>#ifdef _WIN32
>#ifndef SQLITE_CORE
>__declspec(dllexport)
>#endif
>#endif
>int sqlite3_sqlsize_init(sqlite3 *db, char

Re: [sqlite] Row length in SQLITE

2020-01-27 Thread Keith Medcalf

Here is a wee bit of C code that you can compile as a plugin that will give you 
the row size (well, it may be bigger than the actual record size by a few bytes 
but it is pretty close) ... 

works properly for utf-16 encoded databases as well.

-//- sqlsize.c -//-
#include "sqlite3ext.h"
SQLITE_EXTENSION_INIT1

#ifndef SQLITE_PRIVATE
#define SQLITE_PRIVATE static
#endif

static inline sqlite_int64 _varIntSize_(sqlite_int64 v)
{
sqlite_int64 uu;

if (v<0)
uu = ~v;
else
uu = v;
if (uu <= 127 )
return 1;
else if (uu <= 32767)
return 2;
else if (uu <= 8388607)
return 3;
else if (uu <= 2147483647)
return 4;
else if (uu <= 140737488355327LL)
return 6;
else return 8;
}
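
For readers following along in Python, here is a hypothetical varint_size
helper mirroring _varIntSize_ above (a sketch, not part of the extension):

```python
def varint_size(v: int) -> int:
    # Size in bytes of the integer serial types SQLite can use for v
    # (1, 2, 3, 4, 6 or 8 bytes), mirroring the C helper above.
    u = ~v if v < 0 else v
    if u <= 127:
        return 1
    if u <= 32767:
        return 2
    if u <= 8388607:
        return 3
    if u <= 2147483647:
        return 4
    if u <= 140737488355327:
        return 6
    return 8

assert varint_size(0) == 1      # fits in one byte
assert varint_size(-1) == 1     # ~(-1) == 0
assert varint_size(300) == 2
assert varint_size(2**40) == 6
```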

SQLITE_PRIVATE void _recSizeFunc(sqlite3_context *context, int argc, 
sqlite3_value **argv)
{
sqlite_int64 maxsize = 0;
sqlite_int64 sz;
int i;

    for (i=0; i<argc; i++)
    {
        switch (sqlite3_value_type(argv[i]))
        {
            case SQLITE_NULL:
                maxsize += 1;
                break;
            case SQLITE_INTEGER:
                maxsize += _varIntSize_(sqlite3_value_int64(argv[i])) + 1;
                break;
            case SQLITE_FLOAT:
                maxsize += 9;
                break;
            case SQLITE_TEXT:
                sqlite3_value_blob(argv[i]);
                sz = sqlite3_value_bytes(argv[i]);
                maxsize += sz + _varIntSize_(sz*2+12);
                break;
            case SQLITE_BLOB:
                sz = sqlite3_value_bytes(argv[i]);
                maxsize += sz + _varIntSize_(sz*2+13);
                break;
        }
    }
    sqlite3_result_int64(context, maxsize);
}
-//--//-

>sqlite3 \data\apps\splunk\splunk.db
SQLite version 3.31.0 2020-01-27 11:17:22
Enter ".help" for usage hints.
sqlite> .schema details
CREATE TABLE Details
(
ApplianceID integer not null references Appliances(ApplianceID) on 
delete cascade,
SrcIntID    integer not null references Interfaces(InterfaceID) on 
delete cascade,
DstIntID    integer not null references Interfaces(InterfaceID) on 
delete cascade,
Transport   text not null collate nocase,
SrcHostID   integer not null references Hosts(HostID) on delete cascade,
SrcPort integer not null,
DstHostID   integer not null references Hosts(HostID) on delete cascade,
DstPort integer not null,
Action  integer not null,
Count   integer not null,
FileID  integer not null references Files(FileID) on delete cascade
);
sqlite> select _rowid_, recsize(ApplianceID, SrcIntID, DstIntID, Transport, 
SrcHostID, SrcPort, DstHostID, DstPort, Action, Count, FileID) from details 
limit 10;
1|27
2|27
3|27
4|27
5|27
6|28
7|27
8|27
9|28
10|27
sqlite> select max(recsize(ApplianceID, SrcIntID, DstIntID, Transport, 
SrcHostID, SrcPort, DstHostID, DstPort, Action, Count, FileID)) from details;
31
sqlite> select avg(recsize(ApplianceID, SrcIntID, DstIntID, Transport, 
SrcHostID, SrcPort, DstHostID, DstPort, Action, Count, FileID)) from details;
29.4473800800817
sqlite> select min(recsize(ApplianceID, SrcIntID, DstIntID, Transport, 
SrcHostID, SrcPort, DstHostID, DstPort, Action, Count, FileID)) from details;
26
sqlite> select skew(recsize(ApplianceID, SrcIntID, DstIntID, Transport, 
SrcHostID, SrcPort, DstHostID, DstPort, Action, Count, FileID)) from details;
-0.378384651017371
sqlite> select kurt(recsize(ApplianceID, SrcIntID, DstIntID, Transport, 
SrcHostID, SrcPort, DstHostID, DstPort, Action, Count, FileID)) from details;
0.130516904446944

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Deon Brewis
>Sent: Friday, 24 January, 2020 21:14
>To: SQLite mailing list 
>Subject: Re: [sqlite] Row length in SQLITE
>
>No I mean e.g.
>
>row 1 = 500 bytes,
>row 2 = 600 bytes
>row 3 = 80 bytes
>row 4 = 300 bytes
>
>etc.
>
>Like the info that DBSTAT gives, but per row, not per page. This doesn't
>need to be performant - it's for usage analysis during development time.
>
>- Deon
>
>-Original Message-
>From: sqlite-users  On
>Behalf Of Simon Slavin
>Sent: Wednesday, January 22, 2020 5:24 PM
>To: SQLite mailing list 
>Subject: Re: [sqlite] Row length in SQLITE
>
>On 22 Jan 2020, at 11:44pm, Deon Brewis  wrote:
>
>> Is there any way to get the length of rows in a table / index in
>sqlite?
>
>Do you mean the count of rows in a table / index ?
>
>   SELECT count(*) FROM MyTable
>
>There's no easy fast way to do this because SQLite doesn't keep that
>number handy anywhere.  It stores the entries in a tree and it would have
>to manually count the leaves of the tree.
>
>
>
>Or do you mean the count of columns in a table / index ?
>
>SELECT * FROM MyTable LIMIT 1
>
>and count the number of columns returned.  Or in C
>
>sqlite3_column_count()
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users





Re: [sqlite] Row length in SQLITE

2020-01-27 Thread Keith Medcalf

You can certainly get the max and average cell size per page of rows from 
dbstat, which is, I think, the most granular data available, as well as the 
average and max for all the rows taken together.  Assuming that the table is a 
"rowid" table, that is the data for the "leaf" pages only.  As in:

   select Schema,
  Name,
  PageNo,
  ncell as nrows,
  payload / ncell as avglen,
  mx_payload as maxlen,
  payload as payload,
  unused as unused,
  pgsize as total
 from dbStat
where PageType == 'leaf'
  and Schema == 'main'
  and Name == 'Details'
union all
   select Schema,
  Name,
  'All',
  sum(ncell) as nrows,
  sum(payload) / sum(ncell) as avglen,
  max(mx_payload) as maxlen,
  sum(payload) as payload,
  sum(unused) as unused,
  sum(pgsize) as total
 from dbStat
where PageType == 'leaf'
  and Schema == 'main'
  and Name == 'Details'
;

The row size will not vary by a lot unless the table contains 'text' or 'blob' 
column data.  The schema and summary for the above table look like this:

CREATE TABLE Details
(
ApplianceID integer not null references Appliances(ApplianceID) on 
delete cascade,
SrcIntID    integer not null references Interfaces(InterfaceID) on 
delete cascade,
DstIntID    integer not null references Interfaces(InterfaceID) on 
delete cascade,
Transport   text not null collate nocase,
SrcHostID   integer not null references Hosts(HostID) on delete cascade,
SrcPort integer not null,
DstHostID   integer not null references Hosts(HostID) on delete cascade,
DstPort integer not null,
Action  integer not null,
Count   integer not null,
FileID  integer not null references Files(FileID) on delete cascade
);

schema  name     'All'  nrows     avglen  maxlen  payload    unused   total
------  -------  -----  --------  ------  ------  ---------  -------  ---------
main    Details  All    11608351  27      31      321835482  1851729  404230144

which indicates that all the rows are about the same size and the average is 27 
bytes, which is somewhat less than the 95 bytes max I would expect per row if 
the integers were all stored as 8 byte integers.

Are you trying to find rows that have "wacky large" data (as in blob/text) 
stored in them?

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Deon Brewis
>Sent: Friday, 24 January, 2020 21:14
>To: SQLite mailing list 
>Subject: Re: [sqlite] Row length in SQLITE
>
>No I mean e.g.
>
>row 1 = 500 bytes,
>row 2 = 600 bytes
>row 3 = 80 bytes
>row 4 = 300 bytes
>
>etc.
>
>Like the info that DBSTAT gives, but per row, not per page. This doesn't
>need to be performant - it's for usage analysis during development time.
>
>- Deon
>
>-Original Message-
>From: sqlite-users  On
>Behalf Of Simon Slavin
>Sent: Wednesday, January 22, 2020 5:24 PM
>To: SQLite mailing list 
>Subject: Re: [sqlite] Row length in SQLITE
>
>On 22 Jan 2020, at 11:44pm, Deon Brewis  wrote:
>
>> Is there any way to get the length of rows in a table / index in
>sqlite?
>
>Do you mean the count of rows in a table / index ?
>
>   SELECT count(*) FROM MyTable
>
>There's no easy fast way to do this because SQLite doesn't keep that
>number handy anywhere.  It stores the entries in a tree and it would have
>to manually count the leaves of the tree.
>
>
>
>Or do you mean the count of columns in a table / index ?
>
>SELECT * FROM MyTable LIMIT 1
>
>and count the number of columns returned.  Or in C
>
>sqlite3_column_count()
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users





Re: [sqlite] SQL CASE WHEN THEN ELSE END

2020-01-27 Thread Keith Medcalf

This version generates the most efficient query plan in 3.31.0 when you have 
indexes on the necessary columns:

CREATE INDEX t0_1 on t0 (a, idate, c); -- c does not have to be in the index
CREATE INDEX t1_1 on t1 (f, idate);
CREATE INDEX t2_1 on t2 (pid, wyear); -- this could be a without rowid table 
with both columns in the primary key

with keys (pid, idate0, idate1)
  as (
  select distinct pid,
 (
  select max(idate)
from t0
   where a == pid
 ),
 (
  select max(idate)
from t1
   where f == pid
 )
from (
  select distinct pid
from t2
 )
 )
  SELECT a.a,
 a.c,
 a.e,
 b.g,
 b.h,
 b.i,
 coalesce((
   SELECT 'YES'
 FROM t2
WHERE wYear == a.c
  AND pid == a.a
  ),  'NO') AS digital
FROM t0 as a, t1 as b, keys
   WHERE a.a == keys.pid
 AND b.f == keys.pid
 AND a.idate == keys.idate0
 AND b.idate == keys.idate1
 AND a.c == 2020
;

Without help the query planner does not seem to generate a very good plan, but 
maybe that is because the sample data is so small ... or maybe it does and I 
cannot tell with such small data ... In any case, this forces the query to 
execute in the manner I think it should.  If you take the "distinct" out of the 
keys select, it frees up the query planner to perhaps find a better plan; you 
still need the "select distinct pid from t2" to prevent duplicate rows.

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Keith Medcalf
>Sent: Monday, 27 January, 2020 00:28
>To: SQLite mailing list 
>Subject: Re: [sqlite] SQL CASE WHEN THEN ELSE END
>
>
>Do you perhaps mean:
>
>  SELECT a.a,
> a.c,
> a.e,
> b.g,
> b.h,
> b.i,
> coalesce((
>   SELECT 'YES'
> FROM t2
>WHERE wYear == a.c
>  AND pid == a.a
>  ),  'NO') AS digital
>FROM t0 as a, t1 as b
>   WHERE a.a == b.f
> AND a.idate == (SELECT MAX(idate) from t0 where a = a.a)
> AND b.idate == (SELECT MAX(idate) from t1 where f = a.a)
> AND a.a IN (SELECT pid FROM t2)
> AND a.c == 2020
>;
>
>--
>The fact that there's a Highway to Hell but only a Stairway to Heaven
>says a lot about anticipated traffic volume.
>
>>-Original Message-
>>From: sqlite-users  On
>>Behalf Of Jose Isaias Cabrera
>>Sent: Sunday, 26 January, 2020 19:44
>>To: SQLite mailing list 
>>Subject: [sqlite] SQL CASE WHEN THEN ELSE END
>>
>>
>>Greetings!
>>
>>I am getting the wrong output, and I don't know how to get it to work.
>>Please take a look at the following (Pardon the lengthy data):
>>create table t0 (n INTEGER PRIMARY KEY, a, b, c, d, e, idate);
>>insert into t0 (a, b, c, d, e, idate) values ('p001', 1, 2019, 'n', 4,
>>'2019-02-11');
>>insert into t0 (a, b, c, d, e, idate) values ('p002', 2, 2019, 'n', 4,
>>'2019-02-11');
>>insert into t0 (a, b, c, d, e, idate) values ('p003', 3, 2019, 'n', 4,
>>'2019-02-11');
>>insert into t0 (a, b, c, d, e, idate) values ('p004', 4, 2019, 'y', 4,
>>'2019-02-11');
>>insert into t0 (a, b, c, d, e, idate) values ('p005', 5, 2019, 'y', 4,
>>'2019-02-11');
>>insert into t0 (a, b, c, d, e, idate) values ('p001', 10, 2020, 'n', 4,
>>'2019-02-12');
>>insert into t0 (a, b, c, d, e, idate) values ('p002', 2, 2019, 'n', 4,
>>'2019-02-12');
>>insert into t0 (a, b, c, d, e, idate) values ('p003', 3, 2019, 'y', 4,
>>'2019-02-12');
>>insert into t0 (a, b, c, d, e, idate) values ('p004', 4, 2019, 'y', 4,
>>'2019-02-12');
>>insert into t0 (a, b, c, d, e, idate) values ('p005', 5, 2019, 'y', 4,
>>'2019-02-12');
>>insert into t0 (a, b, c, d, e, idate) values ('p001', 10, 2020, 'n', 4,
>>'2019-02-13');
>>insert into t0 (a, b, c, d, e, idate) values ('p002', 2, 2019, 'n', 4,
>>'2019-02-13');
>>insert into t0 (a, b, c, d, e, idate) values ('p003', 3, 2020, 'y', 4,
>>'2019-02-13');
>>insert into t0 (a, b, c, d, e, idate) values ('p004', 5, 2019, 'n', 4,
>>'2019-02-13');
>>insert into t0 (a, b, c, d, e, idate) values ('p005', 5, 2020, 'y', 8,
>>'2019-02-13');
>>
>>create table t1 (n INTEGER PRIMARY KEY, f, g, h, i, j, idate);
>>insert into t1 (f, g, h, i, j, idate) values ('p001', 1, 4, 'n', 4,
>>'2019-02-11');
>>insert into t1 (f, g,

Re: [sqlite] SQL CASE WHEN THEN ELSE END

2020-01-26 Thread Keith Medcalf

Do you perhaps mean:

  SELECT a.a,
 a.c,
 a.e,
 b.g,
 b.h,
 b.i,
 coalesce((
   SELECT 'YES'
 FROM t2
WHERE wYear == a.c
  AND pid == a.a
  ),  'NO') AS digital
FROM t0 as a, t1 as b
   WHERE a.a == b.f
 AND a.idate == (SELECT MAX(idate) from t0 where a = a.a)
 AND b.idate == (SELECT MAX(idate) from t1 where f = a.a)
 AND a.a IN (SELECT pid FROM t2)
 AND a.c == 2020
;
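As a sanity check, the correlated MAX(idate) predicate used above keeps only 
the newest row per key. A reduced sketch (Python's sqlite3 module, with a 
cut-down stand-in for t0, purely illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t0 (a, c, idate);
INSERT INTO t0 VALUES
    ('p001', 2019, '2019-02-11'),
    ('p001', 2020, '2019-02-13'),
    ('p002', 2019, '2019-02-11'),
    ('p002', 2020, '2019-02-12');
""")

# The correlated subquery keeps only the newest row per key, matching the
# "x.idate == (SELECT MAX(idate) ...)" predicate in the query above.
rows = con.execute("""
    SELECT x.a, x.c, x.idate
      FROM t0 AS x
     WHERE x.idate == (SELECT max(idate) FROM t0 WHERE a == x.a)
     ORDER BY x.a
""").fetchall()
print(rows)
```

Note the outer table is aliased (x) so the inner reference to the latest idate 
per key reads unambiguously.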

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Jose Isaias Cabrera
>Sent: Sunday, 26 January, 2020 19:44
>To: SQLite mailing list 
>Subject: [sqlite] SQL CASE WHEN THEN ELSE END
>
>
>Greetings!
>
>I am getting the wrong output, and I don't know how to get it to work.
>Please take a look at the following (Pardon the lengthy data):
>create table t0 (n INTEGER PRIMARY KEY, a, b, c, d, e, idate);
>insert into t0 (a, b, c, d, e, idate) values ('p001', 1, 2019, 'n', 4,
>'2019-02-11');
>insert into t0 (a, b, c, d, e, idate) values ('p002', 2, 2019, 'n', 4,
>'2019-02-11');
>insert into t0 (a, b, c, d, e, idate) values ('p003', 3, 2019, 'n', 4,
>'2019-02-11');
>insert into t0 (a, b, c, d, e, idate) values ('p004', 4, 2019, 'y', 4,
>'2019-02-11');
>insert into t0 (a, b, c, d, e, idate) values ('p005', 5, 2019, 'y', 4,
>'2019-02-11');
>insert into t0 (a, b, c, d, e, idate) values ('p001', 10, 2020, 'n', 4,
>'2019-02-12');
>insert into t0 (a, b, c, d, e, idate) values ('p002', 2, 2019, 'n', 4,
>'2019-02-12');
>insert into t0 (a, b, c, d, e, idate) values ('p003', 3, 2019, 'y', 4,
>'2019-02-12');
>insert into t0 (a, b, c, d, e, idate) values ('p004', 4, 2019, 'y', 4,
>'2019-02-12');
>insert into t0 (a, b, c, d, e, idate) values ('p005', 5, 2019, 'y', 4,
>'2019-02-12');
>insert into t0 (a, b, c, d, e, idate) values ('p001', 10, 2020, 'n', 4,
>'2019-02-13');
>insert into t0 (a, b, c, d, e, idate) values ('p002', 2, 2019, 'n', 4,
>'2019-02-13');
>insert into t0 (a, b, c, d, e, idate) values ('p003', 3, 2020, 'y', 4,
>'2019-02-13');
>insert into t0 (a, b, c, d, e, idate) values ('p004', 5, 2019, 'n', 4,
>'2019-02-13');
>insert into t0 (a, b, c, d, e, idate) values ('p005', 5, 2020, 'y', 8,
>'2019-02-13');
>
>create table t1 (n INTEGER PRIMARY KEY, f, g, h, i, j, idate);
>insert into t1 (f, g, h, i, j, idate) values ('p001', 1, 4, 'n', 4,
>'2019-02-11');
>insert into t1 (f, g, h, i, j, idate) values ('p002', 2, 3, 'n', 4,
>'2019-02-11');
>insert into t1 (f, g, h, i, j, idate) values ('p003', 3, 2, 'n', 4,
>'2019-02-11');
>insert into t1 (f, g, h, i, j,  idate) values ('p004', 4, 5, 'y', 4,
>'2019-02-11');
>insert into t1 (f, g, h, i, j, idate) values ('p005', 5, 3, 'y', 4,
>'2019-02-11');
>insert into t1 (f, g, h, i, j, idate) values ('p001', 10, 2, 'n', 4,
>'2019-02-12');
>insert into t1 (f, g, h, i, j, idate) values ('p002', 2, 6, 'n', 4,
>'2019-02-12');
>insert into t1 (f, g, h, i, j, idate) values ('p003', 3, 7, 'y', 4,
>'2019-02-12');
>insert into t1 (f, g, h, i, j, idate) values ('p004', 4, 1, 'y', 4,
>'2019-02-12');
>insert into t1 (f, g, h, i, j, idate) values ('p005', 5, 2, 'y', 4,
>'2019-02-12');
>insert into t1 (f, g, h, i, j, idate) values ('p001', 10, 1, 'n', 4,
>'2019-02-13');
>insert into t1 (f, g, h, i, j, idate) values ('p002', 2, 8, 'n', 4,
>'2019-02-13');
>insert into t1 (f, g, h, i, j, idate) values ('p003', 3, 9, 'y', 4,
>'2019-02-13');
>insert into t1 (f, g, h, i, j, idate) values ('p004', 5, 3, 'n', 4,
>'2019-02-13');
>insert into t1 (f, g, h, i, j, idate) values ('p005', 5, 3, 'y', 8,
>'2019-02-13');
>
>create table t2 (pid, WYear);
>insert into t2 values ('p001', 2019);
>insert into t2 values ('p003', 2019);
>insert into t2 values ('p004', 2019);
>insert into t2 values ('p002', 2020);
>insert into t2 values ('p003', 2020);
>insert into t2 values ('p005', 2020);
>
>When I run this SELECT,
>
>SELECT a.a, a.c, a.e, b.g, b.h, b.i,
>(
>CASE
>(
>  SELECT WYear FROM t2 WHERE pid = a.a
>)
>WHEN c.WYear = 2020 THEN “YES”
>ELSE “NO” END
>) AS DIGITAL
>FROM t0 as a, t1 as b, t2 as c
>WHERE a.a = b.f and a.a = c.pid
>AND a.c = 2020
>AND a.idate = (SELECT MAX(idate) from t0 where a = a.a)
>AND b.idate = (SELECT MAX(idate) from t1 where f = a.a)
>GROUP BY a.a
>;
>
>I get,
>
>p001|2020|4|10|1|n|NO
>p003|2020|4|3|9|y|NO
>p005|2020|8|5|3|y|NO
>
>I expected this,
>
>p001|2020|4|10|1|n|NO
>p003|2020|4|3|9|y|YES
>p005|2020|8|5|3|y|YES
>
>I've tried a few changes, but nothing seems to work.  Thoughts please?
>thanks.
>
>josé
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org

Re: [sqlite] SQL CASE WHEN THEN ELSE END

2020-01-26 Thread Keith Medcalf

I get nothing at all except a complaint that the syntax is invalid.  In 
particular 

(
CASE
(
  SELECT WYear FROM t2 WHERE pid = a.a
)
WHEN c.WYear = 2020 THEN “YES”
ELSE “NO” END
) AS DIGITAL

is not a valid scalar expression; parsing fails at "WHEN".  What exactly do 
you intend this scalar expression to do?

(And if the syntax were not invalid, the result would always be NO, since it 
is impossible for the result of the subselect to equal the 0 or 1 (true or 
false) boolean expression after the WHEN.  The subselect will always be a 
4-digit year, because that is all that is in t2 -- or NULL if not found, and a 
random year at that, since there can be multiple rows with the same pid and 
you did not specify which one you want.)
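The distinction being drawn is between SQLite's two CASE forms: "CASE operand 
WHEN value THEN ..." compares the operand to each WHEN value, while "CASE WHEN 
condition THEN ..." evaluates each WHEN as a boolean. A small sketch (via 
Python's sqlite3 module, illustrative only) of why mixing the two always falls 
through to ELSE:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Expression form: the operand is compared to each WHEN value.
expr_form = con.execute(
    "SELECT CASE 2020 WHEN 2020 THEN 'YES' ELSE 'NO' END"
).fetchone()[0]

# The mixed form: the operand (a year) is compared against the boolean
# result of (2020 = 2020), i.e. 1 -- never equal to a 4-digit year, so
# the ELSE branch always wins.
mixed_form = con.execute(
    "SELECT CASE 2020 WHEN (2020 = 2020) THEN 'YES' ELSE 'NO' END"
).fetchone()[0]

# Searched form: each WHEN holds a full boolean condition.
searched_form = con.execute(
    "SELECT CASE WHEN 2020 = 2020 THEN 'YES' ELSE 'NO' END"
).fetchone()[0]

print(expr_form, mixed_form, searched_form)  # YES NO YES
```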

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Jose Isaias Cabrera
>Sent: Sunday, 26 January, 2020 19:44
>To: SQLite mailing list 
>Subject: [sqlite] SQL CASE WHEN THEN ELSE END
>
>
>Greetings!
>
>I am getting the wrong output, and I don't know how to get it to work.
>Please take a look at the following (Pardon the lengthy data):
>create table t0 (n INTEGER PRIMARY KEY, a, b, c, d, e, idate);
>insert into t0 (a, b, c, d, e, idate) values ('p001', 1, 2019, 'n', 4,
>'2019-02-11');
>insert into t0 (a, b, c, d, e, idate) values ('p002', 2, 2019, 'n', 4,
>'2019-02-11');
>insert into t0 (a, b, c, d, e, idate) values ('p003', 3, 2019, 'n', 4,
>'2019-02-11');
>insert into t0 (a, b, c, d, e, idate) values ('p004', 4, 2019, 'y', 4,
>'2019-02-11');
>insert into t0 (a, b, c, d, e, idate) values ('p005', 5, 2019, 'y', 4,
>'2019-02-11');
>insert into t0 (a, b, c, d, e, idate) values ('p001', 10, 2020, 'n', 4,
>'2019-02-12');
>insert into t0 (a, b, c, d, e, idate) values ('p002', 2, 2019, 'n', 4,
>'2019-02-12');
>insert into t0 (a, b, c, d, e, idate) values ('p003', 3, 2019, 'y', 4,
>'2019-02-12');
>insert into t0 (a, b, c, d, e, idate) values ('p004', 4, 2019, 'y', 4,
>'2019-02-12');
>insert into t0 (a, b, c, d, e, idate) values ('p005', 5, 2019, 'y', 4,
>'2019-02-12');
>insert into t0 (a, b, c, d, e, idate) values ('p001', 10, 2020, 'n', 4,
>'2019-02-13');
>insert into t0 (a, b, c, d, e, idate) values ('p002', 2, 2019, 'n', 4,
>'2019-02-13');
>insert into t0 (a, b, c, d, e, idate) values ('p003', 3, 2020, 'y', 4,
>'2019-02-13');
>insert into t0 (a, b, c, d, e, idate) values ('p004', 5, 2019, 'n', 4,
>'2019-02-13');
>insert into t0 (a, b, c, d, e, idate) values ('p005', 5, 2020, 'y', 8,
>'2019-02-13');
>
>create table t1 (n INTEGER PRIMARY KEY, f, g, h, i, j, idate);
>insert into t1 (f, g, h, i, j, idate) values ('p001', 1, 4, 'n', 4,
>'2019-02-11');
>insert into t1 (f, g, h, i, j, idate) values ('p002', 2, 3, 'n', 4,
>'2019-02-11');
>insert into t1 (f, g, h, i, j, idate) values ('p003', 3, 2, 'n', 4,
>'2019-02-11');
>insert into t1 (f, g, h, i, j,  idate) values ('p004', 4, 5, 'y', 4,
>'2019-02-11');
>insert into t1 (f, g, h, i, j, idate) values ('p005', 5, 3, 'y', 4,
>'2019-02-11');
>insert into t1 (f, g, h, i, j, idate) values ('p001', 10, 2, 'n', 4,
>'2019-02-12');
>insert into t1 (f, g, h, i, j, idate) values ('p002', 2, 6, 'n', 4,
>'2019-02-12');
>insert into t1 (f, g, h, i, j, idate) values ('p003', 3, 7, 'y', 4,
>'2019-02-12');
>insert into t1 (f, g, h, i, j, idate) values ('p004', 4, 1, 'y', 4,
>'2019-02-12');
>insert into t1 (f, g, h, i, j, idate) values ('p005', 5, 2, 'y', 4,
>'2019-02-12');
>insert into t1 (f, g, h, i, j, idate) values ('p001', 10, 1, 'n', 4,
>'2019-02-13');
>insert into t1 (f, g, h, i, j, idate) values ('p002', 2, 8, 'n', 4,
>'2019-02-13');
>insert into t1 (f, g, h, i, j, idate) values ('p003', 3, 9, 'y', 4,
>'2019-02-13');
>insert into t1 (f, g, h, i, j, idate) values ('p004', 5, 3, 'n', 4,
>'2019-02-13');
>insert into t1 (f, g, h, i, j, idate) values ('p005', 5, 3, 'y', 8,
>'2019-02-13');
>
>create table t2 (pid, WYear);
>insert into t2 values ('p001', 2019);
>insert into t2 values ('p003', 2019);
>insert into t2 values ('p004', 2019);
>insert into t2 values ('p002', 2020);
>insert into t2 values ('p003', 2020);
>insert into t2 values ('p005', 2020);
>
>When I run this SELECT,
>
>SELECT a.a, a.c, a.e, b.g, b.h, b.i,
>(
>CASE
>(
>  SELECT WYear FROM t2 WHERE pid = a.a
>)
>WHEN c.WYear = 2020 THEN “YES”
>ELSE “NO” END
>) AS DIGITAL
>FROM t0 as a, t1 as b, t2 as c
>WHERE a.a = b.f and a.a = c.pid
>AND a.c = 2020
>AND a.idate = (SELECT MAX(idate) from t0 where a = a.a)
>AND b.idate = (SELECT MAX(idate) from t1 where f = a.a)
>GROUP BY a.a
>;
>
>I get,
>
>p001|2020|4|10|1|n|NO
>p003|2020|4|3|9|y|NO
>p005|2020|8|5|3|y|NO
>
>I expected this,
>
>p001|2020|4|10|1|n|NO
>p003|2020|4|3|9|y|YES
>p005|2020|8|5|3|y|YES
>
>I've tried a few changes, but nothing seems to work.  Thoughts please?
>thanks.
>
>josé
>___
>sqlite-users mailing list

Re: [sqlite] importing data to a table that has generated-columns

2020-01-26 Thread Keith Medcalf

On Sunday, 26 January, 2020 10:29, chiahui chen  wrote:

>After creating a table (total 8 columns including 1 generated column), I
>tried to import data from a csv file (each record has values for 7 
>columns that match the non-generated column names and data types, no 
>headers ).

>The system issued "error: table has 7 columns but 8 values were supplied."

>I wonder why.

This is because of the way the .import command (currently) works in the CLI.

Looking at the code this is how it presently works:

1.  Check whether <table> exists and, if it does not, create it, optionally 
using the column names from the CSV import file in the order they appear.
2.  Now that the table exists, use "SELECT * FROM <table>" to determine the 
number of columns in the table (which will include generated always columns, 
if any).
3.  Generate a statement of the form "INSERT INTO <table> VALUES (?, ...)" 
with one ? parameter for each column in the table.
4.  Loop through the CSV file, binding the parameters to the above statement 
and executing a sqlite3_step at the end of each row.

You will note that:
1.  Column names in the CSV are *only* used if <table> is created.
2.  Column names in the CSV are *not* used for the INSERT statement.
3.  The number of columns from which you can "SELECT *" is not the same as the 
number of columns you can "INSERT INTO" if the table pre-exists and contains 
generated always columns (which cannot be INSERT INTO'd).
4.  Specifying column names (a header line) does not "match up" those column 
names to the INSERT INTO column names.

>After experimenting different ways to import data to a table that has a
>generated column, so far I only found that  .read command with a .sql
>file that was output  as a result of  'mode insert'  is successful. Is 
>there any other ways to import data to an existing generated-column table?

1. .import into a temporary <staging> table, then use INSERT INTO <table> 
(<column-list>) SELECT * FROM <staging>, and then drop the temporary table.
2. Include/load the csv extension and use an INSERT with an explicit column 
list to insert into <table> from the csv virtual table.

Richard *may* make some changes to the .import logic which *might* appear in 
the next release version; until then you will have to use one of the three 
noted work-arounds (the two above, plus the .read/'mode insert' approach you 
already found).
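A sketch of work-around 1, using Python's sqlite3 module as a stand-in for the 
CLI's .import step (names are illustrative, and generated columns require 
SQLite 3.31 or later):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Target table has a generated column that cannot be INSERTed into.
con.executescript("""
CREATE TABLE target (
    a INTEGER,
    b INTEGER,
    c INTEGER GENERATED ALWAYS AS (a + b) VIRTUAL
);
CREATE TEMP TABLE staging (a INTEGER, b INTEGER);
""")

# Stand-in for '.import file staging': load the raw rows somewhere first.
con.executemany("INSERT INTO staging VALUES (?, ?)", [(1, 2), (10, 20)])

# Then insert with an explicit column list, skipping the generated column.
con.execute("INSERT INTO target (a, b) SELECT a, b FROM staging")
con.execute("DROP TABLE staging")

rows = con.execute("SELECT a, b, c FROM target ORDER BY a").fetchall()
print(rows)  # [(1, 2, 3), (10, 20, 30)]
```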

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] 3.31.0 Request TRUSTED_SCHEMA and pragma's

2020-01-21 Thread Keith Medcalf

On Tuesday, 21 January, 2020 05:28, Richard Hipp  wrote:

>On 1/21/20, Keith Medcalf  wrote:

>> Richard,
>>
>> The TRUSTED_SCHEMA setting works really well but I have noticed one
>> problem (there may be more, but I haven't run across any yet) with 
>> it that is perhaps easy to address, though it needs to be done 
>> properly.  That is perhaps adding an innocuous flag to pragma 
>> definitions in mkpragmatab.tcl so that it can be carried though 
>> into the vtable code that handles pragma_ xConnect 
>> method.
>>
>> This would permit pragma's such as table_info (for example) to be
>> marked as innocuous so that pragma_table_info could be used in a 
>> view even when the schema is untrusted.

> That would potentially leak information about the schemas of other
> attached database files.  It seems like a harmless information leak,
> but it is a leak nevertheless.
>
> If you are setting untrusted schema (as you probably should) but you
> need to use pragma virtual tables inside of triggers and views,
> consider putting them inside TEMP triggers and views.  TEMP triggers
> and views, because they must originate in the application itself, are
> always trusted.

Done, tested, and putting those views in temp works just fine.

And yes, I agree that not marking the pragma vtabs as innocuous is the right 
thing.

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Draft Documentation Change

2020-01-20 Thread Keith Medcalf

https://www.sqlite.org/draft/c3ref/vtab_config.html

should also reference SQLITE_VTAB_INNOCUOUS and SQLITE_VTAB_DIRECTONLY

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.




___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Error? ext/misc series.c and spellfix.c

2020-01-20 Thread Keith Medcalf

sqlite3_vtab_config(db, SQLITE_INNOCUOUS) -- should that not be 
sqlite3_vtab_config(db, SQLITE_VTAB_INNOCUOUS)?

Which explains why my quick patch in the pragma.c xConnect code to make all 
pragma vtabs innocuous didn't work (I copied from series.c) :)

  rc = sqlite3_declare_vtab(db, zBuf);
  if( rc==SQLITE_OK ){
pTab = (PragmaVtab*)sqlite3_malloc(sizeof(PragmaVtab));
if( pTab==0 ){
  rc = SQLITE_NOMEM;
}else{
  sqlite3_vtab_config(db, SQLITE_VTAB_INNOCUOUS);  // Make the vtab innocuous
  memset(pTab, 0, sizeof(PragmaVtab));
  pTab->pName = pPragma;
  pTab->db = db;
  pTab->iHidden = i;
  pTab->nHidden = j;
}
  }else{
*pzErr = sqlite3_mprintf("%s", sqlite3_errmsg(db));
  }

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.




___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] 3.31.0 Request TRUSTED_SCHEMA and pragma's

2020-01-20 Thread Keith Medcalf

Richard,

The TRUSTED_SCHEMA setting works really well, but I have noticed one problem 
with it (there may be more, but I haven't run across any yet) that is perhaps 
easy to address, though it needs to be done properly.  That is perhaps adding 
an innocuous flag to the pragma definitions in mkpragmatab.tcl so that it can 
be carried through into the vtable code that handles the pragma_<name> 
xConnect method.

This would permit pragma's such as table_info (for example) to be marked as 
innocuous so that pragma_table_info could be used in a view even when the 
schema is untrusted.

Whether a directonly flag is required I do not know; for example, one might 
never want pragma_integrity_check used in a view, though I presently don't 
really see any need for that, and the behaviour of those pragma vtabs might 
not need changing at all from the current behaviour.

Just some ideas (I don't know TCL that well, and it would require adding the 
flags in the C code, or I would submit some patch ideas myself).

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.




___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Can it (should it) be done in SQL?

2020-01-20 Thread Keith Medcalf

On Monday, 20 January, 2020 12:42, David Bicking  wrote:

> Thanks. I figured the solution would use a CTE (this is a CTE, isn't it??)
>Unfortunately, they were neither in SQLite, nor mentioned in any of the
>sql stuff I read when I taught myself to do SQL, so it took me a while to
>figure out how it works.

Yes, it is a recursive CTE.  It calculates the UNK line for the current 
period, then, for each additional period for which there is a goal, it 
calculates that period's UNK line.  An adjustment only needs to be applied if 
the previous UNK was less than 0.

>Unfortunately, I extend the goals to cover all 12 months, leaving
>actuals for just periods 1 to 3. The result has UNK lines for periods 4
>to 12, with a null for the amount. I am guessing that this is because
>in the unks cte it is subtracting the sum from data, but there is nothing
>in data, so it is subtracting a null, resulting in a null.

You can fix this by replacing all the sum(amount) with total(amount).  sum 
returns null if there is no sum, total returns 0.0.  Then cast the result back 
to an integer at the end.

Do that with this too

>select amount
>  from goals
>where period == unks.period+1

to make sure the return value is 0.0 rather than null (even though there can 
only be one record, it is the easiest way).
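The sum()/total() null behaviour is easy to see directly (sketched here with 
Python's sqlite3 module, illustrative only):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE goals (period INTEGER, amount INTEGER)")

# No matching rows: sum() yields NULL, total() yields 0.0.
s, t = con.execute(
    "SELECT sum(amount), total(amount) FROM goals WHERE period = 4"
).fetchone()
print(s, t)  # None 0.0

# total() always returns a float, so cast the result back to an integer
# once the arithmetic is done.
v = con.execute(
    "SELECT cast(total(amount) - 5 AS integer) FROM goals"
).fetchone()[0]
print(v)  # -5
```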

>I was able to put a coalesce around the data sum, and that does work.

>The line strftime('%m') seemed very strange. I guess without a time
>string, sqlite defaults to the current date and time. The sqlite docs
>don't mention this.

strftime('%m') is the same as strftime('%m', 'now'), the 'now' is the default 
if no arguments are provided for the datetime part.  However, what you probably 
want is strftime('%m', 'now', 'localtime') which will return the current month 
at your current timezone location, not at the prime meridian.
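A quick illustration of the 'now' default and the 'localtime' modifier 
(Python's sqlite3 module, illustrative only):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# A bare strftime('%m') defaults the datetime argument to 'now' (UTC).
utc_month = con.execute("SELECT strftime('%m')").fetchone()[0]
same = con.execute("SELECT strftime('%m', 'now')").fetchone()[0]

# 'localtime' shifts to the local timezone before formatting.
local_month = con.execute(
    "SELECT strftime('%m', 'now', 'localtime')"
).fetchone()[0]
print(utc_month, same, local_month)
```

Near midnight at a month boundary, utc_month and local_month can legitimately 
differ, which is exactly the point of the 'localtime' modifier.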

A CTE that will handle those cases might look like this (with all explicit 
casts and allowing NULL amounts, and data with no goals, and goals with no 
data):

with first (period)
  as (
  values (cast(strftime('%m', 'now', 'localtime') as integer))
 ),
 last (period)
  as (
  select max((
  select max(period)
from goals
 ), (
   select max(period)
 from data
 ))
 ),
 unks (period, type, amount)
  as (
 select first.period,
'UNK',
(
 select total(amount)
   from goals
  where period between 1 and first.period
) - (
 select total(amount)
   from data
  where period between 1 and first.period
)
   from first, last
  where first.period <= last.period
  union all
 select unks.period + 1,
'UNK',
(
 select total(amount)
   from goals
  where period == unks.period + 1
) - (
 select total(amount)
   from data
  where period == unks.period + 1
) + min(unks.amount, 0)
   from unks, last
  where unks.period < last.period
 )
   select period,
  type,
  amount
 from data
union all
   select period,
  type,
  cast(max(0, amount) as integer)
from unks
order by 1, 2;

You do, of course, need an index on Data(period) to avoid all the table scans.

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Find schema of a table in a query

2020-01-20 Thread Keith Medcalf

You are correct that same table names in temp obscure those names from main and 
other attached databases, so your order by addition is required.

  select name
from pragma_database_list as d
   where exists (select *
   from pragma_table_info
  where schema == d.name
and arg == ?)
order by seq != 1, seq
   limit 1;
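Running the query above through Python's sqlite3 module (illustrative only) 
shows the order by doing its job when the same table name exists in both main 
and temp:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE main.x (x)")
con.execute("CREATE TEMP TABLE x (x)")

# The query from the post: find the schema an unqualified name resolves to,
# preferring temp (seq 1), then main (seq 0), then attach order.
QUERY = """
SELECT name
  FROM pragma_database_list AS d
 WHERE EXISTS (SELECT *
                 FROM pragma_table_info
                WHERE schema == d.name
                  AND arg == ?)
 ORDER BY seq != 1, seq
 LIMIT 1
"""

# temp.x shadows main.x for unqualified names, so temp comes back first.
r1 = con.execute(QUERY, ("x",)).fetchone()[0]
con.execute("DROP TABLE temp.x")
r2 = con.execute(QUERY, ("x",)).fetchone()[0]
print(r1, r2)  # temp main
```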

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Keith Medcalf
>Sent: Monday, 20 January, 2020 02:18
>To: SQLite mailing list 
>Subject: Re: [sqlite] Find schema of a table in a query
>
>
>The "main" database is always seq == 0, the "temp" database is always seq
>== 1, and other databases are seq == 2 and greater in the order they were
>attached.  seq 2 -> whatever is always contiguous.  The table search
>order for unqualified names (when a search is required) is always in the
>seq returned by pragma database_list, so
>
>select name
>  from pragma_database_list as d
> where exists (select *
> from pragma_table_info
>where schema == d.name
>  and arg == 'x')
> limit 1;
>
>will always return the correct schema name, no order by required.  (And
>you can use table_info rather than table_xinfo because a table must
>always have one named column that is not hidden.  If this were not the
>case, then you would have to use table_xinfo to ensure that something is
>returned for that pragma lookup.)
>
>SQLite version 3.31.0 2020-01-20 03:22:36
>Enter ".help" for usage hints.
>Connected to a transient in-memory database.
>Use ".open FILENAME" to reopen on a persistent database.
>sqlite> attach database ':memory:' as two;
>sqlite> attach database ':memory:' as three;
>sqlite> attach database ':memory:' as four;
>sqlite> attach database ':memory:' as five;
>sqlite> create table main.x(x);
>sqlite> insert into main.x values (0);
>sqlite> create table temp.x(x);
>sqlite> insert into temp.x values (1);
>sqlite> create table two.x(x);
>sqlite> insert into two.x values (2);
>sqlite> create table three.x(x);
>sqlite> insert into three.x values (3);
>sqlite> create table four.x(x);
>sqlite> insert into four.x values (4);
>sqlite> create table five.x(x);
>sqlite> insert into five.x values (5);
>sqlite> pragma database_list;
>0|main|
>1|temp|
>2|two|
>3|three|
>4|four|
>5|five|
>sqlite> detach database three;
>sqlite> pragma database_list;
>0|main|
>1|temp|
>2|two|
>3|four|
>4|five|
>sqlite> select * from x;
>1
>sqlite> select name
>   ...>   from pragma_database_list as d
>   ...>  where exists (select *
>   ...>  from pragma_table_xinfo
>   ...> where schema == d.name
>   ...>   and arg == 'x')
>   ...>  limit 1;
>main
>sqlite> drop table main.x;
>sqlite> select * from x;
>1
>sqlite> select name
>   ...>   from pragma_database_list as d
>   ...>  where exists (select *
>   ...>  from pragma_table_xinfo
>   ...> where schema == d.name
>   ...>   and arg == 'x')
>   ...>  limit 1;
>temp
>sqlite> drop table four.x;
>sqlite> select * from x;
>1
>sqlite> select name
>   ...>   from pragma_database_list as d
>   ...>  where exists (select *
>   ...>  from pragma_table_xinfo
>   ...> where schema == d.name
>   ...>   and arg == 'x')
>   ...>  limit 1;
>temp
>sqlite> drop table temp.x;
>sqlite> select * from x;
>2
>sqlite> select name
>   ...>   from pragma_database_list as d
>   ...>  where exists (select *
>   ...>  from pragma_table_xinfo
>   ...> where schema == d.name
>   ...>   and arg == 'x')
>   ...>  limit 1;
>two
>sqlite> create table temp.x(x);
>sqlite> insert into temp.x values (1);
>sqlite> select * from x;
>1
>sqlite> select name
>   ...>   from pragma_database_list as d
>   ...>  where exists (select *
>   ...>  from pragma_table_xinfo
>   ...> where schema == d.name
>   ...>   and arg == 'x')
>   ...>  limit 1;
>temp
>sqlite>
>
>--
>The fact that there's a Highway to Hell but only a Stairway to Heaven
>says a lot about anticipated traffic volume.
>
>>-Original Message-
>>From: sqlite-users  On
>>Behalf Of x

Re: [sqlite] Find schema of a table in a query

2020-01-20 Thread Keith Medcalf

The "main" database is always seq == 0, the "temp" database is always seq == 1, 
and other databases are seq == 2 and greater in the order they were attached.  
seq 2 -> whatever is always contiguous.  The table search order for unqualified 
names (when a search is required) is always in the seq returned by pragma 
database_list, so

select name
  from pragma_database_list as d
 where exists (select *
 from pragma_table_info
where schema == d.name
  and arg == 'x')
 limit 1;

will always return the correct schema name, no order by required.  (And you 
can use table_info rather than table_xinfo because a table must always have 
one named column that is not hidden.  If this were not the case, then you 
would have to use table_xinfo to ensure that something is returned for that 
pragma lookup.)

SQLite version 3.31.0 2020-01-20 03:22:36
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> attach database ':memory:' as two;
sqlite> attach database ':memory:' as three;
sqlite> attach database ':memory:' as four;
sqlite> attach database ':memory:' as five;
sqlite> create table main.x(x);
sqlite> insert into main.x values (0);
sqlite> create table temp.x(x);
sqlite> insert into temp.x values (1);
sqlite> create table two.x(x);
sqlite> insert into two.x values (2);
sqlite> create table three.x(x);
sqlite> insert into three.x values (3);
sqlite> create table four.x(x);
sqlite> insert into four.x values (4);
sqlite> create table five.x(x);
sqlite> insert into five.x values (5);
sqlite> pragma database_list;
0|main|
1|temp|
2|two|
3|three|
4|four|
5|five|
sqlite> detach database three;
sqlite> pragma database_list;
0|main|
1|temp|
2|two|
3|four|
4|five|
sqlite> select * from x;
1
sqlite> select name
   ...>   from pragma_database_list as d
   ...>  where exists (select *
   ...>  from pragma_table_xinfo
   ...> where schema == d.name
   ...>   and arg == 'x')
   ...>  limit 1;
main
sqlite> drop table main.x;
sqlite> select * from x;
1
sqlite> select name
   ...>   from pragma_database_list as d
   ...>  where exists (select *
   ...>  from pragma_table_xinfo
   ...> where schema == d.name
   ...>   and arg == 'x')
   ...>  limit 1;
temp
sqlite> drop table four.x;
sqlite> select * from x;
1
sqlite> select name
   ...>   from pragma_database_list as d
   ...>  where exists (select *
   ...>  from pragma_table_xinfo
   ...> where schema == d.name
   ...>   and arg == 'x')
   ...>  limit 1;
temp
sqlite> drop table temp.x;
sqlite> select * from x;
2
sqlite> select name
   ...>   from pragma_database_list as d
   ...>  where exists (select *
   ...>  from pragma_table_xinfo
   ...> where schema == d.name
   ...>   and arg == 'x')
   ...>  limit 1;
two
sqlite> create table temp.x(x);
sqlite> insert into temp.x values (1);
sqlite> select * from x;
1
sqlite> select name
   ...>   from pragma_database_list as d
   ...>  where exists (select *
   ...>  from pragma_table_xinfo
   ...> where schema == d.name
   ...>   and arg == 'x')
   ...>  limit 1;
temp
sqlite>

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of x
>Sent: Monday, 20 January, 2020 01:27
>To: SQLite mailing list 
>Subject: Re: [sqlite] Find schema of a table in a query
>
>WOW Keith. That’s the sqlite coding equivalent of scoring a hat trick in
>the world cup final. I’ve added an order by to get the solution
>
>select name from pragma_database_list d
>where (select name from pragma_table_xinfo where schema==d.name and
>arg==?1)
>order by seq!=1, seq limit 1;
>
>I’m assuming the temp db is always 1 in the seq column. Can anyone
>confirm that or should I change it to
>
>order by lower(name)!=temp, seq limit 1;
>
>Thanks. I also learned the parentheses are not required for pragma
>functions when there’s no params and alternate syntax when they are.
>
>
>
>From: sqlite-users  on
>behalf of Keith Medcalf 
>Sent: Sunday, January 19, 2020 8:32:06 PM
>To: SQLite mailing list 
>Subject: Re: [sqlite] Find schema of a table in a query
>
>
>On Sunday, 19 January, 2020 01:47, x  wrote:
>
>>Suppose you’re given a query ‘SELECT 1 from tbl’ by a user and you want
>>to know the name of 

Re: [sqlite] Find schema of a table in a query

2020-01-19 Thread Keith Medcalf

On Sunday, 19 January, 2020 01:47, x  wrote:

>Suppose you’re given a query ‘SELECT 1 from tbl’ by a user and you want
>to know the name of the schema that tbl belongs to. What’s the easiest
>way to do this?

>I know sqlite will use temp.tbl if it exists else main.tbl if it exists
>else it will search for the earliest attached schema with a table called
>tbl. Finding that involves the use of PRAGMA database_list and then
>querying each of the associated sqlite_master tables in turn for the
>existence of tbl until you get a match. Is there an easier way?

How about:

select name
  from pragma_database_list as d 
 where (select name 
  from pragma_table_xinfo 
 where schema == d.name 
   and arg == 'x') is not null;

Where you set "arg == 'tablename'", which will return all the schemas in 
which the specified tablename exists.
 
-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Bug? SQLITE_DEFAULT_DEFENSIVE and CLI .parameters

2020-01-19 Thread Keith Medcalf

Defining SQLITE_DEFAULT_DEFENSIVE prevents proper working of the CLI .parameter 
commands.

SQLite version 3.31.0 2020-01-19 18:49:07
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> .schema
sqlite> .param init
sqlite> .schema
sqlite> .param set :test test
Error: no such table: temp.sqlite_parameters
sqlite> CREATE TEMP TABLE sqlite_parameters(key TEXT PRIMARY KEY, value);
Error: object name reserved for internal use: sqlite_parameters
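The reservation of the sqlite_ prefix is easy to reproduce from any ordinary client, e.g. Python's stdlib sqlite3 module (a sketch; presumably the CLI normally creates temp.sqlite_parameters through internal settings that defensive mode disables):

```python
import sqlite3

con = sqlite3.connect(":memory:")
try:
    con.execute("CREATE TEMP TABLE sqlite_parameters(key TEXT PRIMARY KEY, value)")
    outcome = "created"
except sqlite3.OperationalError as exc:
    outcome = str(exc)   # names starting with sqlite_ are reserved
print(outcome)
```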

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume. 





Re: [sqlite] To edit in sqlite3 CLI multiline SQL statements?

2020-01-18 Thread Keith Medcalf

While there are lines to be edited:
  Press up arrow until line is recalled
  Edit the line
  Press the ENTER key to enter that line

You may have to compile your own copy to include readline support (on Linux), 
but it works for me on both Linux and Windows.

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of Csanyi Pal
>Sent: Saturday, 18 January, 2020 14:31
>To: sqlite-users@mailinglists.sqlite.org
>Subject: [sqlite] To edit in sqlite3 CLI multiline SQL statements?
>
>Hello,
>
>can one edit a multiline SQL statement in the sqlite3 CLI?
>
>Say I entered the following multiline SQL statement:
>
>sqlite> SELECT 1 UNION ALL
>...> SELECT 2 UNION ALL
>...> SELECT 3
>...> ;
>
>Then after a while I want to run it again, then how can I use the bash
>history to get back my multiline SQL statement, to eventually modify it
>and re-run it again?
>
>Is there a way to do this in sqlite3 CLI?
>
>--
>Best, Paul Chany





Re: [sqlite] Can it (should it) be done in SQL?

2020-01-18 Thread Keith Medcalf
Oops, wrong query pasted; it should be this one:

with p (period) as (
    values (cast(strftime('%m') as integer))
),
unks (period, type, amount) as (
    select p.period,
           'UNK',
           (select sum(amount) from goals where period between 1 and p.period)
         - (select sum(amount) from data  where period between 1 and p.period)
      from p
     where p.period <= (select max(period) from goals)
     union all
    select period + 1,
           'UNK',
           (select amount from goals where period == unks.period + 1)
         - (select sum(amount) from data where period == unks.period + 1)
         + case when unks.amount < 0 then unks.amount else 0 end
      from unks
     where period < (select max(period) from goals)
)
select period, type, amount
  from data
union all
select period, type, max(0, amount)
  from unks
order by 1, 2;
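Run against the sample Goals/Data rows from the original post, this query produces exactly the UNK rows David asked for. A minimal harness in Python's stdlib sqlite3 module (a sketch: the current-month anchor `cast(strftime('%m') as integer)` is pinned to period 1 here so the January example is reproducible, and RECURSIVE is written out explicitly, though SQLite also accepts recursive CTEs without the keyword):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Goals (period INTEGER PRIMARY KEY, amount INT);
    CREATE TABLE Data  (period INT, type TEXT, amount INT);
    INSERT INTO Goals VALUES (1,10), (2,10), (3,10);
    INSERT INTO Data  VALUES (1,'A',3), (2,'A',5), (2,'B',6), (3,'A',2);
""")

rows = con.execute("""
    with recursive p (period) as (
        values (1)  -- pinned; the post uses cast(strftime('%m') as integer)
    ),
    unks (period, type, amount) as (
        select p.period, 'UNK',
               (select sum(amount) from goals where period between 1 and p.period)
             - (select sum(amount) from data  where period between 1 and p.period)
          from p
         where p.period <= (select max(period) from goals)
         union all
        select period + 1, 'UNK',
               (select amount from goals where period == unks.period + 1)
             - (select sum(amount) from data where period == unks.period + 1)
             + case when unks.amount < 0 then unks.amount else 0 end
          from unks
         where period < (select max(period) from goals)
    )
    select period, type, amount from data
    union all
    select period, type, max(0, amount) from unks
    order by 1, 2
""").fetchall()
for row in rows:
    print(row)
```

The carry-forward of a negative UNK (period 2's shortfall of -1) into period 3, clamped to 0 on output, is what the plain UNION attempt in the question could not express.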



-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users  On
>Behalf Of David Bicking
>Sent: Saturday, 18 January, 2020 11:20
>To: SQLite Mailing List 
>Subject: [sqlite] Can it (should it) be done in SQL?
>
>I suspect the answer is that it is best to do this in the application
>program. However, the platform I want to use is dumb as a brick.
>It basically can call sqlite3_get_table, mildly reformat the data
>and send it to the display.
>
>Anyway, there are two tables
>CREATE TABLE Goals (period integer primary key, amount int);
>CREATE TABLE Data (period int, type text, amount int);
>
>periods above are 1 to 12, and amounts are zero or positive.
>
>INSERT INTO Goals Values (1,10), (2,10), (3,10);
>INSERT INTO Data Values (1,'A',3), (2,'A',5), (2,'B',6), (3,'A',2);
>
>Okay, last time I tried to send a list to the group it came out
>unreadable. Crossing fingers:
>
>PERIOD | TYPE | AMOUNT
>1      | A    | 3
>1      | UNK  | 7   -- the goal of 10 - data of 3
>2      | A    | 5
>2      | B    | 6
>2      | UNK  | 0   -- goals of 1&2 = 20 - previous lines of 21
>                    -- with the negative result changed to 0
>3      | A    | 2
>3      | UNK  | 7   -- goals 1-3 = 30 - previous lines of 23
>
>Hopefully that makes sense. I need to insert a row into the output so
>that each period ends at or above the accumulated goals.
>
>A further complication. The "UNK" row only gets inserted for the
>current or future periods. (i.e in two weeks it will be February,
>so the January UNK will be removed and the Feb UNK would
>become 6 (20 - 3 - 11).
>
>The best I got is
>
>select period, type, amt from Data
>UNION
>select period, 'UNK', (select sum(amount) from Goals G
>   where g.period <= goals.period)
> -(select sum(amount) from Data D
>   where d.period <= goals.period)
>from Goals;
>
>But it doesn't pick up the calculated "UNK" values from the prior
>periods, nor does it deal with negatives or with skipping the UNK
>calculation for periods in the past.
>
>So can this report be done within a query?
>
>Thanks,
>David
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users




