Re: [sqlite] After column add, what should be done to update the schema?

2020-02-27 Thread Jean-Christophe Deschamps


I use a simple ALTER TABLE ADD COLUMN statement. However, when I use 
SQLite Expert, the DDL is not reflected, even though the column is 
created. I read somewhere that the DDL is really just a copy of the 
text from when the table was created. Is there a way to refresh this?

I can't reproduce the issue with either Expert v3.5.96 or v5.3.5.472.
Please post a short reproducer.

sqlite-users mailing list

Re: [sqlite] How to determine the column type?

2019-12-14 Thread Jean-Christophe Deschamps

dump+100 (string), pass+1000 (string)

the type information disappears and former integers become mere strings

There must be something else going on here:

Z:> sqlite3
SQLite version 3.30.1 2019-10-10 20:19:45
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> create table T (N integer);
sqlite> insert into T values (1), (2), (3);
sqlite> select N, N+100, typeof(N+100), N+1000, typeof(N+1000) from T;
sqlite> .q
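The same check can be scripted with Python's stdlib sqlite3 module (a sketch mirroring the shell session above; table and values are the same):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE T (N INTEGER)")
con.executemany("INSERT INTO T VALUES (?)", [(1,), (2,), (3,)])
# Integer arithmetic on an integer column stays integer-typed.
rows = con.execute(
    "SELECT N, N + 100, typeof(N + 100), N + 1000, typeof(N + 1000) FROM T"
).fetchall()
for r in rows:
    print(r)  # e.g. (1, 101, 'integer', 1001, 'integer')
```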



Re: [sqlite] Roadmap?

2019-10-27 Thread Jean-Christophe Deschamps

Hi Simon,

Especially since it can't do something more fundamental than STDEV: 
return all surnames starting with the Unicode character 'Å'.

Reconsider, as this works fine.



Re: [sqlite] using lower function with utf8

2019-09-19 Thread Jean-Christophe Deschamps

I was messing about with this and tried the following in sqlite expert 

select unicode(lower(char(256)));

I was quite surprised when it responded with the correct result 257.

Looking at the sqlite3.c code I can’t see anything that suggests 
sqlite would handle lower() for non-ascii characters so I tried the 
same thing in a c programme and it came back with the expected wrong 
answer 256.

Am I right in saying 256 is the expected sqlite answer?

Can anyone suggest why it works in the former?

Expert v5 (I expect that's what you use) includes Unicode support and 
overrides sqlite3.dll lower().
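The override mechanism can be sketched with Python's stdlib sqlite3 module; registering an application-defined function with the same name and arity as a built-in shadows it, which is presumably how a manager like Expert substitutes a Unicode-aware lower():

```python
import sqlite3

con = sqlite3.connect(":memory:")
# The stock lower() only folds ASCII A-Z, so U+0100 comes back unchanged.
ascii_only = con.execute("SELECT unicode(lower(char(256)))").fetchone()[0]
print(ascii_only)  # 256

# Shadow the built-in with a Unicode-aware implementation.
con.create_function("lower", 1, lambda s: s.lower())
unicode_aware = con.execute("SELECT unicode(lower(char(256)))").fetchone()[0]
print(unicode_aware)  # 257, because Python's str.lower() is Unicode-aware
```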



Re: [sqlite] [EXTERNAL] select for power-meter accumulated total readings

2019-08-08 Thread Jean-Christophe Deschamps

Oops, didn't notice the date of said row was out of sequence wrt rowids.


Re: [sqlite] [EXTERNAL] select for power-meter accumulated total readings

2019-08-08 Thread Jean-Christophe Deschamps

Beware that row id 6655 isn't correct (total_kwh is suddenly decreasing).
This inconsistency may be related to row 6654 being missing: manipulated data?
So you'd have to sanitize your data first.

6653  2019-08-06 22:23:26.000  1494.00
6655  2019-07-30 22:32:26.000   150.00  <--
6656  2019-08-07 18:58:17.000  1673.90
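Such a sanity check can be automated with a window function (a sketch; the table and column names are made up to match the excerpt, and lag() needs SQLite 3.25 or later):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, ts TEXT, total_kwh REAL)")
con.executemany("INSERT INTO readings VALUES (?, ?, ?)", [
    (6653, '2019-08-06 22:23:26', 1494.00),
    (6655, '2019-07-30 22:32:26', 150.00),    # suspect: total drops here
    (6656, '2019-08-07 18:58:17', 1673.90),
])
# Flag every row whose accumulated total is lower than the previous row's.
bad = con.execute("""
    SELECT id, ts, total_kwh FROM (
        SELECT id, ts, total_kwh,
               lag(total_kwh) OVER (ORDER BY id) AS prev
        FROM readings)
    WHERE prev IS NOT NULL AND total_kwh < prev
""").fetchall()
print(bad)  # [(6655, '2019-07-30 22:32:26', 150.0)]
```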


Re: [sqlite] Explicit "read transaction" with journal_mode=WAL.

2019-07-31 Thread Jean-Christophe Deschamps


Consider a bank which takes an audit every day at 
close-of-business.  This might be declared to be 5pm.  However, 
accounts continue to be debited and credited all night, due to 
interest being added, ATM transactions, etc.  Nevertheless, the audit 
needs to see a snapshot as of 5pm.

Of course, no bank would be using SQLite for this purpose, because a 
bank would be using a server/client DBMS.  But you get the idea.

I don't believe this can be anywhere close to a real-world scenario, 
the client/server architecture question set aside.

Being able to issue and process a BEGIN SHARED IMMEDIATE so that the read 
lock is in place at exactly 5pm, without ever missing a transaction 
performed elsewhere at 04:59:59.999 nor including a transaction 
committed at 05:00:00.001, seems an impossible task in practice.

One may find it uncomfortable to ignore the delay between when BEGIN is 
issued and when the next SELECT gets the lock set, but in fact you 
would never know the delay between your program issuing BEGIN 
SHARED IMMEDIATE and the precise moment the lock is actually set up 
either, unless running under a low-load real-time OS providing explicit 
guarantees on various execution times.  And even there, I'm not that sure.

From my remote/naive viewpoint, this is a misuse of an RDBMS relying on 
DIY.  When you want/need to be sure that what you are going to read 
predates a given instant, the only serious way is to include a precise 
enough timestamp in data rows and limit the select using it.

Indeed, if you wanted to do that and needed to be just on time, 
you would rather use the rock-solid:

select ... from ... where timestamp between ...

I still fail to imagine a useful use case for such feature.
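The timestamp-bounded read described above can be sketched as follows (hypothetical table and column names, for illustration only):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tx (id INTEGER PRIMARY KEY, ts TEXT, amount REAL)")
con.executemany("INSERT INTO tx VALUES (?, ?, ?)", [
    (1, '2019-07-31 16:59:59.900', 10.0),   # just before the 5pm cutoff
    (2, '2019-07-31 17:00:00.100', 20.0),   # just after the cutoff
])
# It doesn't matter when the SELECT actually runs: the row timestamps
# alone decide what belongs to the audit.
rows = con.execute(
    "SELECT id, amount FROM tx WHERE ts <= '2019-07-31 17:00:00.000'"
).fetchall()
print(rows)  # [(1, 10.0)]
```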

As Igor showed, A==B and A!=B are indistinguishable.

Further in the thread, the argument of "symmetry" between BEGIN SHARED 
IMMEDIATE and BEGIN IMMEDIATE is only a surface view, because the arrow 
of time is one-way.  A real symmetry would be a BEGIN IMMEDIATE TO BE 
COMMITTED BEFORE some given instant, but that clearly doesn't make any sense.



Re: [sqlite] misuse of aggregate function max()

2019-06-21 Thread Jean-Christophe Deschamps

That is what I use now, together with the select as Hick mentioned.

An other version I played with is two indexes, one unique on (id,
ts_from) to find the last version if eol is not null and one unique on
(id, ts_eol) where eol = null to find the current active version of id.

Beware that in almost all contexts, null != null.  Hence your condition 
eol = null will never be satisfied.

Use eol IS NULL instead.
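A quick demonstration (with a hypothetical table mirroring the versioning scheme described):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE versions (id INTEGER, eol TEXT)")
con.executemany("INSERT INTO versions VALUES (?, ?)",
                [(1, '2019-01-01'), (1, None)])

# '= NULL' compares against NULL, which yields NULL (never true):
eq_null = con.execute("SELECT count(*) FROM versions WHERE eol = NULL").fetchone()[0]
# 'IS NULL' is the correct test:
is_null = con.execute("SELECT count(*) FROM versions WHERE eol IS NULL").fetchone()[0]
print(eq_null, is_null)  # 0 1
```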



Re: [sqlite] Row locking sqlite3

2019-03-28 Thread Jean-Christophe Deschamps

You can use a simple 'L' flag on the rows you want locked and add a 
WHERE clause so you don't touch them.

I'm afraid things are more complicated in many real-world cases. 
Locking a single row isn't enough.

What if the UPDATE or DELETE forces deep changes in one or more indices?
What if the UPDATE or DELETE cascades to 1 to N levels?
What if some change launches a trigger that itself changes things 
elsewhere, possibly a LOT of things, that may themselves change several 
other parts of the DB?

In all cases above, if a read operation occurs in the middle of the 
write process, DB integrity is jeopardized.
I haven't looked at the proposed patch but I seriously doubt all of 
this is taken care of in all situations.



Re: [sqlite] sqlite trig/geometry error

2019-01-02 Thread Jean-Christophe Deschamps

sqlite> select degrees( radians(175) + atan2( sin(radians(90)) *
sin(0.2/6378.14)*cos(radians(-42)), cos(0.2/6378.14) -

postgres=# select degrees( radians(175) + atan2( sin(radians(90)) *
sin(0.2/6378.14)*cos(radians(-42)), cos(0.2/6378.14) -

Are you sure the implementations of the degrees() and radians() 
functions are OK?

When I try this I get the correct result:

select 57.295779513082320876798154814105 * (175 * 
0.01745329251994329576923690768489 +

 atan2(sin(90 * 0.01745329251994329576923690768489) *
 sin(0.2 / 6378.14) * cos(-42 * 0.01745329251994329576923690768489), 
cos(0.2 / 6378.14) -
 sin(-42*0.01745329251994329576923690768489) * sin(-42 * 



That means that the trig functions in SQLite are working fine, as expected.

Excuse the large number of pointless decimals, I just copy-pasted the 
values from a calculator.
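The same computation can be reproduced without the long decimal literals by registering the trig helpers as user-defined functions (a sketch; older SQLite builds lack the optional built-in math functions, which may well be why degrees()/radians() misbehave in some environments, and application-defined functions shadow any built-ins of the same name):

```python
import math
import sqlite3

con = sqlite3.connect(":memory:")
# Register our own versions of the needed math functions.
con.create_function("radians", 1, math.radians)
con.create_function("degrees", 1, math.degrees)
con.create_function("sin", 1, math.sin)
con.create_function("cos", 1, math.cos)
con.create_function("atan2", 2, math.atan2)

bearing = con.execute("""
    SELECT degrees(radians(175) + atan2(
        sin(radians(90)) * sin(0.2 / 6378.14) * cos(radians(-42)),
        cos(0.2 / 6378.14) - sin(radians(-42)) * sin(radians(-42))))
""").fetchone()[0]
print(round(bearing, 5))  # very close to 175, as the tiny angular distance implies
```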


Re: [sqlite] Subject: Re: SQL Date Import

2018-06-03 Thread Jean-Christophe Deschamps

The problem of not having a DATETIME type is, however, very simple: when 
reading a foreign database which stores date values as a number, I 
have to guess how to get back the correct date.

The datatype used is irrelevant w.r.t. this issue. Unless fully 
qualified with the convention used and possibly a location on Earth, you 
have no way to say whether it's local time (requires location), Zulu time, 
the Hebrew calendar (requires location), an Islamic calendar (which 
version?), fiscal time (which version/country?), TAI, whatever. There 
exists a large number of conventions and variants for denoting a datetime.


Re: [sqlite] possible bug: select clause column alias shadowing

2018-03-19 Thread Jean-Christophe Deschamps

At 23:36 18/03/2018, you wrote:

In other words, aliases in the SELECT clause are evaluated _after_ the
FROM and WHERE clauses are done.

The order of the SELECT/WHERE clauses in the SQL syntax is misleading;
the actual behaviour would be better represented by something like this:

( FROM test
  WHERE name = 'foo!' )
SELECT || '!' AS name;

I must be misinterpreting:

create temp table t (a int);
insert into t values (1), (2), (6);
select a int, printf('<%5i>', a) fmt from t where fmt like '%>';

int  fmt
1    <    1>
2    <    2>
6    <    6>

Here WHERE understands what fmt refers to.
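The same experiment through Python's sqlite3 module confirms that SQLite resolves a result-column alias inside WHERE (an SQLite extension; standard SQL would reject it):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TEMP TABLE t (a INT)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (6,)])
# 'fmt' is a SELECT-clause alias, yet the WHERE clause can use it.
rows = con.execute(
    "SELECT a, printf('<%5i>', a) AS fmt FROM t WHERE fmt LIKE '%>'"
).fetchall()
print(rows)  # [(1, '<    1>'), (2, '<    2>'), (6, '<    6>')]
```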


Re: [sqlite] How many AUTOINCREMENT tables are in your schema?

2018-03-17 Thread Jean-Christophe Deschamps

How many tables in your schema(s) use AUTOINCREMENT?

Maybe a dozen uses in DB designs I made for my own use. Zero or very 
few in designs made for others.

My use of AUTOINCREMENT is certainly somewhat peculiar.

For some of the DBs I use daily for managing my own business (I'm 
self-employed) I didn't feel the need to write ad-hoc applications and 
I only use a third-party SQLite manager (SQLite Expert is open 24/7). 
Of course such use demands real care and a lot of fancy constraints, 
triggers, add-on functions, external procedures, etc. to be workable. 
This isn't scalable nor usable by anyone else.

Several of my tables are best viewed/edited as LIFOs: the most recent 
entries on top. For instance a table of orders, where you prefer recent 
entries to be on top of the table when viewed in "natural" (ID) order. 
To achieve that effect I use AUTOINCREMENT and triggers which negate 
the rowid alias at insert. The sqlite_sequence entry is handy to make 
each new row get an unused ID which, once negated, will show first when 
viewing the table, even though there is no longer any non-negative ID 
in the actual table.

I wouldn't have the use of autoincrement if my DB manager had a 
settable ORDER BY clause for basic table viewing/editing.
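A minimal sketch of the negate-at-insert trick (the table and trigger names here are hypothetical). Because sqlite_sequence keeps the highest id ever handed out, AUTOINCREMENT keeps issuing fresh positive ids even though the table only ever contains negative ones:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY AUTOINCREMENT, label TEXT)""")
# After each insert, flip the auto-assigned id to its negative so the
# newest row sorts first under the natural ORDER BY id.
con.execute("""
    CREATE TRIGGER orders_lifo AFTER INSERT ON orders
    BEGIN
        UPDATE orders SET id = -NEW.id WHERE id = NEW.id;
    END""")
for label in ("first", "second", "third"):
    con.execute("INSERT INTO orders(label) VALUES (?)", (label,))
rows = con.execute("SELECT id, label FROM orders ORDER BY id").fetchall()
print(rows)  # newest first: [(-3, 'third'), (-2, 'second'), (-1, 'first')]
```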


Re: [sqlite] Clarification on Storage

2018-02-20 Thread Jean-Christophe Deschamps

At 17:35 20/02/2018, you wrote:

SQLite Expert, if I am not mistaken, does
try to do some magic under the hood to make SQLite function more like
how the SQLite-Expert authors think it should function, rather than
how it actually functions.  So your theory of the difference in output
being due to shenanigans perpetrated by SQLite Expert might be
correct, for all I know.

There is maybe a much simpler reason: SQLite Expert uses a grid 
component from some Delphi library (AFAICT) which refers to the column 
definition rather than to individual values' datatypes. For instance if 
a column is declared with an IMAGE datatype, then the data grid won't 
display text or numeric values, only thumbnails if you've allowed them 
in the options.

The definitive answer will come from Expert support.

Jean-Christophe Deschamps
2891 route de Pouillon
40180 Heugas
06 15 10 19 29  


Re: [sqlite] Missing data table

2017-12-16 Thread Jean-Christophe Deschamps

At 14:51 16/12/2017, you wrote:
Just joined the list.  Am a fairly long term user of SQLite Expert 
Personal (my version is - I upgraded to try version 4 a 
while back but the columns in the Data tab were fixed which is no good 
for me, I couldn't drag them wider so I went back to v3), and while I 
built my PC from scratch and am good with a lot of software, I'm by no 
means an expert when it comes to SQL or databases.

The reason for my post is to help my dad out!  We both maintain our 
own databases (Windows 7) - in the left hand pane of SQLite, you have 
the database name (basestation) and the table name (Aircraft) - I 
don't think this list allows snips or I'd send a screenshot but you 
know what it looks like.

Twice now in the last two months, the "Aircraft" table has vanished 
from my dad's database effectively showing no data in the data tab on 
the main window, but the file size has remained the same, suggesting 
the data is in there somewhere but without it the database is 
useless.  Now I know he's obviously unwittingly done something, but 
I'd like to know what he did, how he did it, and if there's any way to 
undo it, should it happen again.  After the first loss, I closed the 
left hand pane so he couldn't accidentally delete the table, but it's 
gone again so something else is amiss.  He uses a laptop with a 
touchpad and I know how temperamental they can be regarding a mouse 
swipe vs a mouse click, especially if you accidentally brush it.

I too am a (very) long-term user of Expert (a completely distinct 
product from what the SQLite dev team maintains and offers for free).

When you open Expert and [re]open your database, you should see a + 
sign in the DB tree (left pane) showing you that the DB contains 
elements (tables and views). When you hit this + sign, it changes to a - 
sign and the tree expands to list tables and views; then you can select 
one of them to see its contents in the Data tab.

Since you say this has occurred twice, I believe the problem is just 
misuse of the tree panel.

If you have (or your dad has) explicitly selected a table and hit the 
Del key, a message box will have popped up prompting for confirmation 
of the removal of the table. If a table has been deleted, it's no longer 
available from Expert nor from any SQLite DB manager. I believe it can 
still be recovered by technical means, unless the file has been changed 
later, e.g. by attempting to recreate a table.

As Simon advised you can ask Expert's author for support, even if you 
use the free version.



Re: [sqlite] "BIGINT" doesn't act like INTEGER on table creation [Bug]

2017-11-24 Thread Jean-Christophe Deschamps

At 00:13 25/11/2017, you wrote:

Looks like I do need to use AUTOINCREMENT after all, otherwise the
framework I'm using doesn't appear to recognize that INTEGER PRIMARY KEY
columns are allowed a NULL value on insert.

I can't answer about plain INTEGER PRIMARY KEY columns, but any table 
with an AUTOINCREMENT column has an entry in the sqlite_sequence table, 
something much easier to deal with than digging into sqlite_master.
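For illustration, sqlite_sequence can be queried like any other table (a sketch with a throwaway table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (id INTEGER PRIMARY KEY AUTOINCREMENT, x TEXT)")
con.execute("INSERT INTO t1(x) VALUES ('a')")
# One row per AUTOINCREMENT table: (table name, highest id handed out so far).
seq_rows = con.execute("SELECT name, seq FROM sqlite_sequence").fetchall()
print(seq_rows)  # [('t1', 1)]
```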



Re: [sqlite] "BIGINT" doesn't act like INTEGER on table creation [Bug]

2017-11-24 Thread Jean-Christophe Deschamps

At 23:49 24/11/2017, you wrote:

On 11/24/17 5:26 PM, Jean-Christophe Deschamps wrote:

At 22:38 24/11/2017, you wrote:
One proof of the falsehood of your assertion is that we CAN fill a 
database with some data using UIDs, and we will almost certainly not 
get a collision, while you assert we will.

This is an attempt at "proof by example". Keith is perfectly right 
--mathematically speaking-- and your "proof" doesn't hold water, I 
mean as a formal proof.  The best proof that your "proof" isn't a 
proof is that you feel obliged  to add "almost certainly".

DISproof by example is a perfectly valid method. If someone claims 
that something is ALWAYS true, ONE counterexample IS a 
disproof. I said 'almost certainly' because the chance of a collision 
isn't 0 (so one cannot speak with certainty) but it is most definitely 
less than the 100% claimed.

You're confusing a mathematical theorem with a practical statement. 
The first is the _mathematical_ fact that any PRNG (using any fixed 
number of random bits, which is what xUIDs are) will provide an 
infinite number of collisions with probability 1. This is definitely 
true. Of course here, the number of samples is implicitly infinite.

Your practical statement is that you can "most certainly" ignore the 
possibility of collision when feeding 2^N xUIDs into a unique column 
without losing sleep. That's good enough in practice. The issue with 
your "demonstration" is that 2^N is bounded, whatever finite N you 
choose. Hence you don't contradict what Keith said, you just say 
something different that applies to restricted cases. You're speaking 
about practice, while Keith was talking about math. You're both right, 
each from his own point of view. But you can't claim to disprove a 
trivially true theorem this way, by changing its premises.

An event with probability 10^-10...000 (any finite number of 
zeroes) will occur at least once, provided you run enough tries. It'll 
occur an infinite number of times if you run an infinite number of 
tries. Otherwise its probability would be zero.

Your "disproof" amounts to saying that 10^-10...000 = 0.

And neither Keith nor I ever said that an xUID collision will occur 
with probability 1 after 2^64 samples. That would be false and that's 
why people feel free to use xUIDs _AND_ sleep quietly.



Re: [sqlite] "BIGINT" doesn't act like INTEGER on table creation [Bug]

2017-11-24 Thread Jean-Christophe Deschamps

At 22:38 24/11/2017, you wrote:
One proof of the falsehood of your assertion is that we CAN fill a 
database with some data using UIDs, and we will almost certainly not 
get a collision, while you assert we will.

This is an attempt at "proof by example". Keith is perfectly right 
--mathematically speaking-- and your "proof" doesn't hold water, I mean 
as a formal proof.  The best proof that your "proof" isn't a proof is 
that you feel obliged  to add "almost certainly".

If SQLite were coded so that "SELECT 3.1415926" would "almost certainly" 
return the expected value, you just wouldn't use it. The same applies 
to, say, the classical "Hello world!" program and a C compiler, but in 
this case experience teaches us to be even much more careful!

I agree that you or anyone else _may_ consider the odds of UUID 
collision(s) rare enough to ignore the issue, and this is exemplified in 
practice by the huge number of systems using UUIDs or similar things. But 
this doesn't prove anything. That's the difference between theory 
and practice. Search this list for a good quote about theory and 
practice ;-)



[sqlite] Feature request

2017-01-15 Thread Jean-Christophe Deschamps

Dear list,

I often have to use SQLite's strftime() to compute a week number but the 
only format proposed, '%W', causes problems.
Not only does the North-American week number have a varying range 
[00..52] or [01..53], but some years (e.g. 2012, 2040) yield a result in 
[00..53], making those years 54 weeks long.

This is a major inconvenience for a number of applications, since 
deriving the ISO week number from an ISO date as part of a [sub]query 
or condition is a real pain and slows down things significantly.

Would it be possible to add the '%V' format (ISO week number in 
[01..53]) in some future release?
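Until such a format exists, the ISO week number can be derived in pure SQL by shifting the date to the Thursday of its ISO week and counting whole weeks from January 1st of that Thursday's year (a known workaround, sketched here via Python's sqlite3 module):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Thursday of a date's ISO week always falls inside the ISO year,
# so its day-of-year divided by 7 gives the ISO week number.
ISO_WEEK = "((strftime('%j', date(?, '-3 days', 'weekday 4')) - 1) / 7 + 1)"
results = {d: con.execute("SELECT " + ISO_WEEK, (d,)).fetchone()[0]
           for d in ("2012-01-01", "2016-01-04", "2015-12-31")}
print(results)  # {'2012-01-01': 52, '2016-01-04': 1, '2015-12-31': 53}
```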



Re: [sqlite] Why this query plan?

2017-01-12 Thread Jean-Christophe Deschamps

At 15:13 12/01/2017, you wrote:

Re: "I read this as a provocative joke."

I didn't read it as just a joke.

The analogy with random fonts, etc. breaks down, I think, because
randomizing the ordering would be an attempt to *improve* sqlite's
usability -- not some pedantic punishment.

I read this, as well as Hick's previous reply. I'm well aware of the 
issue, which is in no way specific to SQLite.

Yet, delivering some new SQLite build (source, amalgamation, binaries) 
someday where the result order would be willingly random or different 
from the current behavior (call it natural, naively expectable, 
intuitive, or whatever) would break uncountable uses where the app isn't 
open to change. Remember that in many situations SQLite is used 
as a loadable component, either because the original code was designed 
so or because the language used can't statically link.

So it could be an improvement for *-future-* SQLite apps, or rather a 
good reminder aimed at developers, but it would potentially 
break gazillions of legacy uses or at the very least cause a huge lot of 
unnecessary inconvenience. Expect a tsunami of disapproval.

If a user has problems with her sqlite output early in the process, 
leading to the discovery of a missing "ORDER BY" clause, the argument is 
that she has been dealt a favor.  It's vastly worse for her to encounter 
a mysterious bug when the sqlite version is updated years from now to one
which (perfectly correctly) returns a different ordering for that same
query.
Further, Dr. Hipp and his team won't have to deal with howls of "it's
broken" when such a version is released.

In the community where I offer support, I also have to repeatedly point 
out that SQL deals with unordered sets and that, in consequence, 
issuing the very same SELECT twice in a row could rightfully return 
results in different orders when no ORDER BY clause is specified.  But 
I bet such an uncalled-for change (as salutary as it may be from a 
rational point of view) would result in a long-term, continuous, higher 
saturation of this list and other support channels with posts from 
questioning/angry/disappointed users.


Re: [sqlite] Why this query plan?

2017-01-12 Thread Jean-Christophe Deschamps


At 02:00 12/01/2017, you wrote:

The "PRAGMA reverse_unordered_selects=ON" statement has long been
available to do this.  But it is an optional feature that has to be
turned on.  And I don't think anybody ever turns it on.  My proposal
is to make it random.

Maybe it would be sufficient to initialize the
reverse_unordered_selects setting to a random value (on or off) inside
of sqlite3_open().

I read this as a provocative joke.

While I agree with you that way too many users and applications blindly 
(naively?) rely on the current behavior, willfully making the order 
more or less random by default would be similar, say for a spreadsheet 
app, to choosing a random font, size, centering, coloring and formatting 
for any cell where those attributes have not been explicitly set.

Ask yourself: I for one wouldn't make much use of such a 
spreadsheet app, even if some standard says it's legitimate behavior.

If you ask somebody to enumerate the strictly positive integers less than 
6, 99.999% of people expect the answer to be 1, 2, 3, 4, 5. While 2, 5, 
4, 1, 3 is a perfectly valid answer, anyone would ask "Why this funny 
order?"


Re: [sqlite] Bug: Problem with ORDER BY UPPER(...) in conjunction with UNION

2016-12-26 Thread Jean-Christophe Deschamps

At 00:45 27/12/2016, you wrote:

The work arounds is using a WITH clause or putting the upper function
expression in the output of each select.

Another way to rewrite it is to wrap the compound select inside a simple 
outer select:

select n from (
  select 'Abc' n
  union
  select 'aaa' n
)
order by upper(n);
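The rewrite can be verified quickly through Python's sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# ORDER BY on an expression works once the compound select is wrapped.
rows = con.execute("""
    SELECT n FROM (SELECT 'Abc' AS n
                   UNION
                   SELECT 'aaa' AS n)
    ORDER BY upper(n)
""").fetchall()
print(rows)  # [('aaa',), ('Abc',)] -- 'AAA' sorts before 'ABC'
```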


Re: [sqlite] Weird chars inserted

2016-12-18 Thread Jean-Christophe Deschamps


On 12/18/2016 7:17 AM, Jean-Christophe Deschamps wrote:

Since your DB is UTF-16LE encoded, you shouldn't convert your strings to

int nRet= sqlite3_exec(m_hDB, szSQL, NULL, 0, );

alone should work fine.

No it should not. The encoding of the database is irrelevant here: 
sqlite3_exec still expects a UTF-8 encoded string.

Oops, of course that's it. I'm so used to having this kind of mistake 
reported about _SQLite3_Exec() in the wrapper that I use and support 
that I didn't realize the OP was using the bare SQLite API.


Re: [sqlite] Weird chars inserted

2016-12-18 Thread Jean-Christophe Deschamps


At 02:38 18/12/2016, you wrote:

Hi people. I need some help I’m lost here.
I’m writing an application using SQLite and whenever I insert 
Spanish characters

I get this kind of strings:
Mart�n (where it should read ‘Martín’)

Ok, so if I open my database with SQLiteManager (the Firefox addin) my 
strings look
weird like that. But SQLiteManager is able to insert and display any 
string correctly.
And any string inserted by SQLiteManager displays all right in my app 
as well.

So I must be doing something wrong when I insert the strings. This is 
my code:

Note: szSQL is the wchar-string my app uses

char szAux[2048];
ZeroMemory(szAux, 2048);
wcslen(szSQL), szAux, 2048, NULL, 0);

int nRet= sqlite3_exec(m_hDB, szAux, NULL, 0, );

SQLiteManager tells me that my db is utf-16le. I'm using vs2015 on a 
Windows 10 64-bit machine.

Since your DB is UTF-16LE encoded, you shouldn't convert your strings 
to UTF8.

int nRet= sqlite3_exec(m_hDB, szSQL, NULL, 0, );

alone should work fine.


Re: [sqlite] Problem with CASE in WHERE clause

2016-12-05 Thread Jean-Christophe Deschamps

Hi Simon,

At 11:41 05/12/2016, you wrote:

On 5 Dec 2016, at 7:48am, Jean-Christophe Deschamps <> 

> The choice of literals representing true and false is merely cosmetic.

You got me interested in that.  I had thought that "TRUE" and "FALSE" 
were reserved words in SQLite.  But I can find nothing to back that up, and


returns an error.  It’s too late to add them now, of course, for 
backward compatibility reasons.  Someone may have a table column 
called "false".


I'm as surprised as you about this, but it isn't the point I wanted to 
make. BTW, SQLite generally does a pretty good job of sorting out reserved 
words used as keywords vs. keywords used as schema names, but I always 
recommend that double quotes surround reserved names used as schema names.

I meant that we could call the truth values of a boolean expression 
'STAINLESS' or 'RASPBERRY' instead of True and False, or 1 and 0. The 
symbols or literals we use for expressing a boolean value are just a 
convention. I wasn't talking especially about SQLite nor SQL (nor any 
other engine).
Look at the various incompatible conventions for expressing boolean 
values as a "boolean-codepage nightmare", in that it reproduces, in the 
{false, true} domain, exactly the same issues codepages have created in 
character sets.



Re: [sqlite] Problem with CASE in WHERE clause

2016-12-05 Thread Jean-Christophe Deschamps

At 12:18 05/12/2016, you wrote:

From: Jean-Christophe Deschamps
Sent: Monday, December 05, 2016 9:48 AM
To: SQLite mailing list
Subject: Re: [sqlite] Problem with CASE in WHERE clause

> At 06:29 05/12/2016, you wrote:
> >My app supports sqlite3, SQL Server, and PostgreSQL.
> >
> >SQL Server has a 'bit' data type, which accepts 1/0 and '1'/'0' as 
> >valid values.
> >
> >PostgreSQL has a 'bool' data type, which supports a variety of values 
> >- TRUE, 't', 'true', 'y', 'yes', 'on', '1' for true, and the 
> >opposites for false, but does not allow 1/0.

> All [three] engines should support (1=1) and (1=0) for true and false,
> respectively, as well as bare columnname as a boolean assertion, like
> Simon said: select ... where columnC and not columnF ...
> The choice of literals representing true and false is merely cosmetic.

So if I understand correctly, it makes sense to use '1'/'0' to 
*set* the boolean value in a cross-database manner, but there are a 
variety of ways to test for it.


That's not how I see it. I'm no expert in SQL standards (note the 
plural!) but AFAICT the only sure and guaranteed-portable way to SET a 
boolean value in SQL --regardless of whether a particular engine offers 
a BOOLEAN datatype and whatever values it considers valid representations 
of what we call True and False-- is the result of a known-true or 
known-false expression, like (3=3) and (2=5).

As you've found, some engines accept some of:
'TRUE', 'True', 'true', 't', 'T', 'Y', 'y', '1', 1, TRUE, True, true, ...
'FALSE', 'False', 'false', 'f', 'F', 'N', 'n', '0', 0, FALSE, False, 
false, ...

It may make sense to use '1' and '0' in your precise use case but I'm 
unsure of the portability. An expression yielding a known boolean 
result is necessarily valid and correctly interpreted.

And yes,
select ... where columnname = (1=1)
is a mouthful compared to
select ... where columnname

I don't believe any engine would interpret the last statement as
select ... where columnname is not null
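The (1=1)/(1=0) idiom can be checked directly in SQLite, where comparison results are plain integers (column names below are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Comparison results are integers: 1 for true, 0 for false.
truths = con.execute("SELECT (1=1), (1=0)").fetchone()
print(truths)  # (1, 0)

con.execute("CREATE TABLE flags (name TEXT, columnC, columnF)")
con.execute("INSERT INTO flags VALUES ('a', (1=1), (1=0))")
# Bare column names act as boolean assertions in WHERE.
rows = con.execute(
    "SELECT name FROM flags WHERE columnC AND NOT columnF"
).fetchall()
print(rows)  # [('a',)]
```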


Re: [sqlite] Problem with CASE in WHERE clause

2016-12-04 Thread Jean-Christophe Deschamps

At 06:29 05/12/2016, you wrote:

My app supports sqlite3, SQL Server, and PostgreSQL.

SQL Server has a 'bit' data type, which accepts 1/0 and '1'/'0' as 
valid values.

PostgreSQL has a 'bool' data type, which supports a variety of values 
- TRUE, 't', 'true', 'y', 'yes', 'on', '1' for true, and the opposites 
for false, but does not allow 1/0.

All [three] engines should support (1=1) and (1=0) for true and false, 
respectively, as well as bare columnname as a boolean assertion, like 
Simon said: select ... where columnC and not columnF ...

The choice of literals representing true and false is merely cosmetic.



Re: [sqlite] Database corruption, and PRAGMA fullfsync on macOS

2016-11-15 Thread Jean-Christophe Deschamps

At 22:41 15/11/2016, you wrote:

So if you're truly worried about flush-to-disk what do you do 
?  Solution 1 is to buy hard disks rated for servers -- sometimes 
called "enterprise-class hard drives" -- and to set the DIP switches 
to tell them they're being used on a server.  Those things are 
intended for use in servers, and ACID can be important to a server, so 
they support it properly and do not lie.

It's both better and worse than that.

I enjoy running an old and ugly DIY machine with 8 SAS HP disks (15k 
rpm) arranged in RAID6 behind a serious hardware controller with a 
good share of battery-backed RAM. Those enterprise-class disks don't 
have any switch and will lie to you and the OS as much as they can if 
you let them and don't ask for acknowledgement that each write is 
final. But the key piece is the Areca controller, which actually manages 
the RAID array and knows when/if some write has actually landed on the 
surface. Should the power supply vanish, it keeps track of what still 
needs to be written and will silently update the disks when power comes 
back, even before the OS is loaded.

So no, even very expensive hard disks themselves don't take any step to 
avoid lying to you/the OS if you/the OS don't care about such "details", 
but the controller (if a serious one) will fill the gap and ensure that 
everything is fine.

To be honest, I'm almost certain there can exist extreme situations 
where, for instance, the RAID array is in tatters (e.g. more than 2 of 
6 disks simultaneously failing) and won't accept writes while the 
controller battery is going dangerously low.
But if your needs are that mission-critical, you can probably afford 
enough redundant, sophisticated hardware to make the probability of a 
disaster (e.g. a one-bit flip) as low as required.


Re: [sqlite] Typical suffixes of sqlite database files

2016-10-20 Thread Jean-Christophe Deschamps

At 01:17 21/10/2016, you wrote:

What suffixes to sqlite database files do you use or see in the wild?

I routinely/always use .sq3 but that's only a personal convention.


Re: [sqlite] Protecting databases

2016-10-08 Thread Jean-Christophe Deschamps


At 09:18 08/10/2016, you wrote:
Password protecting it is also good on many levels - if the database 
is to be used online then it is needless to say that authentication 
would be required for various people to view it.

SQLite can't be put "online" per se. It will then be the duty of the 
host software layer (website software or equivalent using some other 
protocol) to accept only users having been granted access.

Even if I decide to make it local only, there is the possibility that 
anyone sharing the computer or network may peruse the database when 
you don't want them to.

That's the reason why OSes provide user logins and file access rights. 
Again this is external to SQLite and its DB file(s). Also please note 
that SQLite is not a client-server engine, so access through a LAN is 
prone to fail due to network file locking being poorly implemented.

Alternatively you can also use strong DB encryption with or without 
adding the authentication function. Google SQLite encryption and you'll 
find a number of answers, some of them correct.



Re: [sqlite] Locking databases - Possibly (probably?) a dumb question

2016-08-07 Thread Jean-Christophe Deschamps


At 18:27 07/08/2016, you wrote:

Too little sleep and far too much coffee.

I was in the same situation, multiplying by 2 instead of dividing, as 
Ryan pointed out.

Nice to see that WAL fits your use case. I for one found it rock solid 
and very useful.



Re: [sqlite] Locking databases - Possibly (probably?) a dumb question

2016-08-07 Thread Jean-Christophe Deschamps


At 08:55 07/08/2016, you wrote:
We think that using WAL mode works for us, indeed inspection seems to 
indicate it does, but the size of the -wal file appears to be far 
larger than would be expected. Is there a problem here? It doesn't 
appear to be a problem but would welcome any comments.

After reading your post I'd like to clear up a few points about WAL mode.

We can also see that the main sqlite database is NOT updated (or at 
least the timestamp isn't) whilst we are running the updates in WAL 
mode. This appears to be correct as the updates would be in the -wal file.

WAL mode is persistent and consistent. That means that once the DB has 
been successfully put in this mode, it will remain in WAL mode for 
every (new) connection. Thus your updates and the long-running query 
are both running under WAL mode. That is, provided WAL mode was set 
prior to the start of the long-running query, but that detail doesn't 
matter for reads in this case.

It doesn't matter whether your query is a single query statement (hence 
in auto-commit mode) or a huge transaction extracting and massaging 
data in multiple temp tables with myriads of read/write statements, all 
inside an explicit transaction: ACID properties guarantee that once 
your query has started, it sees the DB in the state prior to any 
updates that occur during its run. Otherwise you could obtain 
dangerously inconsistent data, of course.

We have not set the journal_size_limit and we have a -wal file which 
is 224MB in size, somewhat larger than 4MB. We are running

3.8.2 2013-12-06 14:53:30 27392118af4c38c5203a04b8013e1afdb1cebd0d

which does not appear to have the code in 3.11.0 so that the WAL file 
is proportional to the size of the transaction. From the same page of 
the manual:

Very large write transactions.

A checkpoint can only complete when no other transactions are running, 
which means the WAL file cannot be reset in the middle of a write 
transaction. So a large change to a large database might result in a 
large WAL file. The WAL file will be checkpointed once the write 
transaction completes (assuming there are no other readers blocking it) 
but in the meantime, the file can grow very big.

As of SQLite version 3.11.0, the WAL file for a single transaction 
should be proportional in size to
the transaction itself. Pages that are changed by the transaction 
should only be written into the WAL
file once. However, with older versions of SQLite, the same page might 
be written into the WAL file multiple times if the transaction grows 
larger than the page cache.

Reread the quoted part again: only with SQLite versions 3.11.0 and 
above will a given page be written only once into the WAL file within a 
given transaction.

Since you're running a prior version, it's pretty logical to expect 
that your updates will cause writes of multiple distinct copies of the 
same pages in the WAL file. You should update your SQLite version to 
see a possible improvement there. That, or refer only to the old 3.8.2 
documentation, but this is an inferior option because there have been 
significant improvements meanwhile.

Also, the ratio in the help file (1000 pages, or about 4 MB) applies to 
the default page size (4 KB).

Finally, while the long-running query is running, no checkpoint can run 
to completion. Doc states under "Checkpoint starvation.":

However, if a database has many concurrent overlapping readers and 
there is always at least one active reader, then no checkpoints will 
be able to complete and hence the WAL file will grow without bound.

Since you clearly can't introduce a read gap inside your read query, 
the -wal file will grow as large as it needs to until completion of the 
query. You mentioned that you tested with much more frequent updates 
than the real-world case (120x, actually), so -wal file size shouldn't 
be an issue in your actual use case.
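The persistence of WAL mode across connections, and the presence of the 
-wal file alongside the database while writers are active, can be 
checked with a short sketch using Python's standard sqlite3 module (the 
file name here is made up):

```python
import os
import sqlite3
import tempfile

# Hypothetical DB file; any writable path works.
path = os.path.join(tempfile.mkdtemp(), "demo.sq3")

con1 = sqlite3.connect(path)
con1.execute("PRAGMA journal_mode=WAL")    # persistent: recorded in the DB header
con1.execute("CREATE TABLE t (x)")
con1.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
con1.commit()                              # the pages land in demo.sq3-wal first

mode = con1.execute("PRAGMA journal_mode").fetchone()[0]

# A brand-new connection sees WAL without asking for it.
con2 = sqlite3.connect(path)
mode2 = con2.execute("PRAGMA journal_mode").fetchone()[0]

wal_exists = os.path.exists(path + "-wal") # write-ahead log sits next to the DB
con1.close()
con2.close()
```

When the last connection closes, SQLite normally checkpoints the log and 
removes the -wal file.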




Re: [sqlite] Trigger slowness even when it's not fired

2016-07-04 Thread Jean-Christophe Deschamps

At 10:26 04/07/2016, you wrote:


I have a curious situation involving a trigger, which I’m at a loss 
to explain. I’m wondering if someone who knows more about the 
insides of SQLite can tell me more about why it’s happening. I’m 
running SQLite 3.8.7.

The trigger code is at the bottom of this email. It’s a 
straightforward AFTER UPDATE trigger, firing when any of three fields 
is updated and when the old value is not the same as the new value for 
any of those fields. There’s also a test I’ve put in there to make 
sure that some tables which I’m looking to update are not empty.

The table TriggerLog is a table I’ve added for testing. If the 
trigger fires, a row is inserted into the log. I have tested that this 
works when the trigger fires.

There are then a number of UPDATE statements to set flags on other 
tables. In the circumstances I’m running this in at the moment, 
there are no rows in QuoteCalc, SaleCalc, TxnCalc etc., so the 
trigger’s code won’t actually fire. I have verified this by 
inspecting TriggerLog after the test; there are no rows reporting that 
ContactCalcUpdate8 has fired.

During the test, a row on ContactCalc has one of its 
ContactCalc_Phone, ContactCalc_Email or ContactCalc_Website fields 
updated. This occurs on various rows of ContactCalc approximately 
5,000 times. This stage of the test takes approximately 22 seconds.

If I remove all of the UPDATE rows from this trigger, and make no 
other changes, the test takes approximately 12 seconds.

I don’t understand why removing code from the body of a trigger 
which doesn’t fire makes things faster.

I have also tried replacing the WHERE clauses of all of the UPDATE 
statements with WHERE 1=0 to eliminate the possibility of the IN 
statement being the culprit; it’s still slow.

I would very much like to understand what’s going on here; perhaps 
someone can enlighten me.



// When ContactCalc's phone, email or website changes, update the
// phone, email and website fields in QuoteCalc, SaleCalc, PurchaseCalc
// and TxnCalc.
// Also update GroupHead phone, email and web fields for QuoteCalc,
// SaleCalc, PurchaseCalc, TxnCalc

CREATE TRIGGER ContactCalcUpdate8 AFTER UPDATE
OF ContactCalc_Phone, ContactCalc_Email, ContactCalc_Website
ON ContactCalc WHEN
(
    Old.ContactCalc_Phone != New.ContactCalc_Phone
    OR Old.ContactCalc_Email != New.ContactCalc_Email
    OR Old.ContactCalc_Website != New.ContactCalc_Website
)
AND (
    ( SELECT COUNT(QuoteCalc_QuoteUUID) FROM QuoteCalc ) > 0
    OR ( SELECT COUNT(...) FROM ... ) > 0
    OR ( SELECT COUNT(PurchaseCalc_PurchaseUUID) FROM PurchaseCalc ) > 0
    OR ( SELECT COUNT(...) FROM ... ) > 0
)
BEGIN
    INSERT INTO TriggerLog( TriggerLog_Name ) VALUES ( ... );

    UPDATE QuoteCalc
    SET QuoteCalc_UpdateContactFlag = 1
    WHERE QuoteCalc_ContactUUID = New.ContactCalc_ContactUUID;

    UPDATE SaleCalc
    SET SaleCalc_UpdateBillingContactFlag = 1
    WHERE Sale_BillingContactUUID = New.ContactCalc_ContactUUID;

    UPDATE SaleCalc
    SET SaleCalc_UpdateShippingContactFlag = 1
    WHERE Sale_ShippingContactUUID = New.ContactCalc_ContactUUID;

    UPDATE TxnCalc
    SET TxnCalc_UpdateContactFlag = 1
    WHERE ... IN (
        SELECT ... FROM TxnCalc
        WHERE TxnCalc_ContactUUID = New.ContactCalc_ContactUUID
    );

    UPDATE PurchaseCalc
    SET PurchaseCalc_UpdateContactFlag = 1
    WHERE PurchaseCalc_PurchaseUUID IN (
        SELECT PurchaseCalc_PurchaseUUID
        FROM PurchaseCalc
        WHERE PurchaseCalc_ContactUUID = New.ContactCalc_ContactUUID
    );

    UPDATE ContactCalc
    SET ContactCalc_UpdateGroupHeadFlag = 1
    WHERE ContactCalc_ContactUUID IN (
        SELECT ... FROM Contact
        WHERE Contact_LinkedContactUUID = New.ContactCalc_ContactUUID
    );

    UPDATE QuoteCalc
    SET QuoteCalc_UpdateGroupHeadFlag = 1
    WHERE QuoteCalc_QuoteUUID IN (
        SELECT ... FROM Quote, Contact
        WHERE Quote_ContactUUID = Contact_UUID
        AND Contact_LinkedContactUUID = New.ContactCalc_ContactUUID
    );

    UPDATE SaleCalc
    SET SaleCalc_UpdateBillingGroupHeadFlag = 1
    WHERE ... IN (
        SELECT ... FROM Sale, Contact
        WHERE Sale_BillingContactUUID = Contact_UUID
        AND Contact_LinkedContactUUID = New.ContactCalc_ContactUUID
    );

    UPDATE SaleCalc
    SET SaleCalc_UpdateShippingGroupHeadFlag = 1
    WHERE ... IN (
        SELECT ... FROM Sale, Contact
        WHERE Sale_ShippingContactUUID = 

Re: [sqlite] How to use "cursors" in c#

2016-06-28 Thread Jean-Christophe Deschamps
As a further response: if you use automatic INTEGER PRIMARY KEY IDs, 
then the sqlite3_last_insert_rowid() call will tell you unambiguously, 
provided it is made on the same database connection, before any further 
insert on it.

As a further response, regarding that Martina said "insert ... new row 
and get back the id of this last inserted row", I believe that this 
likely reflects a poor design.  Wherever possible, every field of a 
row to insert including its identifiers should be known BEFORE 
inserting the row, and that way, you don't need to ask afterwards what 
the id is, you already know because it is the value you told it to use 
when doing the insert. -- Darren Duncan
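The rowid route above can be sketched with Python's sqlite3 module, 
whose cursor.lastrowid wraps sqlite3_last_insert_rowid() (the table 
name here is hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# INTEGER PRIMARY KEY makes 'id' an alias for the rowid.
con.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")

cur = con.execute("INSERT INTO items (name) VALUES ('first')")
first_id = cur.lastrowid        # id SQLite chose for the new row

cur = con.execute("INSERT INTO items (name) VALUES ('second')")
second_id = cur.lastrowid
```

The value is tracked per connection, so read it before issuing another 
insert on the same connection.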



Re: [sqlite] sqlite3_column_origin_name for INSERT and UPDATE

2016-06-18 Thread Jean-Christophe Deschamps

My first instinct is to add a Units column to any table with a
measurement.  The value in Units would be a foreign key to a Units
table.  Another table, Conversions, could hold conversion factors.

In addition to what jkl said, I'd like to point out that if you someday 
find yourself forced to deal with many units, you'll have to make the 
unit column homogeneous. An easy example is mixing cm/s, km/h and 

In one of my SQLite DBs I have to deal with almost all SI units and a 
few more exotic ones, plus input and output values with prefixes (kilo, 
deci, micro, ...). To maintain consistency I express every value in SI 
units and store the unit in dimensional form in its own column. I had 
to add dimensionless non-units like radian and steradian, which are now 
disregarded by the SI system but are needed to maintain semantics.

So for instance I convert pressure inputs expressed in Pa, mmHg or bar 
to their equivalent SI value in kg m^-1 s^-2 and store the values along 
with the dimensional unit L^-1 M T^-2. Now I can safely proceed with any 
needed computation without worrying about mixing apples and chairs, then 
convert the result into the expected practical unit, say megapounds per barn.

Of course your use case may not require that much precaution.
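A minimal sketch of the idea (hypothetical schema and dimension 
notation; Python with the standard sqlite3 module): each row keeps its 
SI value plus a dimensional signature, and aggregation is only 
meaningful within one signature.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE measure (name TEXT, si_value REAL, dim TEXT)")

# 760 mmHg converted on input to pascals (kg m^-1 s^-2, i.e. L^-1 M T^-2).
con.execute("INSERT INTO measure VALUES ('std-atm', 760 * 133.322, 'L-1 M T-2')")
# 1 bar: same dimensional signature, directly comparable.
con.execute("INSERT INTO measure VALUES ('one-bar', 100000.0, 'L-1 M T-2')")
# A speed: different signature, must never be summed with pressures.
con.execute("INSERT INTO measure VALUES ('walk', 1.4, 'L T-1')")

# Grouping by signature keeps apples away from chairs.
rows = con.execute(
    "SELECT dim, COUNT(*) FROM measure GROUP BY dim"
).fetchall()
```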


Re: [sqlite] Latin-1 characters cannot be supported for Unicode

2016-06-08 Thread Jean-Christophe Deschamps

At 09:22 08/06/2016, you wrote:

A 3 Byte Sequence 0xFFFEC4 when converting 0xC4 to UTF-8 in UltraEdit

This 3-byte sequence is neither UTF-8 nor UTF-16, even though the BOM 
would make us believe it is UTF-16LE. UTF-16 implies 16-bit code units, 
so an odd byte length is impossible.

You probably meant FF FE C4 00 for UTF-16LE.
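The byte counts are easy to verify (Python sketch; U+00C4 is 'Ä'):

```python
# UTF-16 code units are 16 bits wide, so a valid UTF-16 stream always
# has an even byte length; U+00C4 is a single code unit.
utf16_le = "\u00c4".encode("utf-16-le")   # no BOM: just the code unit
with_bom = "\u00c4".encode("utf-16")      # BOM (2 bytes) + code unit (2 bytes)
utf8 = "\u00c4".encode("utf-8")           # 2-byte UTF-8 sequence
```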


Re: [sqlite] UPDATE statement without FROM clause

2016-06-05 Thread Jean-Christophe Deschamps

At 23:34 04/06/2016, you wrote:

On 4 Jun 2016, at 10:15pm, Jean-Christophe Deschamps 
<> wrote:

> Can't the same update be done more efficiently with a CTE?

The command inside the WITH has to be a SELECT command.

Definitely not, as Ryan pointed out, and as the help file clearly states:

"All common table expressions (ordinary and recursive) are created by 
prepending a WITH clause in front of a SELECT, INSERT, DELETE, or 
UPDATE statement."

I agree that in that simple example, using a CTE is just syntactic sugar:

CREATE TABLE A (
  id INTEGER PRIMARY KEY,
  Item1 INT,
  Item2 CHAR,
  item3 CHAR);
with ints (n) as (select -5000 union all select n+1 from ints limit 1)
INSERT INTO A (id) select n from ints;

CREATE TABLE B (
  id INTEGER PRIMARY KEY,
  Item1 INT,
  Item2 CHAR,
  item3 CHAR);

-- make table B bigger (IDs in A and B only partly overlap)
with ints (n) as (select (select count(*) from B)+1 union all select 
n+1 from ints limit 1)
INSERT INTO B (id) select n from ints;

with C as (select id, item1, item2, item3 from B where item1 not null)
update A set
   item1 = (select C.item1 from C where =,
   item2 = (select char(C.item1) from C where =,
   item3 = (select group_concat(item2, '') from A AA where ...)
where id in (select id from C);

select * from A where item2 not null;


Re: [sqlite] UPDATE statement without FROM clause

2016-06-04 Thread Jean-Christophe Deschamps

At 18:18 04/06/2016, you wrote:


why? At the moment I have to run something like:


Using a FROM clause I just need one scan through B (at least in 
principle). Now, I need N+1 scans.


Can't the same update be done more efficiently with a CTE?


[sqlite] Messages posted on Nabble not getting to list

2016-05-27 Thread Jean-Christophe Deschamps

At 03:46 27/05/2016, you wrote:
>If SQLite goes this route, I will probably (as with the others) stop 
>reading it too.


Re: [sqlite] Messages posted on Nabble not getting to list

2016-05-26 Thread Jean-Christophe Deschamps

At 03:46 27/05/2016, you wrote:
If SQLite goes this route, I will probably (as with the others) stop 
reading it too.



[sqlite] application for initial charge

2016-04-09 Thread Jean-Christophe Deschamps
At 07:14 09/04/2016, you wrote:
>I have implemented an application with Lazarus+FPCompiler, using SQLite
>tables, and the information was obtained from Excel tables previously 
>I hope it would be very simple to prepare in the former Excel tables a 
>with the information required by my application and "charge it" instead of
>the manual introduction as now.
>Please, if somebody knows something about this aspect, I will appreciate
>very much any information.

I would use a powerful scripting language for that, AutoIt, which makes 
it very easy to interface SQLite and automate Excel as well.


[sqlite] Correlated subquery refers to wrong rowid - bug?

2016-03-09 Thread Jean-Christophe Deschamps
At 16:14 09/03/2016, you wrote:
>On 3/9/2016 9:58 AM, R Smith wrote:
>>On 2016/03/09 4:35 PM, Igor Tandetnik wrote:
>>>Yes, but why is that a problem? It is perfectly legal, and often
>>>useful, for a subquery to refer to columns from enclosing query.
>>>That's what makes it a *correlated* subquery.
>>True, but the OP's result is still in error.
>Yes, I realize that. I was only arguing against the proposition that 
>the query is syntactically invalid and should have been rejected on 
>these grounds.
>Igor Tandetnik

Sorry guys, I don't know why I wrote that. In fact I know: I shouldn't 
be talking over the phone while reading the list.

Of course I use correlated subqueries a lot, but never had to re-select 
a column from the enclosing query. Indeed, I'd say that most of the 
time one uses columns from enclosing query as expressions, e.g. in 
comparison operators or functions in a where clause.

The OP query is pretty uncommon since it's guaranteed to produce an 
empty result, hence it's no surprise that the bug was so old.

[sqlite] Correlated subquery refers to wrong rowid - bug?

2016-03-09 Thread Jean-Christophe Deschamps
At 15:16 09/03/2016, you wrote:
>   select id from a where id not in (select from b);

Shouldn't the engine bark on this, like it does on the modified version:

select id from a where id not in (select from b);

"no such column"

As I understand it, there is no such column in table b. It 
looks like SQLite is trying to get clever, ignoring the "a." qualifier.

[sqlite] Bug: Successfully committed transaction rolled back after power failure

2016-02-01 Thread Jean-Christophe Deschamps
At 17:55 01/02/2016, you wrote:
>The above about implementation of RAID is good. There were battery
>backed up caching controllers 20 years ago. In the event of a power
>loss, the cached writes could be completed later.

I run one such Areca RAID controller with 8 server disks in RAID 6. 
Even after trying hard to make writes fail, I never had a single 
incident in many years. I'm sure many other users still run such setups 
today; no need to go that far back in time.

[sqlite] IS a SQLite db of small size as good as reliable cache?

2016-01-30 Thread Jean-Christophe Deschamps
At 22:57 30/01/2016, you wrote:

>2  further if such a db is part of windows service / application running
>continuously then may be pages will remain in the memory under normal
>Q1) Operating System like Windows would cache the series of pages, is this
>behavior not almost same as that of Cache?
>Q2) Does it make sense to use In-memory SQLite for small size db upto 5MB
>  average size 2MB?

Howard answered the Windows cache part pretty well: don't rely on it.

Now for Q2, I'd use the built-in SQLite backup API to load the DB from 
disk to memory at launch of the service, use it again to save the DB to 
disk at reasonable intervals (depending on how sensitive the service is 
to the D of ACID [durability]), and once more at service shutdown. 2 to 
5 MB is nothing now.

Then you don't have to set up a SQLite connection cache of the maximum 
possible size; an in-memory DB is all you need.

Obviously, if the machine may be shut down without notice (e.g. power 
going off unexpectedly) and the context is sensitive to durability of 
transactions, then all bets are off for an in-memory DB.
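The load/work/save cycle described above can be sketched with Python's 
sqlite3 module, whose Connection.backup (Python 3.7+) wraps the SQLite 
backup API; here a second :memory: connection stands in for the 
on-disk file:

```python
import sqlite3

disk = sqlite3.connect(":memory:")   # stand-in for the on-disk DB file
disk.execute("CREATE TABLE cfg (k TEXT, v TEXT)")
disk.execute("INSERT INTO cfg VALUES ('answer', '42')")
disk.commit()

mem = sqlite3.connect(":memory:")
disk.backup(mem)                     # disk -> RAM at service launch

# The service works against the fast in-memory copy...
mem.execute("UPDATE cfg SET v = '43' WHERE k = 'answer'")
mem.commit()

mem.backup(disk)                     # RAM -> disk at interval/shutdown
v = disk.execute("SELECT v FROM cfg WHERE k = 'answer'").fetchone()[0]
```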

[sqlite] Best way to store only date

2016-01-29 Thread Jean-Christophe Deschamps

At 16:23 29/01/2016, you wrote:
>Note that the fractional part in Julian day represents time-of-day 
>since noon, not since midnight.

Ah yes, I often forget about this point. Sorry for the incorrect lead.

[sqlite] Best way to store only date

2016-01-29 Thread Jean-Christophe Deschamps
At 08:28 29/01/2016, you wrote:
>For storing a date I probably could use strftime('%J'). But I would 
>like to
>store it as an INT. But I see no way to cast a float to an int. Is 
>that not

select cast(julianday('now') as int)

should do what you want.
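For instance (a Python sketch; the CAST simply truncates the REAL 
Julian Day number, dropping the time-of-day fraction):

```python
import sqlite3

con = sqlite3.connect(":memory:")
jd, jd_int = con.execute(
    "SELECT julianday('2016-01-29 18:00'),"
    "       CAST(julianday('2016-01-29 18:00') AS INT)"
).fetchone()
# 2016-01-29 18:00 UTC is Julian Day 2457417.25; the CAST keeps 2457417.
```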

[sqlite] whish list for 2016

2016-01-13 Thread Jean-Christophe Deschamps
At 08:28 13/01/2016, you wrote:
>On Wed, Jan 13, 2016 at 2:39 AM, Simon Slavin  
> > On 12 Jan 2016, at 11:56pm, Scott Hess  wrote:
> >
> > > If I am writing a client that can read SQLite databases, then I 
> probably
> > > don't want your database to be injecting a bunch of arbitrary PRAGMA
> > calls
> > > into my client.
> >
> > It is, after all, the equivalent of an autoexecute macro.  And we 
> all know
> > how good an idea that was.
> >
> > Okay, folks, I'm with Scott.  If you want SQLite to execute a bunch of
> > commands when the database is opened write your own replacement for 
> _open_
> > which opens the file then reads commands from a table and executes 
> them.
> >
>You have in my opinion taken this out of context, and are assuming the
>important part is the application, and not the data (i.e. database file).
>The original idea was simply to associate with the database some essential
>settings, which should have been such in the first place and aren't only
>because of legacy reasons, and not arbitrary macros. Keith just got 
>away :) --DD

Exactly! I see a clear difference between settings which are 
*-required-* by the schema and those which are only a convenience or 
comfort for the application(s).

The former type includes what should actually be part of the schema in 
an ideal world:
   foreign keys, recursive triggers, loading of [auto]extensions set 
ON/OFF, ...;
   function extensions and collations, ICU, ...

If some index uses a custom collation which isn't loaded, then queries 
will give erroneous results and not perform DB changes as intended.

If a schema change requires, for instance, recursive triggers ON, then 
maybe 5, 10 or more applications will need maintenance and 
redistribution. Forgetting to update a single app may silently spoil DB 
integrity.
Similarly, a single app not setting foreign keys ON when needed may 
again silently destroy DB integrity.
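The foreign-key case is easy to demonstrate (a Python sketch): the 
pragma defaults to OFF and is per-connection, so one app forgetting it 
silently bypasses the constraint.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE parent (id INTEGER PRIMARY KEY);
CREATE TABLE child  (pid INTEGER REFERENCES parent(id));
""")

# Default connection state: the dangling reference is silently accepted.
con.execute("INSERT INTO child VALUES (99)")
con.commit()                      # the pragma is a no-op inside a transaction

con.execute("PRAGMA foreign_keys = ON")
try:
    con.execute("INSERT INTO child VALUES (100)")
    fk_error = None
except sqlite3.IntegrityError as exc:
    fk_error = str(exc)

orphans = con.execute("SELECT COUNT(*) FROM child").fetchone()[0]
```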


[sqlite] Simple Math Question

2015-10-23 Thread Jean-Christophe Deschamps
AFAICT System/360 machines had BCD in microcode, just like any other 
basic datatype. Z is only the renamed survivor of the 360/xx and 370/xxx 

At 14:33 23/10/2015, you wrote:

>On Fri, Oct 23, 2015 at 3:08 AM,  wrote:
> > Computers don't store values in base-10 fractions when they use 
> doubles or
> > floats; they use base-2 fractions. This causes that your simple base-10
> > fractions can't be stored exactly.
>Unless the architecture implements the newest IEEE-754-2008 "decimal
>floating point"?
>I, personally, only know of one such machine. And it is not really
>"popular", except with really _huge_ customers: The IBM z series machine.
>It can run 5 different OSes: 4 are IBM proprietary the 5th is Linux. This
>machine is not Intel compatible, in any way, shape, or form. And it is
>super expensive. Which is why it's not "popular". Running Linux, the
>largest has 114 cores and SMT for 228 simultaneous "threads", all in a
>single "box". OK, it is a _big_ box. That's about the
>size of an entire 19-inch rack (more or less).
>Also, as Mr. Long indicated, this entire series of machines have a decimal
>data type in the hardware (normally called "packed decimal", which "packs"
>2 decimal digits per bytes, except for the last byte which reserves the
>last nybble for a sign indicator). There are instructions which translate
>from binary to/from packed decimal and packed decimal to/from "printable"
>decimal (called zoned decimal for historical reasons due to old style
>punched cards).
> > This is also why those other -not lite- engines have a decimal type,
> > processed using much slower integer logic coded in their engines... but
> > this doesn't make the current floating point arithmetic broken in 
> any way.
> > Bert
>John McKown

[sqlite] sqlite3 file as database

2015-09-15 Thread Jean-Christophe Deschamps
At 01:07 15/09/2015, you wrote:
>SQUISH was a database format for storing messages in FidoNet systems.

Geez, I don't even recall my FidoNet node number aka address... Time 
must have flown by faster than I thought.


[sqlite] sqlite3 file as database

2015-09-14 Thread Jean-Christophe Deschamps
I've decided to use .sq3; I'm mainly under Windows, where a dedicated 
extension is pretty handy for launching a DB manager, and .sq3 doesn't 
seem to collide with much around and leaves ample room for .sq4, .sq5, ...


[sqlite] explain plan change between SQLite and

2015-08-31 Thread Jean-Christophe Deschamps
At 11:58 31/08/2015, you wrote:
>But the only sure way to prevent anyone else from installing its own
>authorizer is to change the function name in the SQLite library.

That wouldn't really work under Windows: GetProcAddress can provide 
access to a DLL function by index (ordinal value) without knowledge of 
its declared name. Also, a quick look at the code with any hex editor 
will help find the changed name.

>you cannot protect against anything that code does.

That's the true bottom line. 

[sqlite] order by not working in combination with random()

2015-08-28 Thread Jean-Christophe Deschamps
At 09:14 28/08/2015, you wrote:
> > Looks like "How many zillion devices are going to misbehave if this 
> is fixed?"
>We will find out, because SQLite was changed two days ago:

Fine, let's sit down and watch the world collapse. No, that won't happen.


[sqlite] autoincrement field

2015-08-28 Thread Jean-Christophe Deschamps
At 23:25 27/08/2015, you wrote:
>I have a table structure like this:
>CREATE TABLE padstack (
> pin_number INTEGER,
> name TEXT
>Is there any way to get the 'id' of the newly inserted row? My insert 
>does not contain the 'id' field.

sqlite3_last_insert_rowid() is what you need. 

[sqlite] order by not working in combination with random()

2015-08-28 Thread Jean-Christophe Deschamps
At 21:11 27/08/2015, you wrote:

> > There are 2 distinct and voluntary function invocations, so I don't see
> > how the SQL engine could decide not to perform the second call.
>Agreed, though I'm pretty sure I've read messages in this thread at
>advocate the same function should return the same value when called
>multiple times in a single select statement.

While this does make sense for datetime functions, it doesn't really 
prove helpful, especially with random(), and it turns to nonsense for a 
generic user-defined function, as long as the SQL engine doesn't have 
the faintest clue what it does. That means external functions, not TSQL 
or friends.

>Using I'm testing the following:
>create table blah(a int);
>insert into blah values(1),(2),(3);
>select rand(), rand() as b from blah order by b;
>where rand() is whatever random number function the platform in question

Various engines may find it clever to treat the engine's own rand() in a 
particular way. But the real question is: how do they process a 
user-defined external function fct()?

>SQL Server 2008: same row three times; rand() is computed at the beginning
>of the query and cached. Which makes for decidedly non-random numbers and
>doesn't seem very useful IMO.

Don't trick me into commenting microsoftisms ;-)

>MySQL 5.6 returns three different rows, and it seems to be sorted on 
>b. Six
>invocations of the query always resulted in the set being sorted on b, so
>I'm going to assume it worked as expected without generating a third 

>PostgreSQL 9.3: same result as MySQL 5.6.
>Oracle 11g R2: same result as MySQL 5.6.

Sounds logical.

>I think it still comes back to my earlier comment: Would changing it to
>behave more like the most common / expected outcome above be a breaking

Looks like "How many zillion devices are going to misbehave if this is 
fixed?" vs. "Now developers will be certain that such constructs don't 
produce nonsensical results any more and will stop using 
application-level kludges instead".

I would give the second possibility a chance because the issue seems to 
have previously surfaced only about datetime functions, and indeed the 
construct is not the most common. The mere fact that PG, Oracle and 
MySQL converge on this point makes that the most sensible thing to do. 
But who am I?

[sqlite] order by not working in combination with random()

2015-08-27 Thread Jean-Christophe Deschamps

>I can see both sides of this debate, whether or not random() should be
>evaluated twice in this context:
>select random() from blah order by random()

There are 2 distinct and voluntary function invocations, so I don't see 
how the SQL engine could decide not to perform the second call.

>So let me pose a question. What should the following query generate?
>select random(), random() from blah order by random()

Ditto, but now with 3 direct function invocations, hence in the case of 
random(), 3 probably distinct values. That a more recent post shows 
SQLite issuing an error about the ambiguity of random() is even more 
worrisome!

This is completely different from referring to an *-already computed 
value-* like in

select random() rr where rr < 0 or rr >= 0 group by rr order by rr

Here there is only 1 function invocation and multiple references to an 
already computed column.
What must occur here is (paraphrased):

create temp table tmp as select random() rr;
select * from tmp where rr < 0 or rr >= 0 group by rr order by rr; -- provides the result set
drop table tmp;

Now let's broaden the picture: if rr is in fact a large, complex 
subquery, taking 45 hours to compute, then should "order by rr" rerun 
the subquery for every row comparison? Sounds ridiculous to me.

I don't know the SQLite code enough to start digging, but I doubt that 
the HAVING, WHERE, GROUP BY and ORDER BY clauses have any right or need 
to introspect the column definition and privately interpret what's 
there, let alone re-run what produced it. This is none of their business.
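For what it's worth, current SQLite behaves as argued here: an ORDER BY 
naming the alias of a random() column sorts by the already-computed 
values instead of re-invoking the function per row (a Python sketch):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE blah (a INT);
INSERT INTO blah VALUES (1), (2), (3), (4), (5);
""")

# Since the 2015 fix, 'b' in ORDER BY is an alias for the computed
# column, so the rows come back sorted on those random values.
vals = [r[0] for r in con.execute("SELECT random() AS b FROM blah ORDER BY b")]
ordered = vals == sorted(vals)
```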

[sqlite] order by not working in combination with random()

2015-08-27 Thread Jean-Christophe Deschamps
At 16:00 27/08/2015, you wrote:
>An *ORDER BY* clause in SQL specifies that a SQL SELECT statement
>returns a result set with the rows being sorted by the values of one or
>more columns. The sort criteria do not have to be included in the
>result set. The sort criteria can be expressions, including, but not
>limited to, column names, user-defined functions, operations, or CASE
>expressions. The expressions are evaluated and the results are used for
>the sorting, i.e. the values stored in the column or the results of the
>function call.

Pretty clear until now.

>So, applying this to our subject, "rr" is _NOT_ the name of a column in the

Yes: it's a temporary schema name.

>  It is an alias to a "user-defined function" (user-defined in this
>case meaning not defined in the SQL standard itself).

You're bending the meaning above. rr is not a C preprocessor define nor 
a C++ function reference; it is an alias to a column, i.e. a schema 
name, not an invocation of the computation that took place to create 
the value in the column beforehand.

>  Therefore, based on
>the last sentence of the above quote, the random() is evaluated AT THE 

That would be a bug (as it currently is). What Scott Hess just said 
shows that it isn't possible to perform a sort under this 
interpretation. Let's forget random() and instead use a user function 
inc() which initializes an integer sequence when fed an integral 
parameter and returns the next integer every time it's invoked without 
one.

Then "select inc() col order by col" wouldn't return the correct values. 
Granted, the order by clause isn't of much use here, but if such a 
construct is ever used to assign increasing reference numbers in steps 
of 1 to items in a stock, then you're going to throw SQLite away.

Worse: say a user function new_uuid() returns new unique IDs and also 
stores them elsewhere (e.g. by communication with the underlying 
application); you'll end up with diverging lists of "used" uuids.

The fact that this example delivers a result clearly violating the 
where clause (and also the order by if you put "order by rr") 
demonstrates that this interpretation doesn't hold water.

drop table if exists myTable;
drop view if exists myView;
create table myTable (a INT);
INSERT INTO myTable VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9);
CREATE VIEW myView AS SELECT a, random()%100 AS rr FROM myTable;
SELECT rr FROM myView WHERE rr < 30;

The wrong behavior contradicts SQLite's own documentation, and 
especially the second bullet quoted below:

>Each ORDER BY expression is processed as follows:
>* If the ORDER BY expression is a constant integer K then the 
> expression is considered an alias for the K-th column of the result 
> set (columns are numbered from left to right starting with 1).
>* If the ORDER BY expression is an identifier that corresponds to 
> the alias of one of the output columns, then the expression is 
> considered an alias for that column.
>* Otherwise, if the ORDER BY expression is any other expression, 
> it is evaluated and the returned value used to order the output rows. 
> If the SELECT statement is a simple SELECT, then an ORDER BY may 
> contain any arbitrary expressions. However, if the SELECT is a 
> compound SELECT, then ORDER BY expressions that are not aliases to 
> output columns must be exactly the same as an expression used as an 
> output column.


[sqlite] Why sqlite show qualified column names when selecting from views ?

2015-08-25 Thread Jean-Christophe Deschamps
At 22:28 25/08/2015, you wrote:

>In the next version of SQLite3 the names of all columns will be 

"Don't trust me" could be more appropriate. 

[sqlite] Proper way to abort

2015-08-24 Thread Jean-Christophe Deschamps

>I have some queries that may take 5-15 seconds to complete. Sometimes 
>the situation changes shortly after starting the query where my 
>program does not need those results anymore and the program wants to 
>abort and begin a different query instead.
>My question is: What is the proper way to abort a query, or other 
>operation during execution that will not cause any issues? By issues I 
>mean files not being closed, or memory not being free'd and such 
>because the operation was aborted and did not finish normally, but 
>I can continue normally after the abort..


[sqlite] Database synchronisation

2015-08-13 Thread Jean-Christophe Deschamps

>This is normally resolved by logging the time each command was issued

Not "when it was issued" but when exactly it was finally committed. When 
one or more update statements are part of a transaction, you now have 
to use sqlite3_profile to log when exactly the transaction ended (i.e. 
can be considered actually stored). That implies both sqlite3_trace and 
sqlite3_profile need to be monitored to keep track of what constitutes 
an update transaction and when precisely it was committed.

If machines A and B (or more) run independently, I don't believe you 
need a central "untouched" copy: you can very well carefully merge logs 
and determine for each machine what change took place after every of 
its own updates for every row in every table.

Another tough issue is how to re-sync machine times down to a very 
small time window in a common time frame. Whatever small "vulnerable" 
time window you achieve, you can be assured that it will cause a 
problem someday, resulting in updates merged out of real-world order.

Note that merging can also violate constraints in the merged database. 
Say a rule (enforced by constraint or at application level) limits the 
number, volume or nature of the updates permitted in a given timeframe: 
you have no guarantee that this rule will be enforced if independent 
(albeit individually conformant) changes are made but only later merged. 
Several changes can very well have taken place independently without 
breaking local rules (to the knowledge of the local machine), only to 
discover at merge time that some of the last operations would have 
been rejected had they been submitted to a centralized machine/DB.

For instance, you have an export restriction (available stock or 
manufacturing capability) which allows no more than N given items XYZ 
to be exported (or sold) per day/week/month. Even if independent 
machines+DBs don't violate the rule from their own point of view, the 
merged dataset may be in violation of the limit(s) or simply rejected 
by SQL constraints at merge time.

While I agree that this is not exactly in the realm of raw data merge 
in the right order, this still is a serious problem to deal with, not 
solvable with such simple merging operations. 

[sqlite] Database synchronisation

2015-08-13 Thread Jean-Christophe Deschamps
Hi Simon,

>Method 2
>Ignore Method 1.  In each copy of your database keep a log of all 
>INSERT and UPDATE commands executed since the last 'synchronize':
>CREATE TABLE commandsSinceLastSynch (theCommand TEXT)
>To synchronise the two copies, play back the log for copy A to copy B 
>and the log for copy B to copy A.

Things are not that simple: you need to handle the case where a row has 
been updated in both DBs. Comparing update commit (important!) 
timestamps in both logs helps determine which one came first/last. Of 
course, if triggers changed more rows and timestamps are close enough, 
then FUD enters the room.
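For what it's worth, a minimal sketch of the statement-logging idea is below, using Python's sqlite3 module, whose trace callback stands in for the C-level sqlite3_trace. The table name and schema are mine, not from the thread.

```python
import sqlite3
import time

# Hedged sketch: timestamp every statement as it executes, so the moment
# a transaction actually COMMITs -- the instant that matters for merge
# ordering -- can be recovered from the log afterwards.
log = []
con = sqlite3.connect(":memory:", isolation_level=None)  # manual txn control
con.set_trace_callback(lambda stmt: log.append((time.time(), stmt)))

con.execute("CREATE TABLE t (x INTEGER)")
con.execute("BEGIN")
con.execute("INSERT INTO t VALUES (1)")
con.execute("COMMIT")

# The INSERT only becomes durable at the COMMIT timestamp, not when issued.
insert_ts = max(ts for ts, s in log if s.lstrip().upper().startswith("INSERT"))
commit_ts = max(ts for ts, s in log if s.lstrip().upper().startswith("COMMIT"))
```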

[sqlite] CSV excel import

2015-08-02 Thread Jean-Christophe Deschamps
At 18:38 01/08/2015, you wrote:

>Nobody mentions it because it is as irrelevant as bemoaning the fact 
>that CSV cannot store lawn-chairs or Java objects. It wasn't intended 
>to do so.

Exactly. All I mean is that with only very few additional strict rules 
it can be changed into a basic type compliant vehicle able to reliably 
transfer data between many applications, including SQLite. A 
spreadsheet internally differentiates between a string, a number and an 
empty cell, for every cell, so the text output format should clearly do 
the same, unambiguously IMHO.

>   Neither, for that matter, does it store Integers or Reals as you go 
> on to mention - It is completely typeless (moreso than SQLite).

You know very well that SQLite is far from being typeless. What you put 
in you get out thru the kaleidoscope of affinities. Its storage classes 
are perfectly defined and obey precise rules.

>  It stores only one single thing: Strings. It has only one single 
> guide: How to correctly add /the string/ to a row and column and how 
> to read it back. How you interpret those strung-together characters 
> is up to the composer/decomposer (as Simon mentioned) - the CSV 
> standard has no feelings about it.

True, but the issue with most variants of CSV files floating around is 
just that: it's up to the reader to decide if 12345.4890E-13 is a float or a 
string (for instance the reference of some item by a given supplier.) 
By forcing string delimiters, you gain at least SQLite type resilience 
in/out. If you follow the RFC by the letter and only use string 
delimiters when a string actually contains the delimiter, you lose the 
capacity to unambiguously determine basic types.

By adopting a set of simple rules, you can reliably import/export data 
blindly into/from SQL (and outside as well) without losing basic type 
information even at the individual field level. Of course here I mean 
all SQLite datatypes, which is what we're talking about in this list.

>For extra fun - How must a value that are both in and not in quotes be 
>interpreted? i.e if I have this csv line, what values must the parser 
>end up with?:
>1, "2", "3" 4, 5 "6", 7
  ^ ??? (fixed font required)
An error there: this isn't valid CSV under any variant I know of. At 
the limit, the grammar could be enhanced (or bastardized?) to allow 4 
to be considered a comment, but I never had any incentive to handle 
this kind of cosmetic variation. Even in that case, "6" should 
also be seen as a comment, but this is going far beyond what I (and 
most people) need and routinely use.

>i.e. You've made your own file specification using the CSV standard as 
>a kick-off point.

Exactly: I don't pretend to have invented warm water, just decided for 
an easy to implement variant which fixes most of the issues with CSV 
being initially type-blind.

Additionally, I enforce UTF-8 without BOM encoding and allow insignificant 
Unicode horizontal whitespace before and after field separators as 
well as at the beginning and end of line.

Obviously if one needs to use a format offering more complete 
semantics, then JSON or XML are there for use, albeit at significantly 
higher cost.

[sqlite] CSV excel import

2015-08-01 Thread Jean-Christophe Deschamps

At 16:36 01/08/2015, Igor wrote:

>There are many real problems with CSV - no need to make up imaginary ones.

Indeed, and I'm surprised no one mentioned this from the very start of 
this thread: Nulls. There is no real provision in the RFC to represent 
Null, or to distinguish Null from an empty string.

I've long established my own convention to transfer data between the 
language I use most (AutoIt), CSV files and SQLite and I'm very happy 
with that.

Let's look at which datatypes we have to deal with, at least for SQLite:
   o) string
   o) integer
   o) real
   o) binary
   o) Null   <-- that's very important and most often forgotten!

Within my humble convention, a CSV field is either:
   o) a string delimited by a parametrable delimiter (double quote by 
default) where embedded delimiters are escaped by doubling
   o) an integer, optionally signed
   o) a real (e.g. 3.1415926 or -5.6e-12)
   o) a binary in 0x0123456789abcdef form
   o) an empty field or a field containing Null (w/o delimiter) 
represents Null

Fields are separated by separators (comma by default).
Rows are terminated by any Unicode line termination (as defined by PCRE).
An optional header may be present with column names as a series of 
string-type fields.

That deals with SQLite datatypes. I additionally accept datatypes 
from/to AutoIt to/from SQLite (some of them don't make much sense 
to store in a DB, but anyway):
   o) a field containing True or False (w/o delimiter) represents 1 or 
0 as a boolean
   o) a field containing "<-~Default~->" represents the keyword Default
   o) function references are encoded as strings in the format 
   o) pointers are encoded as strings in the format "<-*0123456789*->"
   o) Windows handles are encoded in the form "<-&0123456789&->"
   o) a C-style structure is encoded by "<-{0123456789ABCDEF}->"
and the following more exotic ones (their actual content is not stored):
   o) a COM object is denoted by the format "<-@objectName@->"
   o) an array is denoted by the format "<-[3][4]...[2]->"
   o) all other unrepresentable variants are denoted by "<-?variantType?->"

I agree that the convention of using strings like "<- ... ->" is 
questionable and is merely a hint that some variable(s) should have 
been handled otherwise, but if one focuses only on SQLite (and Excel or 
such) datatypes, the format is definitely unambiguous and easily parsed 
by a single regexp and encoded with minimum effort.
The benefit of representing datatypes unambiguously in round-trip 
operations should be obvious.
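As an illustration only of the claim above, such a typed field can indeed be decoded with a single regexp. This Python sketch follows my convention (not a standard); the exotic "<-...->" AutoIt forms are left out, and all names and the exact grammar are my own.

```python
import re

# Hypothetical single-regexp decoder for typed-CSV fields:
# delimited strings, optionally signed integers, reals, 0x... blobs,
# and empty/Null fields mapping to SQL NULL.
FIELD_RE = re.compile(
    r'^\s*(?:'
    r'"((?:[^"]|"")*)"'                                  # delimited string
    r'|(-?\d+\.\d*(?:[eE][-+]?\d+)?|-?\d+[eE][-+]?\d+)'  # real
    r'|(-?\d+)'                                          # integer
    r'|(0[xX](?:[0-9a-fA-F]{2})+)'                       # binary blob
    r'|(Null|)'                                          # Null or empty -> NULL
    r')\s*$')

def parse_field(text):
    m = FIELD_RE.match(text)
    if m is None:
        raise ValueError("not a valid typed-CSV field: %r" % text)
    s, real, integer, blob, _null = m.groups()
    if s is not None:
        return s.replace('""', '"')   # undouble embedded delimiters
    if real is not None:
        return float(real)
    if integer is not None:
        return int(integer)
    if blob is not None:
        return bytes.fromhex(blob[2:])
    return None                        # SQL NULL
```

With such rules, 1234 comes back as an integer while "0001234" comes back as a string, which is exactly the round-trip property argued for above.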

Granted that doesn't solve the direct import from Excel CSV or some 
other data source using its own "standard", but I prefer having to code 
the few lines of AutoIt needed to grab data directly from Excel or 
other COM objects and format the data according to my own convention 
than rely on variable formats found around.

Again, the issue of not providing a clear way to represent SQL Null is 
a big hole in the RFC (I understand that wasn't the purpose at the time 
it was issued, and that it's now too late to enforce something 
better). Also, the optional string delimiter makes it essentially 
impossible to distinguish the integer value 1234 from the string 
"0001234", another issue that needs addressing. 

[sqlite] Mozilla wiki 'avoid SQLite'

2015-06-17 Thread Jean-Christophe Deschamps

>There's never been that kind of tone; the article has been written to point 
>new Mozilla codebase contributors at possible pitfalls we already hit in
>the past, and actually help them make informed decisions.

That's not how one reads it. Start with only the title: 
"Performance/Avoid SQLite In Your Next Firefox Feature"

Do you read it with me? "If you want performance at any rate, AVOID SQLite."

Then it starts enumerating alleged "SQLite pitfalls", pointing out 
several topics which are presented as inherent drawbacks to using 
SQLite from a FF plugin.

This is not "helping them make their decision", just simply bashing the 
library, which you seem to expect to be almost reserved to core code.

>It actually briefly explains in which cases a JSON log can work better

Starting at ~1 MB of data, it advises storing stuff as compressed JSON. Now 
you're seriously trying to tell me that with a multi-MB compressed 
complex JSON:

-) locking the compressed JSON file
-) reading it up in full, in memory
-) unpacking it (with much more memory consumption, of course)
-) parsing it (again more memory workspace needed)
-) scanning the object tree in search of the data you're after
-) reading and changing a single value in the tree
-) flattening the whole tree back to JSON form
-) repacking it
-) rewriting the lot
-) releasing the lock

ALL of this sequence (in a multi-threaded environment, with all the 
extra burden that comes with it) is going to take less time, waste fewer 
CPU cycles and use less RAM than a single SQLite UPDATE wrapped in a 
safe-to-use API, all in a smaller "footprint"? Seriously? And what 
happens when two or more compressed JSONs have to be used together to 
obtain/access the required information? Looks like a can of worms in 
this case.

Note how the footprint argument is hypocritical, since the SQLite 
library is already loaded and ready for work because it's much in use by 
the main thread: the footprint is always there, whether SQLite is used 
in plugins or not. Any decently written safe wrapping API is 
ridiculously small and doesn't even count.

I don't seem to be the only one utterly doubtful about this "compressed 
JSON > SQLite" assertion when applied to the general case.

>It tries to make people think before doing.

Yes a simple flat file can be the best solution in simple cases but 
that shouldn't trigger the AVOID verb in general cases. And less simple 
flat files can fit the bill for another range of use cases.

It would be wise to shape the arguments (a number of them being valid) 
in a different way.

>It states to evaluate alternatives. Existing alternatives like OS.File
>writeAtomic and JSON. Those are already used with success in both the
>products and add-ons, when it makes sense.

The last part of your sentence, "when it makes sense", clashes with the 
title (AVOID). Avoiding is not evaluating.

>Sure the developer must think about durability, backups, coherence, but 
>would it not be the same if he'd use SQLite?

Definitely not, because you know that SQLite already takes the greatest 
care of all the possible issues on countless platforms, and this comes 
free: the code is already there, highly optimized and working. Code 
written by an "average" plugin developer can't compete with years of 
experience with those issues, which are easy to overlook. Whenever a 
dark corner resulting in a bug in SQLite surfaces, the fix comes in no 
time and fixes all applications using SQLite at once.

> > After all, it's Mozilla devs themselves who designed plugin APIs 
> and let
> > "spurious main-thread SQL statements" be possible. If they were sooo
> > clever, they would never have allowed that and they also would have 
> wrapped
> > SQLite interface in a strictly limited set of rules enforced by a 
> safe API.
> > That, they won't tell you.
>And they'd have a 100% bug-free product with millions of lines of code... No,
>that's unrealistic.

Great! So I've news for you: you can produce proven-correct *-code-* 
using today's formal methods tools. Granted, it would actually be 
unrealistic to dream of switching to a formal development process 
within months, even years, but such things do exist. Yet I never 
pretended that offering a safer API would solve all of the issues, just 
a number of those the text is about.

>  The Mozilla codebase comes from the 90s Netscape
>codebase, at that time the most common thing was a single-threaded and
>single-process browser, able to show text, some images and tables. The
>reality evolved A LOT and the code had to evolve to cope with it. At the
>time mozStorage (the SQLite wrapper) was written, there was still that kind 
>of vision, and it was written as a main-thread synchronous API. Sure, now
>we all know it was wrong, but at that time it was the right-ish thing to
>do. The API grew a purely asynchronous alternative, but when you have
>hundreds of millions of users and thousands of add-ons using an API, you 
>can't just say "sorry, we now break you 

[sqlite] Mozilla wiki 'avoid SQLite'

2015-06-16 Thread Jean-Christophe Deschamps
I've refrained from commenting on the OP's linked page but I can't resist 
any longer. I won't enter the C, C--, C++, C#, Java, Python, COBOL, 
Basic, assembler, Ruby, YouNameIt sub-debate.

I found the rant on MozillaWiki way too wrong on too many points to 
keep quiet.

While I can agree with some of the most obvious "don't be dumb" 
remarks, there are many things that can't be left unchallenged. And I 
don't appreciate the overall tone: "WE at MozillaBigBalls are all 
clever enough to use SQLite smartly but you mere mortals are not, so 
don't even think of using that fragile piece of software."

In its introduction, the blurb talks about SQLite's hidden complexity. 
Then it advises Mozilla plugin developers to avoid SQLite at almost 
any cost, due to "performance reasons", and recommends compressed 
JSON/logs instead.

That's very odd, to say the least.

All regular readers of this list have seen a number of threads showing 
how hard it is, in the general case, to guarantee both some concurrency 
and ACID properties down to the magnetic surface or the chips' gates.

You all know that SQLite actually takes all possible steps to bring this 
guarantee the closest possible to reality. Yet these guy(s) advocate 
that *-every-* plugin dev should independently roll his own storage 
layer mostly from scratch, while pointing out the instability or 
uncertainty associated with mobile devices' processes/OSes. That is 
nothing but calling for (1) a huge number of new entertaining bugs and (2) at 
best, duplicated efforts to make that work in a highly multi-threaded 
environment where concurrency will need to be dealt with with the greatest 
care.

No, the "average" plugin dev won't be able to come up in minutes with 
a rock-solid storage layer portable to a myriad of platforms. And this 
is precisely what SQLite has been through over the years.

As has already been pointed out, storing largish private data in 
compressed JSON or log file(s) will be (beyond often being a greater 
resource hog) a real nightmare when several tabs/windows possibly 
need to concurrently read/modify/write stuff in there. Writing rants 
about SQLite being a resource hog and a performance breaker is one 
thing; guiding towards a clean way to replace it in practice is another 
(mystical) beast, at least for the average plugin developer.

Then another question remains: instead of putting the onus on SQLite 
being huge (footprint) and slow (CPU, the 22s "example"), why don't the 
author(s) of the blurb question the real root causes of the evil they 
condemn and openly recognize that the problem lies entirely elsewhere, 
perhaps in Mozilla core code design itself?

They almost silently agree that allowing main-thread SQL access is a 
huge no-no. They recognize that allowing unlimited use of various 
(possibly conflicting) pragmas, random schemas and/or statements could 
harm. But what those hypocritical guys don't tell you is that the root 
cause is the chaotic, spiralling development history of Mozilla.

After all, it's Mozilla devs themselves who designed plugin APIs and 
let "spurious main-thread SQL statements" be possible. If they were 
sooo clever, they would never have allowed that, and they also would have 
wrapped the SQLite interface in a strictly limited set of rules enforced by 
a safe API. That, they won't tell you.

Also, if Mozilla devs were sooo much more clever than average Joe and 
sooo caring about performance, they certainly would have fixed the 
hundreds of memory leaks that plague FF users (at least on Windows and 
for almost a decade) and they would have "spring cleaned" their messy 
codebase so that one can't see JS fragments kept running after the FF 
tabs that launched them were closed, and blatant bugs like these. Watch 
memory bytes slowly growing toward 2 GB while FF is left "idle" with the 
RJ45 unplugged, without any plugin set up... Is that the fault of 
plugin devs, Dr. Hipp's laziness, or the result of their own careless work?

With today's FF W7 x64 (ditto for x86) release, you can't let it have, 
say, 30 tabs open 24/7 on "passive" forum pages (no ads, no sound, no 
video, nothing dynamic) for more than 2-3 weeks on average. Then 
the machine gets so slow and unresponsive that you can only kill FF and 
restart it. By the time it has reopened and refreshed all the tabs, you 
witness its memory bytes now sitting at 600 MB (compared to ~2 GB before), 
with everything as functional as before killing it. Is that also SQLite's 
fault or is that a sad joke?

These are things that you don't see with either IE or Chrome (each of 
them having their own drawbacks too).

In short, and beyond a few trivial pieces of advice, the authors of this rant are 
either surprisingly ignorant or utterly hypocritical, but unfair at any rate.

I've nothing against Mozilla per se and I sincerely acknowledge that 
maintaining such a huge open product is very hard, but this page is 
simply plain wrong.

[sqlite] User-defined types -- in Andl

2015-06-09 Thread Jean-Christophe Deschamps
At 13:50 09/06/2015, you wrote:

>BTW I don't remember the last time I saw SQL like this. Understanding it
>might be the challenge

Most probably! I can imagine that you don't encounter such style in 
common business-like environments.

Take your time, this SQL piece is clearly beyond normal human 
understanding. Perhaps getting in touch with the author could help. 
Everyone could possibly benefit of innovative views and avenues.


[sqlite] User-defined types -- in Andl

2015-06-09 Thread Jean-Christophe Deschamps
At 08:27 09/06/2015, you wrote:

>Andl is at a slightly higher level than SQL for writing simple queries.
>Where it shines is writing complex queries that involve user-defined 
>custom transformations and custom aggregations. For complex relational
>operations there is nothing I know that can come close, productivity 

You call for complexity, you get it!

What I would find pretty convincing is seeing how andl would translate 
the self-contained sudoku solver posted by E. Pasma in this post:

Granted, this is far from the typical SQL you can find in routine use, 
but I believe that andl being able to elegantly translate it would 
certainly impress a number of readers and make many of us more 
interested in digging further.

I'm not throwing such a mayhem challenge at you in the tone of "if andl 
can't do that, then shut up". Maybe andl is not yet complete enough 
today to achieve that and this wouldn't be a big issue. But if it can 
I'm sure andl will attract more attention.



2015-05-21 Thread Jean-Christophe Deschamps
At 08:51 21/05/2015, you wrote:

>I used sqlite3.exe by invoking it on the command line with the name of 
>a database.
>Next I typed
>  alter table database add column real_length numeric;
>Next I typed .fullschema
>and I'm getting:
>sqlite> .fullschema
>CREATE TABLE [database] (
>[database_name] TEXT  NULL,
>[table_name] TEXT  NULL,
>[data_type] TEXT  NULL,
>[data_size] INTEGER  NULL,
>[column_name] TEXT  NULL,
>, real_length numeric);
>/* No STAT tables available */
>Now I wonder why I don't read
>CREATE TABLE [database] (
>[database_name] TEXT  NULL,
>[table_name] TEXT  NULL,
>[data_type] TEXT  NULL,
>[data_size] INTEGER  NULL,
>[column_name] TEXT  NULL,
>[real_length] NUMERIC;
   ^ missing )

Probably because you used a third-party tool (like SQLite Expert) to 
create the table. Such tools often --and I know Expert does-- enclose 
schema names in a given pair of delimiters: [...], "..." or `...`, while 
the CLI simply issues the SQL command exactly as you typed it. 
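A quick way to see the effect is to read the stored DDL back from sqlite_master. This is a sketch using Python's sqlite3 module; the table merely mimics the one quoted above.

```python
import sqlite3

# ALTER TABLE ADD COLUMN splices its own text into the stored CREATE TABLE
# statement, so the new column won't carry the [bracketed] style the
# original tool used for the other columns.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE [database] ([database_name] TEXT NULL)")
con.execute("ALTER TABLE [database] ADD COLUMN real_length NUMERIC")
(ddl,) = con.execute(
    "SELECT sql FROM sqlite_master WHERE name = 'database'").fetchone()
print(ddl)  # the appended column keeps exactly the text you typed
```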

[sqlite] AUTOINC vs. UUIDs

2015-05-21 Thread Jean-Christophe Deschamps
At 01:33 21/05/2015, you wrote:

>HUIDs have the birthday problem.  How many people do you need to have 
>in a group before two of them will have the same birthday?  HUIDs can 
>only be called LUID's (Locally Unique Identifiers) if you go to the 
>trouble of ensuring uniqueness *before* using one.  Validating Global 
>uniqueness is, quite obviously, a very difficult problem.
> > -Original Message-
> > From: sqlite-users-bounces at 
> [mailto:sqlite-users-
> > bounces at] On Behalf Of Jean-Christophe Deschamps
> > Sent: Wednesday, 20 May, 2015 15:38
> > To: General Discussion of SQLite Database
> > Subject: Re: [sqlite] AUTOINC vs. UUIDs
> >
> > At 23:24 20/05/2015, you wrote:
> >
> > >On Wed, May 20, 2015 at 2:20 PM, Steven M. McNeese
> > > wrote:
> > > > Often times people will use GUIDs as primary keys when different
> > > systems need to generate
> > > > rows and then merge together. Like an application that works with
> > > offline clients that push the
> > > > data to a server when the connect. However there are other ways of
> > > accomplishing the same thing.
> > >
> > >For curiosity - Is there a site/blog post somewhere
> > >enumerating/listing these other ways ?
> >
> > I don't know, but let's say your rowids range from -9223372036854775807
> > to 9223372036854775807, that's 18446744073709551614 possible rowids.
> >
> > Imagine that in 50 years, the total population on Earth will grow to
> > (say) 50 billion people (I hope it won't!).
> >
> > 18 446 744 073 709 551 614 / 50 000 000 000 = 368 934 881.474 191 
> 032 28
> >
> > That leaves you the possibility to assign a unique identifier to every
> > potential client on the planet (and a big one) in nearly 369 million
> > servers concurrently without the faintest risk of collision.
> >
> > At this rate, you may limit rowids to only positive integers ... and
> > hire a large army of telemarketers.
> >

I didn't advocate for HUIDs, at all!

My non-overlapping split range of rowids spread over N devices is 
provably immune to the birthday issue. This partitioning of the rowid 
range is equivalent to (but simpler than) giving each device a globally 
unique prefix.

And within a few years it's quite possible that the typical int will be 
__int128, allowing this scheme to market any item over the entire galaxy. 
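The scheme is trivial to sketch in a few lines of Python. The device count below is an arbitrary assumption for illustration, roughly matching the estimate earlier in the thread.

```python
# Sketch of the non-overlapping rowid partitioning described above:
# give each device a disjoint slice of the signed 64-bit rowid space,
# so locally generated ids can never collide when databases are merged.
LO, HI = -(2**63) + 1, 2**63 - 1   # SQLite's rowid range
N_DEVICES = 369_000_000            # assumed fleet size, per the estimate above

SLICE = (HI - LO + 1) // N_DEVICES

def rowid_range(device_id):
    """Inclusive [first, last] rowid slice reserved for one device."""
    first = LO + device_id * SLICE
    return first, first + SLICE - 1
```

Each device then runs a plain INSERT with rowids drawn from its own slice, and no coordination is needed at merge time.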

[sqlite] AUTOINC vs. UUIDs

2015-05-21 Thread Jean-Christophe Deschamps
At 23:24 20/05/2015, you wrote:

>On Wed, May 20, 2015 at 2:20 PM, Steven M. McNeese
> wrote:
> > Often times people will use GUIDs as primary keys when different 
> systems need to generate
> > rows and then merge together. Like an application that works with 
> offline clients that push the
> > data to a server when the connect. However there are other ways of 
> accomplishing the same thing.
>For curiosity - Is there a site/blog post somewhere
>enumerating/listing these other ways ?

I don't know, but let's say your rowids range from -9223372036854775807 
to 9223372036854775807, that's 18446744073709551614 possible rowids.

Imagine that in 50 years, the total population on Earth will grow to 
(say) 50 billion people (I hope it won't!).

18 446 744 073 709 551 614 / 50 000 000 000 = 368 934 881.474 191 032 28

That leaves you the possibility to assign a unique identifier to every 
potential client on the planet (and a big one) in nearly 369 million 
servers concurrently without the faintest risk of collision.

At this rate, you may limit rowids to only positive integers ... and 
hire a large army of telemarketers. 

[sqlite] AUTOINC vs. UUIDs

2015-05-21 Thread Jean-Christophe Deschamps
At 22:50 20/05/2015, you wrote:

>Just to chime in here: I think using UUIDs for primary keys is an 
>terrible idea, instead. First off, how are you going to generate them? How
>will you avoid collisions? Why should it be necessary to using that much
>storage space for a primary key and what's the reason for not using
>blazingly-fast integers?...

Agreed. Also the article denounces the "information leakage" introduced 
by exposing rowids to applications, but keeps quiet about a significant 
drawback: replacing those integers typically shown in the leftmost 
column of a grid (which don't necessarily expose meaningful information) 
with UUIDs is going to eat very significant screen space.

Finally I can see many people easily remembering 80486 as an identifier 
of  but almost none able to do the same with their 
c0b656b1-7351-4dc2-84c8-62a2afb41e66 example UUID.

Granted, the risk of accidental collision is minimal but still 
non-zero. Yet their (careful) creation and, moreover, the repeated 
conversions from 128-bit blob to split-hex ASCII for human 
"convenience" are a useless waste of cycles.

To those who say "UUIDs don't have to be displayed to users": then why 
complain that INTEGER PRIMARY KEYs leak information if they are not 
displayed either?


[sqlite] Fwd: bulk upload from Excel

2015-05-11 Thread Jean-Christophe Deschamps

>-- Forwarded message --
>From: "maksood alam" <786maksood at>
>Date: May 11, 2015 11:21 AM
>Subject: bulk upload from Excel
>Hi Team,
>Do we have any option like below for the bulk upload.
>INSERT INTO tbl_test1 SELECT * FROM [asd$] IN '" &
>ThisWorkbook.FullName & "' 'Excel 8.0;'"

You could use AutoIt (freeware), which offers support for both Excel and 
SQLite.
This is a basic-type Windows scripting language with a load of features 
to ease automation up to create real-world applications.


[sqlite] VBA Sqllite blob data

2015-05-11 Thread Jean-Christophe Deschamps
Hi Simon,

>On 11 May 2015, at 3:25pm, Jean-Christophe Deschamps 
> wrote:
> > Isn't that the actual size limit of Excel strings?
>Sort of.  Although Excel cells can contain text longer than 255 
>characters, VBA functions ignore text parameters after the 255th 
>character.  This is because VBA, when originally written, used 
>Pascal-type strings internally.

Yes, I meant "using either VBA or COM objects". People regularly 
complain that they hit this limit when using Excel with AutoIt (the 
communication uses COM).


[sqlite] VBA Sqllite blob data

2015-05-11 Thread Jean-Christophe Deschamps
Isn't that the actual size limit of Excel strings?

At 15:27 11/05/2015, you wrote:

>I have a db3 config file that contains several different records. I 
>need to select one specific record and store it in an excel worksheet 
>cell. I have been able to create a routine to extract the information 
>that I need once I get the record into a worksheet cell. The problem 
>that I have run into is the record is longer than 255 characters and 
>gets truncated at that limit.
>-Original Message-
>From: sqlite-users-bounces at 
>[mailto:sqlite-users-bounces at] On Behalf Of 
>Bart Smissaert
>Sent: Friday, May 08, 2015 5:23 PM
>To: General Discussion of SQLite Database
>Subject: Re: [sqlite] VBA Sqllite blob data
>What do you mean with: into Excel? Into a cell in the worksheet, into 
>a VBA variable?
>What code did you try? I never use blobs, but I don't think it should 
>be a problem.
>On Fri, May 8, 2015 at 8:15 PM, Preston King  
> > Does anyone have an example of how to read sqlite blob records, that
> > are not pictures, into Excel? I have been trying to find some VBA code
> > to do this but am not having much luck. Thanks

[sqlite] sqlite3 and Unicode

2015-05-05 Thread Jean-Christophe Deschamps
At 09:22 05/05/2015, you wrote:

>On 4-5-2015 20:54, Richard Hipp wrote:
>>On 5/4/15, Peter Haworth  wrote:
>>>When using the .dump command with .output to a filename, what 
>>>encoding does
>>>sqlite3 for the file? Same as the database encoding?  Is it possible to
>>>change whatever encoding is used?
>>>Similarly, when using the .import command, does sqlite3 assume the input
>>>file is the same encoding as the database?
>>I think it assumes UTF8 in both cases.
>when using '.dump', it does not create a UTF8 file on Windows 7 
>(SQLite version

It's UTF16-LE data.


[sqlite] With recursive question

2015-03-03 Thread Jean-Christophe Deschamps
At 04:05 01/03/2015, you wrote:

>On 2/28/2015 7:22 PM, Jean-Christophe Deschamps wrote:
>>Once again thank you very much Igor. I was making my life miserable
>>trying to scan the "tree" from the other end.
>That, too, could be arranged. Something along these lines:
>with recursive FileDirs as (
>   select FileId, FileDirID ancestor, '' as path from Files
>union all
>   select FileId, ParentID, '/' || DirName || path
>   from FileDirs join Dirs on (ancestor = DirId)
>   where DirID != 0
>)
>select FileId, FileName, path
>from FileDirs join Files using (FileId)
>where ancestor = 0;
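For readers who want to try the query, here is a runnable version as a Python sketch. The Files/Dirs schema and the sample rows are my reconstruction from the query itself, not from the thread.

```python
import sqlite3

# Build paths bottom-up: start from each file's directory and walk the
# ParentId chain to the root (ancestor = 0), prepending one name per step.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Dirs (DirId INTEGER PRIMARY KEY, ParentId INTEGER, DirName TEXT);
CREATE TABLE Files (FileId INTEGER PRIMARY KEY, FileDirId INTEGER, FileName TEXT);
INSERT INTO Dirs VALUES (1, 0, 'usr'), (2, 1, 'local'), (3, 2, 'bin');
INSERT INTO Files VALUES (10, 3, 'sqlite3');
""")
rows = con.execute("""
WITH RECURSIVE FileDirs AS (
    SELECT FileId, FileDirId AS ancestor, '' AS path FROM Files
    UNION ALL
    SELECT FileId, ParentId, '/' || DirName || path
    FROM FileDirs JOIN Dirs ON (ancestor = DirId)
    WHERE DirId != 0
)
SELECT FileId, FileName, path
FROM FileDirs JOIN Files USING (FileId)
WHERE ancestor = 0;
""").fetchall()
print(rows)  # [(10, 'sqlite3', '/usr/local/bin')]
```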

Thanks again Igor for turning on the light!


[sqlite] full table scan ignores PK sort order?

2015-03-03 Thread Jean-Christophe Deschamps
At 11:27 03/03/2015, you wrote:

>- the full table scan returns rows in rowID order, which is the order 
>in which the rows were added to the table

No and no.

An SQL engine doesn't guarantee any row "order" unless you explicitly 
force an ORDER BY clause. Think of row order as random, where rowid 
order is just one possibility among zillions of others. Of course, neither 
SQLite nor other engines willingly use random() to foil your 
expectations, but you should never rely on such an implementation detail.

Also rowids are technically independant of insertion order: you may 
feed any valid random literal rowids at insert time.


[sqlite] With recursive question

2015-03-01 Thread Jean-Christophe Deschamps
Once again thank you very much Igor. I was making my life miserable 
trying to scan the "tree" from the other end.

At 00:37 01/03/2015, you wrote:

>It's a bit unfortunate that you made the root a parent of itself. 
>Forces the query to make an extra check to avoid infinite recursion.

Yeah, I know it was bad; it was only a quick & dirty sample. My use case 
is rather more complex, but your advice works like a charm.

Without abusing your time and patience, do you see a simple solution 
better fitted to a case where there are zillions of "directories" at varying 
depths but only very few "files", that is, without listing all the 
possible paths first and finally matching them to the files' directories?

[sqlite] With recursive question (addendum)

2015-03-01 Thread Jean-Christophe Deschamps
Forgot to mention that I see the issue as fundamentally distinct from 
the Mandelbrot example in the SQLite docs: it uses a construct 
(outlined in procedural statements) like:

For x in xRange
   For y in yRange
      compute something like f(x, y) -- here, y is independent of x
   Next y
Next x

Put in table form:
build a table for varying x's
build a table for varying y's
compute f(x, y) for the cross join

My issue is that yRange depends on the current x for its starting 
point, so in SQL WITH parlance, I can't have a table x built before 
entering the computation.

It must be simple but I lamentably fail to see the light.

[sqlite] With recursive question

2015-03-01 Thread Jean-Christophe Deschamps
Dear list,

After trying a number of ways, I'm at a loss solving the seemingly simple problem below.

For a simplified example say I have a list of individual filesystem 
directories with FK pointing to their parent:

PRAGMA foreign_keys=ON;
   "DirName" CHAR,
INSERT INTO "Dirs" VALUES(0,'root',0);
INSERT INTO "Dirs" VALUES(4,'A1',1);
INSERT INTO "Dirs" VALUES(5,'A2',1);
INSERT INTO "Dirs" VALUES(6,'A3',1);
INSERT INTO "Dirs" VALUES(7,'B1',2);
INSERT INTO "Dirs" VALUES(8,'B2',2);
INSERT INTO "Dirs" VALUES(9,'C1',3);
INSERT INTO "Dirs" VALUES(10,'X1',3);
INSERT INTO "Dirs" VALUES(11,'Y1',10);
INSERT INTO "Dirs" VALUES(12,'Z1',11);

The depth of directories is (essentially) unbounded, just like is 
actual filesystems.

I also have a list of files with FK pointing to their hosting directory:

   "FileName" CHAR NOT NULL,
INSERT INTO "Files" VALUES(1,'aaa1',1);
INSERT INTO "Files" VALUES(2,'aaa2',1);
INSERT INTO "Files" VALUES(3,'bbb1',2);
INSERT INTO "Files" VALUES(4,'bbb2',2);
INSERT INTO "Files" VALUES(5,'bbb3',2);
INSERT INTO "Files" VALUES(6,'ccc1',3);
INSERT INTO "Files" VALUES(7,'zzz1',12);

Until now everything looks pretty simple, right?

What I want to obtain is the list of all files (in random order, but 
that's not the point) containing:
- the directory path from root, using some kind of group_concat(dir, '/')
- other columns from table Files not mentioned in the example above.

I've a simple WITH RECURSIVE view able to get the wanted data, but only 
for a given FileID (literally fixed in both the recursive clause and the 
final select).

My myopia is how can I write a recursive clause (to group directories 
from bottom up) which refers to a varying starting repository directory.

I guess there must be a clever join needed but how and where?

Note that I don't want a complete list of hierarchical directories and 
files starting from root: just actual files with their path in natural form.

TIA for your advice.

Re: [sqlite] Feature Request - RowCount

2014-12-14 Thread Jean-Christophe Deschamps

Hi Simon,

A) In that sqlite_sequence table you mentioned, as an additional 
column.  Always up-to-date.

But sqlite_sequence isn't always created. AFAIK it only exists when one 
or more tables exist with an INTEGER PRIMARY KEY AUTOINCREMENT column.

B) In the tables prepared by SQLite ANALYZE.  If you want the rowcount 
updated, do another ANALYZE.

But then the row count isn't always up-to-date and this would be 
essentially no different from requesting count(*) as per now.

An extra optional sqlite_rowcount table costs so little that I don't 
see a motivation to put row count data elsewhere. Also keeping it 
separate doesn't preclude reading the DB with a previous version of SQLite.

It's just a $0.02 idea of course.


Re: [sqlite] Feature Request - RowCount

2014-12-14 Thread Jean-Christophe Deschamps

At 03:14 14/12/2014, you wrote:
I take the point that the only possible improvements seem to need 
alteration to the file structure or added maintenance which may use up 
cycles for something that just isn't that important to DB use in 
general - and I have to agree, I too have zero want for seeing more 
cycles used.


I too generally haven't had a use for instant row counting of 
massive tables on a routine basis. It seems that currently the most 
transparent way to have an instant row count is to use triggers, which 
are well known to slow things down significantly, or to rely on 
application code to monitor counts and store them in another table.
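For illustration, here is a minimal sketch of the trigger approach mentioned above, driven from Python's built-in sqlite3 module; the table and trigger names (t, rowcount, t_ins, t_del) are made up for the example.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t (x);
-- One counter row per tracked table.
CREATE TABLE rowcount (tbl TEXT PRIMARY KEY, n INTEGER);
INSERT INTO rowcount VALUES ('t', 0);
-- Keep the counter in sync on every insert and delete.
CREATE TRIGGER t_ins AFTER INSERT ON t BEGIN
  UPDATE rowcount SET n = n + 1 WHERE tbl = 't';
END;
CREATE TRIGGER t_del AFTER DELETE ON t BEGIN
  UPDATE rowcount SET n = n - 1 WHERE tbl = 't';
END;
""")

con.execute("INSERT INTO t VALUES (1)")
con.execute("INSERT INTO t VALUES (2)")
con.execute("DELETE FROM t WHERE x = 1")
n = con.execute("SELECT n FROM rowcount WHERE tbl = 't'").fetchone()[0]
print(n)  # 1
```

This is exactly the per-row overhead the post complains about: every insert or delete now also pays for an UPDATE of the counter table.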

Without using slow triggers or changing the v3 file format there is 
still another possibility which could be implemented relatively easily. 
All it would need is a new pragma (or internal function) like "pragma 
row_count=0/1" and some code.

On invocation, the engine would create a hidden "system" table like 
sqlite_rowcount (similar to sqlite_sequence) which would initially 
hold row counts for every table in the DB.

When parsing the schema, the mere existence of this table would trigger 
the update of this table in subsequent operations.

One can even imagine that it could be possible to remove entries from 
this table corresponding to tables which don't need an instant row count.

Disabling the feature would be as simple as negating the pragma or 
dropping the table.

This way, no operation would be significantly penalized unless this 
feature is actually used and the file format would remain identical.

Would this fit the v3 bill?


Re: [sqlite] Will someone be able to explain this weird outcome...

2014-10-10 Thread Jean-Christophe Deschamps

select 7,915 - 5,021

is parsed as

select 7,  915 - 5,  021

and returns 7 | 910 | 21, just like

select 'a', 915 - 5, 'b'
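In other words, the comma is a column separator, and SQLite's tokenizer accepts leading zeros on integer literals (021 is just 21). This is easy to confirm from Python's built-in sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Parsed as three result columns: 7, then 915 - 5, then 021.
row = con.execute("select 7,915 - 5,021").fetchone()
print(row)  # (7, 910, 21)
```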


Re: [sqlite] Crash when binding default column value

2014-09-24 Thread Jean-Christophe Deschamps

At 21:53 24/09/2014, you wrote:

> If the default cannot be represented exactly, its rounding will be the
> least of the difficulties.

Not always: in scientific applications I've had column values default 
to plenty of irrational numbers: fractions of pi, sqrt(2.0) etc.

My main gripe is when the value being read from a database isn't 
identical to the value written to the database. I've had problems in 
the past where conversion between double and text, e.g. sprintf() and 
atof(), isn't perfectly invertible even at 17+ significant figures. 
Perhaps this may only be an issue on some of the older platforms I've 
worked on or on traditional client/server architectures (with which 
I'm more familiar) where client and server can be different platforms. 
I haven't (yet) tested SQLite empirically, though I notice SQLite has 
its own atof() implementation sqlite3AtoF() which weakens my 
confidence that I'll get out exactly what float I put in represented 
as text. In my (admittedly limited) experience, IEEE754 
implementations are transferable, when endian-flipped as appropriate.

While it's only almost true that IEEE754 values (which flavor?) are 
"transferable", you still seem to bark up the wrong tree.

Like other contributors have already said, you definitely can't rely on 
a stored double value to represent exactly a real literal you supply. 
For instance, as shown below, 0.75 = 0.5 + 0.25 has a finite 
representation in binary fractions (1/2 + 1/4). On the contrary, 0.3 
doesn't enjoy the same property: it would need an infinite string of 
decreasing powers of 2 to represent, so its floating-point representation 
is inexact.

0.75 is represented exactly        in IEEE754 by X'3FE8000000000000'
0.3  is represented approximately  in IEEE754 by X'3FD3333333333333'

The finite set of reals (rational numbers, in fact) that IEEE754 can 
represent is vanishingly small compared to the set of reals, whatever 
finite number of bits the FP representation uses.
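The two representations above can be verified with a few lines of Python: struct.pack(">d", x) yields the big-endian IEEE754 double bytes, which read in hex exactly like SQLite's X'...' notation.

```python
import struct

def hexrep(x: float) -> str:
    # Big-endian 64-bit IEEE754 encoding, shown as SQLite-style hex.
    return struct.pack(">d", x).hex().upper()

print(hexrep(0.75))  # 3FE8000000000000 : exact (1/2 + 1/4)
print(hexrep(0.3))   # 3FD3333333333333 : rounded, infinite binary expansion
print(0.1 + 0.2 == 0.3)  # False, the classic symptom of that rounding
```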

Here's a real-world scenario where this could cause an issue (I've 
actually encountered this for real) Imagine an application writer 
who's instructed my library to use pi (say) as a column's default. 
When reading the value out of the database, they may want to test if 
the value equals the default...

One way to get rid of that: use the output of

which gives:


That value is now safe. 


Re: [sqlite] Divide by 0 not giving error

2014-09-16 Thread Jean-Christophe Deschamps

> This would mean that if ever an SQL statement encounters a divide by
> zero, the application will crash with no way to handle the situation
> gracefully, nor to locate the source of the problem.

Seriously, what are you talking about?  Why is there "no way to handle"
the error, gracefully otherwise?  How do you know there would be no way
to "locate the source of the problem"?

I imagine an error SQLITE_EMATH returned by sqlite4_step.  With some
care, perhaps the expression returning zero could be mentioned in the
error text.  I can't imagine how that would present a problem.

Yes, but raising an exception has been mentioned at some point in the 
discussion. I was just saying that doing so is pretty different from 
returning an error at function level.


Re: [sqlite] Divide by 0 not giving error

2014-09-16 Thread Jean-Christophe Deschamps

What the SQL standard calls an "exception" is not necessarily exactly
the same as an exception in other programming languages.


If SQLite were to change the division-by-zero handling, it could be
reported exactly like most other errors, by returning SQLITE_ERROR from
sqlite3_step().

It could as well deliver +/-Inf or NaN.

select 15 % 'abc' returns null as well.
Integer overflow could also raise some new SQLite-level error.
SQLite string functions could also raise new errors when invalid 
indices are provided, like substr("abc", 456, 17) which doesn't make 
any sense.

All in all I don't see any serious enough reason to change behaviors at 
this stage.
Complaints posted here about these error situations being kept silent 
are rather rare.
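For reference, the silent behaviors discussed in this thread can all be observed from Python's built-in sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Integer division by zero yields NULL, not an error.
r1 = con.execute("select 1/0").fetchone()[0]
# 'abc' casts to 0, so this is a modulo by zero: also NULL.
r2 = con.execute("select 15 % 'abc'").fetchone()[0]
# Out-of-range substr() silently returns an empty string.
r3 = con.execute("select substr('abc', 456, 17)").fetchone()[0]
print(r1, r2, repr(r3))  # None None ''
```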


Re: [sqlite] Divide by 0 not giving error

2014-09-16 Thread Jean-Christophe Deschamps

Dear forum,

MySQL does it too unless ERROR_FOR_DIVISION_BY_ZERO mode is enabled:

SQL Server does it too unless SET ARITHABORT is ON:

PostgreSQL doesn't do it, and that annoys some people:

> It's also unhelpful because the NULL can mask an error in the
> data or logic.  I hope SQL 4.0 will define it as an error instead.

I would rather it didn't because it's yet another thing to look for 
and trap in your code, and it's one that most people will not think of 
most of the time.  It expands the test suite.  It expands your software.

NULLs propagate harmlessly downstream.  You display or print them and 
they appear as something blank or "NULL" or otherwise weird.  If a 
user finds a field unexpectedly NULL, they can figure out what the 
source of the problem is and correct the data at their leisure.  The 
rest of the program continues to function.

There is another good reason why raising an exception would be a 
terrible choice. When SQLite is used as a shared library by some 
scripting language, there is *no* possibility to trap exceptions 
raised within the library.

This would mean that if ever an SQL statement encounters a divide by 
zero, the application will crash with no way to handle the situation 
gracefully, nor to locate the source of the problem.

I often see experienced people here completely disregard the contexts 
where SQLite is used this way and I find it is a form of myopia (or is 
that disdain?). Just because a share of users build applications in 
languages like C[++|#], Delphi, Python, Ruby, YouNameIt with SQLite 
statically linked or embedded in the language as a standard component, 
that shouldn't hide or dismiss different contexts which don't enjoy the 
same power.

Returning null may not be the best choice, but I don't see that changing 
now. Signed floating-point infinity or NaN could be considered; anything 
but an exception.

Errors crash the program.  Start it up again and it might just crash 
again.  Or it might crash again some unpredictable but inconvenient 
time in the future.  And as a user you can't put it right because each 
time you open the window where you can type the correct data in, the 
program crashes.  You need the help of the developer.  At 5:30pm the 
day before The Big Report is due.


BTW I often read here that "sqlite3_get_table is deprecated". This 
opinion is common but is very damaging to future uses in contexts 
where calling a shared library comes with a significant time penalty.
This API has been there for a very long time, is well tested and has 
proven reliability. Should it be removed anytime in the future, countless 
applications written in some scripting languages will have to replace 
it with a loop of multiple calls to the shared library, slowing down 
applications dramatically (like 20-fold or more).

Why penalize a share of users and not let this API live in the library 
forever, and why not offer its counterpart in SQLite v4?

In general I see SQLite developers take great care to widen the 
potential use contexts of SQLite, not limit them. Some attention is 
requested for what some may call "unusual environments", which are 
nonetheless many users' everyday reality.

Thank you.


Re: [sqlite] Insert optimization

2014-08-23 Thread Jean-Christophe Deschamps

How to improve the write speed then.

I've found that in most cases using "chained" (multi-row) insertion 
speeds things up dramatically:

insert into mytable (x, y, z) values (1, 2, 3), (4, 5, 6), (7, 8, 9), ...

If you insert K rows at a time this way, your total insert time can be 
cut by a significant factor based on K. Always wrap bulk inserts in a 
transaction.

Also building index after insertion is beneficial.
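A minimal sketch of the multi-row VALUES technique from Python's sqlite3 module; the table and data are made up, and note that multi-row VALUES requires SQLite 3.7.11 or later.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (x, y, z)")

rows = [(i, i + 1, i + 2) for i in range(0, 300, 3)]  # 100 rows

# Build one INSERT with a (?,?,?) group per row, and run it
# inside a single transaction ("with con" commits on success).
placeholders = ",".join(["(?,?,?)"] * len(rows))
flat = [v for r in rows for v in r]
with con:
    con.execute(f"INSERT INTO mytable (x, y, z) VALUES {placeholders}", flat)

n = con.execute("SELECT count(*) FROM mytable").fetchone()[0]
print(n)  # 100
```

Keep the batch size below the engine's bound-parameter limit (historically 999 variables per statement by default); beyond that, split into several batched statements within the same transaction.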


Re: [sqlite] Hexadecimal integer literals

2014-07-23 Thread Jean-Christophe Deschamps
Conversion of oversized hex into FP would break easily and prove 
hardly reproducible across many platforms. Helping out on some 
language fora, I observe daily how FP inaccuracy is a real-world 
problem in simple-looking code.

The only reasonable thing I can foresee is to treat hex as signed int64 
inside limits and as blobs outside int64 limits, all without a warning 
or error raised.

This way no new behavior is needed: what happens today when a hex blob 
appears somewhere? Just as when a column typed as INT holds a few 
blobs, does the index build/search bomb? No. Oversized hex literals 
would simply behave the same.
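For the record, when hex literals were eventually added (SQLite 3.8.6), the shipped behavior differs from this proposal on the oversize case: in-range hex parses as a signed 64-bit integer, but an oversized literal is rejected with a "hex literal too big" parse error rather than treated as a blob. A quick check:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# In-range hex literals parse as plain integers.
row = con.execute("SELECT typeof(0xFF), 0xFF").fetchone()
print(row)  # ('integer', 255)

# 17 hex digits: one bit past the int64 range, so it is a parse error.
too_big = False
try:
    con.execute("SELECT 0x10000000000000000")
except sqlite3.OperationalError:
    too_big = True
print(too_big)  # True
```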


Re: [sqlite] Is sqlite case-sensitive?

2014-05-18 Thread Jean-Christophe Deschamps

At 22:36 18/05/2014, you wrote:

The more I think of it, though, I think that the solution is as simple 
as converting all letters to lower(/upper) case and converting all 
whitespace to a space each, except for within matching [ ], " ", ' ' 
or ` `. After that, I can do a memcmp().

You're lucky that SQLite natively accepts Unicode schema names but is 
only case-insensitive over the 7-bit ASCII charset range, even when the 
ICU extension is built-in.

Requirement  R-26223-47623-19728-21792-34687-27643-09360-29816

Like other SQL identifiers, database names are case-insensitive. 
(source lang_naming.html, 
checked-by: tcl/e_resolve.test)

This requirement is ambiguous and misleading:

SELECT * FROM étude;  -- no such table étude
SELECT * FROM Étude;  -- works

(I hope my E and e with acute accents show up on your side)
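The asymmetry is easy to reproduce from Python's built-in sqlite3 module (the table names below are made up; "Étude" carries a non-ASCII initial):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE "Étude" (x)')
con.execute('CREATE TABLE "Demo" (x)')

# ASCII identifiers are case-folded: DEMO finds Demo.
con.execute("SELECT * FROM DEMO")

# Non-ASCII identifiers are not: the lowercase form does not match.
folded = True
try:
    con.execute('SELECT * FROM "étude"')
except sqlite3.OperationalError:
    folded = False
print(folded)  # False -- 'é' was not folded to 'É'
```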


Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-03 Thread Jean-Christophe Deschamps

At 21:35 03/03/2014, you wrote:
RAID3-4-5 was great when disks were expensive, in 80's an 90's. Now 
not. A minimal RAID5 needs 3 disks. A minimal RAID10 4. An enterprise 
disk SAS 15Krpm 146 GB 6G is $350, and a not enterprise grade cheaper 
and bigger. Now RAID1E and RAID10E give more flexibility and variable 
security, from "paranoid" to "i don't care" grades.


The point being discussed was not performance or cost, but the 
imaginary claim that RAID5-6 and variations have an inherent, by-design 
fatal flaw: that they break down because a parity block can be out of 
sync with corresponding data blocks. This is bullshit, period.

Nothing in RAID5-6 design mandates serialization of writes, by far. 
It's only when cheap, unreliable hardware is put to work under below 
par software that the issue can be a real-world problem.

So the rant against the design of parity-enabled RAIDs is moot, if not 
plain fallacious, unless "software RAID without dedicated controller" is 
clearly mentioned.

About SAS disks: they have actual very high reliability and don't lie, 
contrary to SATA disks (on both points).

This is not a war about mine being bigger, but it's better to have 
facts stated right. All high-end reliable machines and storage 
subsystems run parity-enabled RAID levels and this technology 
isn't going to disappear tomorrow.


Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-03 Thread Jean-Christophe Deschamps

It's how RAID5 works. Check this page docs about it.

This is utter BS.

Serious RAID controllers perform parallel I/O on as many drives as 
make up a given array. Of course I'm talking of SAS drives here, 
with a battery-backed controller.

Kid sister RAID5-6 implementations using SATA drives and no dedicated 
hardware are best avoided and have more drawbacks than are listed in 
cited prose.

I run 24/7 an Areca 1882i controller with 6 SAS 15Krpm drives in RAID6 
and a couple more in RAID1 and I've yet to witness any problem whatsoever.

It's just like talking BS on a language because of some obscure bug in 
a non-conformant compiler. 


Re: [sqlite] How should I use parenthesis?

2014-01-28 Thread Jean-Christophe Deschamps

Perhaps you want:

select * from (
   select * from (select * from A where x in (subselectA))
   left outer join (
  select * from B where y in (subselectB)
   ) as b
   on -- something
) as A1

union all

select * from (
   select * from (select * from B where y in (subselectC))
   left outer join (
  select * from A where x in (subselectD)
   ) as a
   on -- something
) as B1

JOIN takes tables (or table-like objects) as operands.  SELECT does not
yield a table unless it's wrapped in parenthesis.

(I generally put UNION in uppercase to make it stand out, lest on a
quick scan it seem like two separate statements.)

Yes, that's it. In fact the actual statement is much more complex than 
the sketch I typed, where I forgot to type the WHERE condition and the 
outer SELECT. And I got lost in nesting the parentheses when 
"simplifying" (i.e. emasculating) my own example.

Thanks for the answers and sorry for the noise. I now have both eyes 
wide open...
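For archive readers, here is a minimal runnable version of the wrap-each-join-in-a-subselect pattern discussed above; the tables A and B, their contents, and the IN lists are all made up for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE A (x, v);
CREATE TABLE B (y, w);
INSERT INTO A VALUES (1, 'a1'), (2, 'a2');
INSERT INTO B VALUES (1, 'b1'), (3, 'b3');
""")

# Each join is wrapped in a parenthesized subselect, so the UNION ALL
# combines two plain SELECT statements, which the grammar accepts.
rows = con.execute("""
SELECT * FROM (SELECT A.x, b.w FROM A LEFT OUTER JOIN
               (SELECT * FROM B WHERE y IN (1)) AS b ON A.x = b.y)
UNION ALL
SELECT * FROM (SELECT B.y, a.v FROM B LEFT OUTER JOIN
               (SELECT * FROM A WHERE x IN (3)) AS a ON B.y = a.x)
""").fetchall()
print(rows)
```

The result has four rows: the two left joins contribute (1, 'b1') and (2, NULL) from the first branch, and (1, NULL) and (3, NULL) from the second.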


[sqlite] How should I use parenthesis?

2014-01-27 Thread Jean-Christophe Deschamps

Dear list,

I'm trying to find the correct syntax for this, but I hit a syntax 
error each time: either SQLite chokes on the outer parentheses or on the 
UNION ALL.

select * from A where x in (subselectA)
left outer join
select * from B where y in (subselectB)

union all

(  -- <-- error
select * from B where y in (subselectC)
left outer join
select * from A where x in (subselectD)

UNION [ALL], EXCEPT, INTERSECT don't seem to accept parentheses around 
the left or right parts.

Of course a workaround could be to create two views and then UNION ALL 
them, but there must be a way to express this construct in a single 
statement.

I don't want the statement to be interpreted this way:

select * from A where x in (subselectA)
left outer join
select * from B where y in (subselectB)
union all
select * from B where y in (subselectC)
left outer join
select * from A where x in (subselectD)



Re: [sqlite] Concrete example of corruption

2013-12-11 Thread Jean-Christophe Deschamps

At 08:45 11/12/2013, you wrote:

My case wasn't about messing anything up, but simply moving a document.

Just like moving the canvas while the artist is painting? Or is it 
moving the notebook while the schoolkid writes on it?

This is prone to produce dribbles or worse, in my view. I fail to see 
what the canvas/notebook/brush/pen makers could do to prevent such misuse.

The fact that some filesystems of Unix-like OSes allow moving or 
deleting files while in use doesn't make it a sensible behavior in most 
other systems SQLite has been ported to. It is rather up to the 
developers of applications running on such ill-behaved systems to 
prevent disaster caused by malicious users.

This is similar to backing up files while applications use them. 
Unless the OS has special provisions to deal with the situation (e.g. 
versioning), you can't guarantee that every file will be backed up in a 
consistent state.



Re: [sqlite] Intended use case for 'without rowid'?

2013-11-17 Thread Jean-Christophe Deschamps

How so? An AFTER INSERT trigger has the value in new.yourPKColumn. Can 
you show a scenario where this is insufficient, and a hypothetical 
last_insert_primary_key would save the day?

You're correct, there is no situation where such a thing would make sense. 


Re: [sqlite] Intended use case for 'without rowid'?

2013-11-15 Thread Jean-Christophe Deschamps

At 15:45 15/11/2013, you wrote:

last_insert_rowid() is needed for autoincremented keys.

WITHOUT ROWID tables do not have an autoincrementing key.  Your program
has to generate or get the key in some other way, so it knows the value
even before the record is inserted.

last_insert_primary_key would nevertheless be very useful in triggers.
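The underlying point, that last_insert_rowid() simply isn't updated by inserts into WITHOUT ROWID tables, is easy to verify; the schema below is made up for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE normal (id INTEGER PRIMARY KEY, v)")
con.execute("CREATE TABLE wr (k TEXT PRIMARY KEY, v) WITHOUT ROWID")

con.execute("INSERT INTO normal (v) VALUES ('a')")
rid1 = con.execute("SELECT last_insert_rowid()").fetchone()[0]
print(rid1)  # 1 -- the auto-chosen rowid of the ordinary table

con.execute("INSERT INTO wr VALUES ('key1', 'b')")
rid2 = con.execute("SELECT last_insert_rowid()").fetchone()[0]
print(rid2)  # still 1 -- the WITHOUT ROWID insert didn't touch it
```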



Re: [sqlite] collation for german "Umlaute"

2013-10-31 Thread Jean-Christophe Deschamps

Search the Internet for an SQLite extension called "unifuzz.c" and see
if that does what you want in the way of character folding.  I have a
copy of the code on my other computer if you cannot find the original
authors original code.

Here is the download link:

It is a very bad (yes!) collation collection, since none of them match 
any known locale collation requirement. If you need to collate German 
only but correctly, look at ICU.

What it offers is a set of locale-independant collations and functions, 
plus a bit more.

I wrote it after realizing there was no good way to collate and fuzzy 
search text from various languages intermixed in the same column, for 
instance (we had customers in 49 countries and suppliers from 10.) 
Since it's impossible to collate things correctly and simultaneously 
for several languages, the best was to case fold and/or unaccent data 
in the least damaging way.

The (largish) code has provision for dealing with the German eszet and 
several unique characters. I know it's being used in a number of 
countries, not all latin. It uses custom v5.1 Unicode tries in circa 180kb.

It also offers collation for Unicode digits (not only 0-9) and a good 
share of other string functions.

It is currently Windows-bound since it uses one Windows function, but 
I'm sure it can be made to work under Linux as well. Be sure to read 
the lengthy comment at top of the source before using or deciding it's 

Please drop me a note if you find it useful or discover bugs. Feel free 
to use and abuse the code, but please don't release a distinct version 
under the same name.



Re: [sqlite] racing with date('now') (was: Select with dates)

2013-09-16 Thread Jean-Christophe Deschamps

There are other ways to get a consistent 'now' across an entire SQL

Of course!

This issue is sufficiently obscure that it is not worth adding (and 
documenting and supporting forever) yet another pragma.

I personally wouldn't call it so "obscure", as it pops up regularly with 
untold consequences in a scripting language I use, but it's part of the 
programmer's responsibility after all. Not a deal breaker anyway.



Re: [sqlite] racing with date('now') (was: Select with dates)

2013-09-16 Thread Jean-Christophe Deschamps

There are perhaps 2 million applications in the wild that use SQLite, 
so it will be difficult to check them all. But one can easily imagine 
that one or more of those two million applications does something like this:

 SELECT current_timestamp, * FROM tab;

And then runs sqlite3_step() every five or ten seconds in a background
process to fetch a new row, and expects the timestamp on each row to
reflect the actual time of retrieval from disk.  Causing 'now' to mean
exactly the same time for an entire SQL statement would break such applications.

As a compromise, the current SQLite trunk causes 'now' to be exactly the
same for all date and time functions within a single sqlite3_step() call.

Perhaps a new connection-wide "pragma freeze_now=ON;" which would 
freeze 'now' at the beginning of the next statement and unfreeze it at 
the next auto-commit or when processing a "pragma freeze_now=OFF;" 
would solve another part of the race issue.

Applications relying on the old behavior don't change and new ones 
where the possibility of race is dangerous can avoid it at little cost.

Still some multi-threaded apps would need to consider the implications, 
but we all know what you Richard think about threads!


