[sqlite] Sqlite incompatibility with Postgres

2016-05-15 Thread J Decker
On Sun, May 15, 2016 at 10:02 PM, Darren Duncan  
wrote:
> On 2016-05-15 9:56 PM, J Decker wrote:
>>
>> On Sun, May 15, 2016 at 9:29 PM, Darren Duncan 
>> wrote:
>>>
>>> On 2016-05-15 12:35 AM, Simon Slavin wrote:


 All true.  But it brings up a question.  Suppose the following:

 first   second
 -----   ------
 Mark    Spark
 Emily   Spark
 Mary    Soper
 Brian   Soper

 SELECT first,second FROM members ORDER BY second LIMIT 3

 Without looking up either a standards document for SQL or the
 documentation for your favourite implementations of SQL, answer this
 question:

 Does the documentation for your favourite implementation of SQL state
 that
 you'll get the same rows every time you execute the above "SELECT" ?
>>>
>>>
>>> I think a proper solution for this then is to treat the LIMIT as
>>> approximate
>>> rather than exact; it indicates a desire rather than a promise.
>>>
>>> In the scenario you describe, the query should return either 2 rows or 4
>>> rows, so that ALL of the rows whose second field value is "Spark" are, or
>>> are not, returned.  Projecting this to there being no ORDER BY clause,
>>> either all rows are returned or zero rows are returned.  Thus the result
>>> is deterministic.
>>
>>
>> even if it did for 'spark' rows (which either case you suggest would
>> be horrible) 'soper' would still be non-deterministic, and rebuilding
>> indexes could reorder the results.
>
>
> No, it is still deterministic.
>
> The ORDER BY clause specified a partial order of the results, not a total
> order.
>
> What I specified returns only complete groups of rows where within each
> group the rows are unordered but the groups as a whole are ordered relative
> to each other.
>
> The fact this is deterministic would probably be more clear if the result
> rows were nested, one outer row per "group" that I mentioned.  But even if
> not, the caller knew that they were only ordering by second but selecting
> first, so if they see multiple rows with the same second value, they know
> that those rows are not sorted between themselves, only that rows with
> different second values are sorted relative to each other.
>
> So fully deterministic.
>

'SELECT first,second FROM members ORDER BY second LIMIT 3' (that's
MySQL format, right?)

I don't see a full set as a requirement (such that the output would be
2 or 4 records and not the 3 I asked for...).  The query implies 3
rows, not 3 sets.

SELECT first,second FROM members ORDER BY second LIMIT 3,3 (for the
next 3 lines I'm displaying on a form for instance)


And since the result set includes a first name, the result sets
taken as a whole are not guaranteed equal (procedurally and in
practice they may be, but pessimistically...).
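For what it's worth, the ambiguity is easy to see in practice, and adding the selected column as a tie-breaker restores a total order and a stable LIMIT. A small sketch using Python's built-in sqlite3 module (the table is reconstructed from the example above, not from any real schema):

```python
import sqlite3

# Reconstruct the four-row members table from the thread.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE members (first TEXT, second TEXT)")
conn.executemany("INSERT INTO members VALUES (?, ?)",
                 [("Mark", "Spark"), ("Emily", "Spark"),
                  ("Mary", "Soper"), ("Brian", "Soper")])

# ORDER BY second alone leaves each surname group internally unordered, so
# LIMIT 3 may return either 'Spark' row.  Adding 'first' as a tie-breaker
# produces a total order, so the same three rows come back every time.
rows = conn.execute(
    "SELECT first, second FROM members ORDER BY second, first LIMIT 3"
).fetchall()
print(rows)  # [('Brian', 'Soper'), ('Mary', 'Soper'), ('Emily', 'Spark')]
```

With the tie-breaker the query is deterministic regardless of index rebuilds or insertion order.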



>
> -- Darren Duncan
>
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Any way to do inexpensive SQLite database/table versioning? (I.e. cheap snapshotting & jumping between)

2016-05-15 Thread Mikael
2016-05-15 22:23 GMT+07:00 Mikael :
..

> Aha.
>
> Do you see any way that I could implement *one* layer of branches atop of
> this?
>
> Say 1000 or 10^30 of them.
>
>

Ah, make the selects for the timestamp OR the branchid.


[sqlite] Any way to do inexpensive SQLite database/table versioning? (I.e. cheap snapshotting & jumping between)

2016-05-15 Thread Mikael
2016-05-15 22:14 GMT+07:00 Simon Slavin :

>
> On 15 May 2016, at 4:02pm, Mikael  wrote:
>
> > Simon, yes using only INSERT and DELETE in total would be fine with me
> (and
> > perhaps BEGIN and COMMIT if it's OK).
> >
> > What are you thinking about?
> >
> > Did you see a purely SQL-based solution that I don't?
>
> BEGIN and COMMIT make no changes to stored data so you can ignore them for
> these purposes.
>
> Add a "timestamp" column to your table.  When you INSERT a row, have this
> default to the current time:
>
> , timestamp TEXT DEFAULT CURRENT_TIMESTAMP, ...
>
> Add a "deleted" column to your table, defaulting to NULL:
>
> , deleted TEXT DEFAULT NULL, ...
>
> When you delete or overwrite a row, instead of actually using the DELETE
> command, replace the NULL in this column with the time the row was deleted.
>
> UPDATE myTable SET deleted=CURRENT_TIMESTAMP WHERE rowid=1234
>
> You now have a way of tracking when each row was created and, if it has
> been deleted, when it was deleted.  Creative use of a WHERE clause in your
> SELECT commands will allow you to pick out only the rows which exist at a
> certain time.
>
> If you have defined a clever primary key rather than just using a rowid,
> you will need to think through adjustments to your schema and programming
> accordingly.
>
> Simon.
>


Aha.

Do you see any way that I could implement *one* layer of branches atop of
this?

Say 1000 or 10^30 of them.


[sqlite] 64bit DLL vs 32bit

2016-05-15 Thread Bart Smissaert
So, is there any way that a 32 bit VB6 ActiveX dll or a 32 bit VB6 ActiveX
exe could access a 64 bit SQLite dll?
All this only comes into play for me when using 64 bit VBA in Excel.
I have no problem accessing the 64 bit SQLite dll from 64 bit VBA.

RBS

On Sun, May 15, 2016 at 7:12 PM, Richard Damon 
wrote:

> On 5/15/16 1:00 AM, dandl wrote:
>
>> But I think if you compile code for the x64 processor chip and call it
>> from x86 or vice versa then either it doesn't work or you pay a high
>> price for thunking from one to the other. I think that's unavoidable
>> regardless of OS.

>>> Right: doesn't work.  There's no performance penalty because there's no
>>> 32-64 bit thunking layer.
>>>
>>>
>>> https://blogs.msdn.microsoft.com/oldnewthing/20081020-00/?p=20523
>>>
>> An interesting post, but not I think from someone with a deep
>> understanding
>> of the matter.
>>
>> IMHO it would be perfectly possible to thunk either way, within the
>> limitations of a 4GB address space. The WOW64 layer already provides the
>> means for x86 apps to call the x64 Windows API. A thunk can do anything,
>> unless it's physically impossible or blocked by policy.
>>
>> Regards
>> David M Bennett FACS
>>
>> Andl - A New Database Language - andl.org
>>
>> You can thunk an arbitrary 32 bit module to 64 bit code because the
> memory accessible by the 32 bit code is all accessible by the 64 bit code.
> The converse doesn't work. Either the 64 bit code needs to make sure it
> puts the data in the lower 4GB, or the thunk layer needs to know enough
> about the data to copy it.
>
> This works for WOW64, as the 64 bit side is code designed to make it work
> with the thunking layer.
>
> For arbitrary 64 bit code, the thunking layer won't know enough about what
> needs to be copied into accessible memory.
>
> --
> Richard Damon
>
>
>


[sqlite] Sqlite incompatibility with Postgres

2016-05-15 Thread Darren Duncan
On 2016-05-15 9:56 PM, J Decker wrote:
> On Sun, May 15, 2016 at 9:29 PM, Darren Duncan  
> wrote:
>> On 2016-05-15 12:35 AM, Simon Slavin wrote:
>>>
>>> All true.  But it brings up a question.  Suppose the following:
>>>
>>> first   second
>>> -----   ------
>>> Mark    Spark
>>> Emily   Spark
>>> Mary    Soper
>>> Brian   Soper
>>>
>>> SELECT first,second FROM members ORDER BY second LIMIT 3
>>>
>>> Without looking up either a standards document for SQL or the
>>> documentation for your favourite implementations of SQL, answer this
>>> question:
>>>
>>> Does the documentation for your favourite implementation of SQL state that
>>> you'll get the same rows every time you execute the above "SELECT" ?
>>
>> I think a proper solution for this then is to treat the LIMIT as approximate
>> rather than exact; it indicates a desire rather than a promise.
>>
>> In the scenario you describe, the query should return either 2 rows or 4
>> rows, so that ALL of the rows whose second field value is "Spark" are, or
>> are not, returned.  Projecting this to there being no ORDER BY clause,
>> either all rows are returned or zero rows are returned.  Thus the result is
>> deterministic.
>
> even if it did for 'spark' rows (which either case you suggest would
> be horrible) 'soper' would still be non-deterministic, and rebuilding
> indexes could reorder the results.

No, it is still deterministic.

The ORDER BY clause specified a partial order of the results, not a total order.

What I specified returns only complete groups of rows where within each group 
the rows are unordered but the groups as a whole are ordered relative to each 
other.

The fact this is deterministic would probably be more clear if the result rows 
were nested, one outer row per "group" that I mentioned.  But even if not, the 
caller knew that they were only ordering by second but selecting first, so if 
they see multiple rows with the same second value, they know that those rows are 
not sorted between themselves, only that rows with different second values are 
sorted relative to each other.

So fully deterministic.

-- Darren Duncan



[sqlite] Any way to do inexpensive SQLite database/table versioning? (I.e. cheap snapshotting & jumping between)

2016-05-15 Thread Mikael
2016-05-15 21:56 GMT+07:00 Simon Slavin :

>
> On 15 May 2016, at 3:52pm, Mikael  wrote:
>
> > Would there be any facility whereby after each transaction I do on a
> > database or table, I could somehow make a snapshot so that at any future
> > point in time, I could easily do a SELECT to a given version/snapshot?
>
> Do you need a solution which works for any possible SQL commands, or are
> you in a situation where you execute only some sorts of commands?
>
> For example, it is far simpler to solve this if you only execute INSERT
> and DELETE, never UPDATE.
>
> Simon.


Simon, yes using only INSERT and DELETE in total would be fine with me (and
perhaps BEGIN and COMMIT if it's OK).

What are you thinking about?

Did you see a purely SQL-based solution that I don't?


Also, having a "snapshot *tree*" rather than one linear ladder
would be cool, but I guess any insights are valuable now.


[sqlite] Clarify in docs that PRAGMA integrity_check also checks that index content matches table content?

2016-05-15 Thread Mikael
Hi,

The documentation for "PRAGMA quick_check" (
http://www.sqlite.org/pragma.html#pragma_quick_check) points out that
"PRAGMA integrity_check" will match index content with table content.

http://www.sqlite.org/pragma.html#pragma_integrity_check , which is the
actual documentation for that PRAGMA, does not document that however.

What about writing it there, for everyone to understand how robust "PRAGMA
integrity_check" really is?


(Without that clarification, it looks like it'll just check database format
correctness and the relative consistency of UNIQUE columns.)

Now that we're talking about it, does "PRAGMA integrity_check" do anything
even more beyond the abovementioned?

Thanks


[sqlite] Sqlite incompatibility with Postgres

2016-05-15 Thread J Decker
On Sun, May 15, 2016 at 9:29 PM, Darren Duncan  
wrote:
> On 2016-05-15 12:35 AM, Simon Slavin wrote:
>>
>> On 15 May 2016, at 6:04am, Darren Duncan  wrote:
>>
>>> You seem to be forgetting the fact that LIMIT/OFFSET is not its own
>>> clause, rather it is an extension to the ORDER BY clause and only has
>>> meaning within the context of the ORDER BY it is part of.
>>
>>
>> All true.  But it brings up a question.  Suppose the following:
>>
>> first   second
>> -----   ------
>> Mark    Spark
>> Emily   Spark
>> Mary    Soper
>> Brian   Soper
>>
>> SELECT first,second FROM members ORDER BY second LIMIT 3
>>
>> Without looking up either a standards document for SQL or the
>> documentation for your favourite implementations of SQL, answer this
>> question:
>>
>> Does the documentation for your favourite implementation of SQL state that
>> you'll get the same rows every time you execute the above "SELECT" ?
>
>
> I think a proper solution for this then is to treat the LIMIT as approximate
> rather than exact; it indicates a desire rather than a promise.
>
> In the scenario you describe, the query should return either 2 rows or 4
> rows, so that ALL of the rows whose second field value is "Spark" are, or
> are not, returned.  Projecting this to there being no ORDER BY clause,
> either all rows are returned or zero rows are returned.  Thus the result is
> deterministic.

even if it did for 'spark' rows (which either case you suggest would
be horrible) 'soper' would still be non-deterministic, and rebuilding
indexes could reorder the results.

>
> Whether returning above or below the limit is done, is a separate thing to
> decide, though I suggest returning above is better.
>
> -- Darren Duncan
>
>


[sqlite] Any way to do inexpensive SQLite database/table versioning? (I.e. cheap snapshotting & jumping between)

2016-05-15 Thread Mikael
Hi!

Would there be any facility whereby after each transaction I do on a
database or table, I could somehow make a snapshot so that at any future
point in time, I could easily do a SELECT to a given version/snapshot?


Any solution based purely on SQL would be extremely expensive I guess (e.g.
introduce columns for snapshot index and deletedness).

Implementing my own VFS would be a good way I guess, but also extremely
complex.

Thoughts?

Thanks!


[sqlite] Sqlite incompatibility with Postgres

2016-05-15 Thread Darren Duncan
On 2016-05-15 12:35 AM, Simon Slavin wrote:
> On 15 May 2016, at 6:04am, Darren Duncan  wrote:
>
>> You seem to be forgetting the fact that LIMIT/OFFSET is not its own clause, 
>> rather it is an extension to the ORDER BY clause and only has meaning within 
>> the context of the ORDER BY it is part of.
>
> All true.  But it brings up a question.  Suppose the following:
>
> first   second
> -----   ------
> Mark    Spark
> Emily   Spark
> Mary    Soper
> Brian   Soper
>
> SELECT first,second FROM members ORDER BY second LIMIT 3
>
> Without looking up either a standards document for SQL or the documentation 
> for your favourite implementations of SQL, answer this question:
>
> Does the documentation for your favourite implementation of SQL state that 
> you'll get the same rows every time you execute the above "SELECT" ?

I think a proper solution for this then is to treat the LIMIT as approximate 
rather than exact; it indicates a desire rather than a promise.

In the scenario you describe, the query should return either 2 rows or 4 rows, 
so that ALL of the rows whose second field value is "Spark" are, or are not, 
returned.  Projecting this to there being no ORDER BY clause, either all 
rows are returned or zero rows are returned.  Thus the result is deterministic.

Whether returning above or below the limit is done, is a separate thing to 
decide, though I suggest returning above is better.

-- Darren Duncan



[sqlite] Podcast with Dr Hipp: SQLite history, success and funding

2016-05-15 Thread dandl
> > 1. Why SQLite is popular.
> 
> The answers to those questions mentioned in the podcast may be good ones but I
> think the main reason is that it's free.  Completely, unmistakably, free.

Necessary but not sufficient.

It's free, and the licence is as non-restrictive as it is possible to be.

> You could make many changes to SQLite and people would continue to use it but
> the thing that would decrease its usage fastest would be to charge for it.

Imposing licence conditions would come a close second. I'm not going to
mention GPL (or AGPL) but there are many conditions found in licence
agreements that run more than a para or two that would make it impossible to
use in particular applications.

Free AND non-restrictive licence is the killer combo for getting software
used, especially when it can be embedded.

> Nobody seems to mention this as an answer to that question.

Nobody wants to speak ill of more restrictive licence agreements either.

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org







[sqlite] using cerod/sqlite with JDBC

2016-05-15 Thread Tal Tabakman
Hi
is there a way to open an SQLite DB, compressed with CEROD, using Java's
SQLite JDBC connection?
thanks
Tal


[sqlite] Any way to do inexpensive SQLite database/table versioning? (I.e. cheap snapshotting & jumping between)

2016-05-15 Thread Dominique Devienne
On Sunday, May 15, 2016, Mikael  wrote:
>
> Would there be any facility whereby after each transaction I do on a
> database or table, I could somehow make a snapshot so that at any future
> point in time, I could easily do a SELECT to a given version/snapshot?


WAL basically does this already. But only in a transient manner.
If you could retain the WAL file and never truncate it, you could.
That's basically how Oracle does Flashback queries.
But you also need an SQL extension to say AS OF TIMESTAMP or AS OF SCN.
Can also be at the API level (OCISnapshot* arg in OCI). --DD


[sqlite] Illegal hexadecimal number literal

2016-05-15 Thread gwenn
Hello,

SQLite version 3.8.10.2 2015-05-20 18:17:19
sqlite> select 0x1g;
1
sqlite> select 1g;
Error: unrecognized token: "1g"

The illegal number 0x1g is not rejected.

Maybe, in tokenize.c, this block should not return directly but should first
check that z[i] is not an IdChar:
  if( z[0]=='0' && (z[1]=='x' || z[1]=='X') && sqlite3Isxdigit(z[2]) ){
for(i=3; sqlite3Isxdigit(z[i]); i++){}
return i;
  }

Regards.


[sqlite] Computed column or create index on a view ?

2016-05-15 Thread Jean-Luc Hainaut

- No index on a view in SQLite (so far).

- A computed column can be maintained through appropriate triggers (here, "on 
insert" and "on update"). Efficient if you read data more than you modify them.

- Perhaps trying this: 
create table readings(...);
create index trg_cx on readings(timestamp,(20+(a+b)/(c+c)));

J-L Hainaut
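A rough sketch of the trigger-based approach above, using Python's sqlite3 module; the trigger and index names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE readings (
    timestamp TEXT PRIMARY KEY,
    a REAL, b REAL, c REAL,
    pressure REAL                      -- maintained by the triggers below
);

-- Recompute pressure on insert and whenever one of its inputs changes.
CREATE TRIGGER readings_ai AFTER INSERT ON readings BEGIN
    UPDATE readings SET pressure = 20 + (NEW.a + NEW.b) / (NEW.c + NEW.c)
    WHERE timestamp = NEW.timestamp;
END;

CREATE TRIGGER readings_au AFTER UPDATE OF a, b, c ON readings BEGIN
    UPDATE readings SET pressure = 20 + (NEW.a + NEW.b) / (NEW.c + NEW.c)
    WHERE timestamp = NEW.timestamp;
END;

-- With pressure stored, a plain index on it works.
CREATE INDEX r_tp ON readings (timestamp, pressure DESC);
""")

conn.execute("INSERT INTO readings (timestamp, a, b, c) "
             "VALUES ('t0', 1.0, 3.0, 1.0)")
p = conn.execute("SELECT pressure FROM readings "
                 "WHERE timestamp = 't0'").fetchone()[0]
print(p)  # 22.0  (= 20 + (1+3)/(1+1))
```

As noted, this trades write cost for read speed: every INSERT/UPDATE pays for the recomputation once, and every SELECT reads a plain stored column.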


>I know that SQLite does not currently implement these things but I'm curious 
>if anyone else wants them and how hard they would be to implement.
>
>I have what you might consider to be a computed column.  You might imagine
>
>CREATE TABLE readings
>(timestamp TEXT PRIMARY KEY,
>a REAL, b REAL, c REAL)
>
>and I constantly need to evaluate
>
>pressure = 20+(a+b)/(c+c)
>
>What I really want from SQLite is to support computed columns.  I don't really 
>care which syntax is used but perhaps
>
>CREATE TABLE readings
>(timestamp TEXT PRIMARY KEY,
>a REAL, b REAL, c REAL,
>(20+(a+b)/(c+c)) AS pressure)
>
>... or perhaps ...
>
>CREATE TABLE readings
>(timestamp TEXT PRIMARY KEY,
>a REAL, b REAL, c REAL,
>pressure = (20+(a+b)/(c+c)))
>
>One can then, of course, do
>
>CREATE INDEX r_tp ON readings (timestamp,pressure DESC)
>
>That's my ideal.  Second choice would be to be able to create an index on a 
>VIEW:
>
>CREATE TABLE readings
>(timestamp TEXT PRIMARY KEY,
>a REAL, b REAL, c REAL);
>CREATE VIEW r_t_p (timestamp,pressure)
>AS SELECT timestamp,(20+(a+b)/(c+c)) FROM readings;
>CREATE INDEX r_tp ON VIEW r_t_p (timestamp, pressure DESC)
>
>At the moment I have to simulate the above abilities by creating both the VIEW 
>and an index with a calculated column independently, and even after that I 
>have to do two fetches to get the row of data I need.
>
>Surely I'm not alone in thinking that since SQLite now implements expressions 
>in indexes computed columns are a natural addition to SQLite at this time ?
>
>Simon.



[sqlite] Podcast with Dr Hipp: SQLite history, success and funding

2016-05-15 Thread Tony Papadimitriou
>> > 1. Why SQLite is popular.
>>
>> The answers to those question mentioned in the podcast may be good ones 
>> but I
>> think the main reason is that it's free.  Completely, unmistakably, free.
>
>Necessary but not sufficient.
>It's free, and the license is as non-restrictive as it is possible to be.

Certainly true!  However, there are tons of free & liberally licensed 
software out there, many (most?) of which are failures in terms of public 
acceptance.

So, even these two alone do not seem to be entirely sufficient.

But SQLite has one greater attribute.  It comes with a proven commitment of 
EXCELLENT support & maintenance.  A true quality product.  Bugs are killed 
practically instantly after being discovered, and new features added on a 
regular basis.  You rarely get this kind of support even from paid software.

(Many open source projects have bugs waiting for months or years for someone 
to be bothered to fix, often driving people away!)

To sum it up, a big thanks to Richard and his team!



[sqlite] Clarify in docs that PRAGMA integrity_check also checks that index content matches table content?

2016-05-15 Thread Simon Slavin

On 15 May 2016, at 3:59pm, Mikael  wrote:

> What about writing it there, for everyone to understand how robust "PRAGMA
> integrity_check" really is?
> 
> 
> (Without that clarification, it looks like it'll just check database format
> correctness and the relative consistency of UNIQUE columns.)

It does check that the values stored in the index match the values stored in 
the table, and that the index does show the rows sorted in the right order.  It 
does a bunch of similar stuff relating to schema and primary keys.  It also 
checks a number of things stored in the database which you don't realise exist 
unless you've read the documentation on the file format.

If you want to know all it does you'll have to read the source code.

However, the normal note applies: if you absolutely need to know that your 
database is completely consistent (i.e. you're sending it to the Pluto Rover 
and retransmission is hideously expensive) then either perform a VACUUM or send 
the database as a text file of INSERT commands.

Simon.
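For reference, a minimal sketch of running the pragma from Python's sqlite3 module; on a healthy database it returns a single row containing 'ok':

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE INDEX t_name ON t (name)")
conn.executemany("INSERT INTO t (name) VALUES (?)", [("a",), ("b",)])

# integrity_check verifies the file format and b-tree structure and that
# every index entry matches its table row; success is exactly one 'ok' row.
result = conn.execute("PRAGMA integrity_check").fetchall()
print(result)  # [('ok',)]
```

On a corrupt database the pragma instead returns one row per problem found (up to its error limit), so checking for the single 'ok' row is the usual test.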


[sqlite] Any way to do inexpensive SQLite database/table versioning? (I.e. cheap snapshotting & jumping between)

2016-05-15 Thread Simon Slavin

On 15 May 2016, at 4:23pm, Mikael  wrote:

> Do you see any way that I could implement *one* layer of branches atop of
> this?

I suspect that branching would be harder to implement.  Storing a branch code 
in another column of the table isn't going to help much.  You'd need to create 
some kind of data structure within your database which reflects how your 
branches are related and then do even more work.  Maybe someone else has 
figured it out.

Simon.


[sqlite] Computed column or create index on a view ?

2016-05-15 Thread Simon Slavin

On 15 May 2016, at 4:08pm, Jean-Luc Hainaut  
wrote:

> - A computed column can be maintained through appropriate triggers (here, "on 
> insert" and "on update"). Efficient if you read data more than you modify 
> them.

Hmm.  Yes, you're right.  Your way of simulating it is better than my way of 
simulating it.  It just needs triggers on INSERT and UPDATE, and I can create 
an index on the table itself and I don't need a view.  That's neat.  Thanks.

Simon.


[sqlite] Any way to do inexpensive SQLite database/table versioning? (I.e. cheap snapshotting & jumping between)

2016-05-15 Thread Simon Slavin

On 15 May 2016, at 4:02pm, Mikael  wrote:

> Simon, yes using only INSERT and DELETE in total would be fine with me (and
> perhaps BEGIN and COMMIT if it's OK).
> 
> What are you thinking about?
> 
> Did you see a purely SQL-based solution that I don't?

BEGIN and COMMIT make no changes to stored data so you can ignore them for 
these purposes.

Add a "timestamp" column to your table.  When you INSERT a row, have this 
default to the current time:

, timestamp TEXT DEFAULT CURRENT_TIMESTAMP, ...

Add a "deleted" column to your table, defaulting to NULL:

, deleted TEXT DEFAULT NULL, ...

When you delete or overwrite a row, instead of actually using the DELETE 
command, replace the NULL in this column with the time the row was deleted.

UPDATE myTable SET deleted=CURRENT_TIMESTAMP WHERE rowid=1234

You now have a way of tracking when each row was created and, if it has been 
deleted, when it was deleted.  Creative use of a WHERE clause in your SELECT 
commands will allow you to pick out only the rows which exist at a certain time.

If you have defined a clever primary key rather than just using a rowid, you 
will need to think through adjustments to your schema and programming 
accordingly.

Simon.
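A minimal sketch of this scheme using Python's sqlite3 module; the table, column, and helper names are invented for illustration, and explicit timestamps stand in for CURRENT_TIMESTAMP to keep the example reproducible:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE myTable (
    id      INTEGER PRIMARY KEY,
    value   TEXT,
    created TEXT DEFAULT CURRENT_TIMESTAMP,
    deleted TEXT DEFAULT NULL          -- stamped instead of running DELETE
)""")

# Explicit timestamps keep the example deterministic.
conn.execute("INSERT INTO myTable VALUES (1, 'old', '2016-01-01 00:00:00', NULL)")
conn.execute("INSERT INTO myTable VALUES (2, 'new', '2016-03-01 00:00:00', NULL)")
# 'Delete' row 1 on 2016-02-01 by stamping it rather than removing it.
conn.execute("UPDATE myTable SET deleted='2016-02-01 00:00:00' WHERE id=1")

def rows_as_of(t):
    """Rows that existed at time t: created by then and not yet deleted."""
    return conn.execute(
        "SELECT value FROM myTable "
        "WHERE created <= ? AND (deleted IS NULL OR deleted > ?)",
        (t, t)).fetchall()

print(rows_as_of('2016-01-15 00:00:00'))  # [('old',)]
print(rows_as_of('2016-03-15 00:00:00'))  # [('new',)]
```

The WHERE clause is the "creative use" mentioned above: any past moment can be queried as long as rows are only ever stamped, never physically deleted or updated in place.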


[sqlite] Any way to do inexpensive SQLite database/table versioning? (I.e. cheap snapshotting & jumping between)

2016-05-15 Thread Simon Slavin

On 15 May 2016, at 3:52pm, Mikael  wrote:

> Would there be any facility whereby after each transaction I do on a
> database or table, I could somehow make a snapshot so that at any future
> point in time, I could easily do a SELECT to a given version/snapshot?

Do you need a solution which works for any possible SQL commands, or are you in 
a situation where you execute only some sorts of commands?

For example, it is far simpler to solve this if you only execute INSERT and 
DELETE, never UPDATE.

Simon.


[sqlite] Podcast with Dr Hipp: SQLite history, success and funding

2016-05-15 Thread Simon Slavin

On 15 May 2016, at 3:17pm, Scott Robison  wrote:

> True. Last December I received a notice that a patch I submitted to a Trac
> plugin had finally been accepted. Over seven years after I submitted it.

I once submitted a bug report to Apple.  Four months later it was acknowledged 
as a duplicate as one already in their database.  (You can't look up bugs other 
people have submitted so I couldn't follow its progress.).  Seven years later 
it still isn't fixed.

Simon.


[sqlite] Computed column or create index on a view ?

2016-05-15 Thread Simon Slavin
I know that SQLite does not currently implement these things but I'm curious if 
anyone else wants them and how hard they would be to implement.

I have what you might consider to be a computed column.  You might imagine

CREATE TABLE readings
(timestamp TEXT PRIMARY KEY,
a REAL, b REAL, c REAL)

and I constantly need to evaluate

pressure = 20+(a+b)/(c+c)

What I really want from SQLite is to support computed columns.  I don't really 
care which syntax is used but perhaps

CREATE TABLE readings
(timestamp TEXT PRIMARY KEY,
a REAL, b REAL, c REAL,
(20+(a+b)/(c+c)) AS pressure)

... or perhaps ...

CREATE TABLE readings
(timestamp TEXT PRIMARY KEY,
a REAL, b REAL, c REAL,
pressure = (20+(a+b)/(c+c)))

One can then, of course, do

CREATE INDEX r_tp ON readings (timestamp,pressure DESC)

That's my ideal.  Second choice would be to be able to create an index on a 
VIEW:

CREATE TABLE readings
(timestamp TEXT PRIMARY KEY,
a REAL, b REAL, c REAL);
CREATE VIEW r_t_p (timestamp,pressure)
AS SELECT timestamp,(20+(a+b)/(c+c)) FROM readings;
CREATE INDEX r_tp ON VIEW r_t_p (timestamp, pressure DESC)

At the moment I have to simulate the above abilities by creating both the VIEW 
and an index with a calculated column independently, and even after that I have 
to do two fetches to get the row of data I need.

Surely I'm not alone in thinking that since SQLite now implements expressions 
in indexes computed columns are a natural addition to SQLite at this time ?

Simon.
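Since SQLite (3.9 and later) does implement expressions in indexes, the index half of that workaround can at least be written directly today. A sketch with Python's sqlite3 module, with made-up sample values:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE readings (timestamp TEXT PRIMARY KEY, a REAL, b REAL, c REAL);
-- Expression index: the computed pressure lives in the index itself, so
-- queries ordering by the same expression can make use of it.
CREATE INDEX r_tp ON readings ((20 + (a + b) / (c + c)) DESC);
""")
conn.execute("INSERT INTO readings VALUES ('t0', 1.0, 3.0, 1.0)")  # pressure 22
conn.execute("INSERT INTO readings VALUES ('t1', 2.0, 2.0, 2.0)")  # pressure 21

rows = conn.execute(
    "SELECT timestamp, 20 + (a + b) / (c + c) AS pressure "
    "FROM readings ORDER BY pressure DESC"
).fetchall()
print(rows)  # [('t0', 22.0), ('t1', 21.0)]
```

What this does not give you is a named `pressure` column you can SELECT without repeating the expression, which is exactly the gap a true computed-column feature would close.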


[sqlite] 64bit DLL vs 32bit

2016-05-15 Thread dandl
> > But I think if you compile code for the x64 processor chip and call it
> > from x86 or vice versa then either it doesn't work or you pay a high
> > price for thunking from one to the other. I think that's unavoidable
> > regardless of OS.
> 
> Right: doesn't work.  There's no performance penalty because there's no
> 32-64 bit thunking layer.
> 
>   https://blogs.msdn.microsoft.com/oldnewthing/20081020-00/?p=20523

An interesting post, but not I think from someone with a deep understanding
of the matter.

IMHO it would be perfectly possible to thunk either way, within the
limitations of a 4GB address space. The WOW64 layer already provides the
means for x86 apps to call the x64 Windows API. A thunk can do anything,
unless it's physically impossible or blocked by policy.

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org







[sqlite] Podcast with Dr Hipp: SQLite history, success and funding

2016-05-15 Thread Simon Slavin

On 15 May 2016, at 2:14pm, mikeegg1  wrote:

> As a side thought of what I've listened to so far: Is there a 
> page/reference/discussion about how the consortium was/is set up?
> I am integrating SQLite into my Mac OS X application and like SQLite. I am 
> curious how the consortium is organized.

You can read this page



but there's more information on the consortium, how it came about, and what 
it's like now, in that podcast than I've seen anywhere else.

Simon.


[sqlite] Podcast with Dr Hipp: SQLite history, success and funding

2016-05-15 Thread Stephan Beal
On Sun, May 15, 2016 at 2:29 PM, Tim Streater  wrote:

> What's all this about licences. AIUI, SQLite is explicitly in the public
> domain. Meaning the question of licence doesn't arise.
>

it does, actually, because PD is not recognized in all jurisdictions.

-- 
- stephan beal
http://wanderinghorse.net/home/stephan/
http://gplus.to/sgbeal
"Freedom is sloppy. But since tyranny's the only guaranteed byproduct of
those who insist on a perfect world, freedom will have to do." -- Bigby Wolf


[sqlite] Sqlite incompatibility with Postgres

2016-05-15 Thread dandl
> I suggest the reason LIMIT hasn't been standardized is that it's contrary to
> the fundamental idea that rows in a table have no meaningful order.  SQL
> doesn't honor relational theory with complete
> fidelity, but at least that horse is still in the barn.

Point 1: I think you'll find plenty of bits of SQL that are not relational
-- that's not the reason.
Point 2: LIMIT is perfectly relational, as long as it is applied with ORDER
BY. While the data set has no order, that in no way prevents performing a
query that does. Such as:

"Show me the customers that are within the first 10 when ordered by name."

You can't sort the relation, but you can certainly apply an order when
performing a query. How else would MIN() work?

> The problem with LIMIT is it's not based in the data.  Cutting off results at
> some arbitrary N tells you *nothing* about the data other than that N or more
> rows met the criteria.  Note that predicate logic has constructs for "for
> all" and "there exists", but not "are some"!
> 
> I have yet to see a query using LIMIT 1 posted on this list that cannot be
> expressed -- better, IMO -- with min().  Queries that limit the results to
> "top N" to support things like pagination inevitably include assumptions
> about transactionality (or lack thereof) that are either invalid or ill-
> considered.  Every one would be better served either by just fetching the
> needed rows as required (and letting pending rows pend), or by supplying the
> last "high" value as a minimum for the WHERE clause instead of an OFFSET.
> Were I a fan of conspiracies, I'd suspect the LIMIT-OFFSET constructs were
> invented by antilogicians to prevent learning and hobble performance.

Your criticism re LIMIT is mistaken. It is a perfectly reasonable way to
support pagination, and can actually be performed (rather laboriously) using
SQL window functions (which SQLite does not have). It can also be useful in
some algorithms where you need the top 2 or 3 or whatever.

Your criticism re OFFSET has some basis. It is usually better (as you
suggest) to provide a previous row value for pagination. It's just that once
you've done LIMIT, OFFSET is easy to implement and sometimes useful.

> By the way, I'm also a LIMIT club member, with limits.  I use it for
> convenience on the command line while exploring data.  It's great for that,
> in the absence of pager support.  Maybe keeping it a little "weird" will help
> remind new developers to use it as a convenience instead of a crutch.

I think most developers are just astonished at how much it differs from one
dialect to another. Astonished, but not impressed.

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org







[sqlite] 64bit DLL vs 32bit

2016-05-15 Thread dandl
> > Actually, it's everyone using a language other than C/C++, plus a
> > proportion of those too. I use C#, but if you want to call Sqlite from
> > Java, Python, etc or even some generic C/C++ app that supports
> > plug-ins, then at some point there is a DLL
> 
> How does that follow?  Any higher-than-C language has its own binding
> system, and SQLite is a module of some kind, where the C library is
> wrapped in the module that exposes its own API.  If the module statically
> links in libsqlite3.a -- as, arguably, it should -- then there's no
> version ambiguity, no DLL, and no chance of conflict.

Then I think you misunderstood.

You dropped in a 'module' without mentioning that this would have to be
written in C/C++. There is absolutely no way to call Sqlite statically from
any language other than C (or one of the rare languages that implement a
C-compatible ABI). So:

[Here HLL means Java/C#/Python/Perl/etc]

HLL -> Sqlite, DLL
HLL -> C-module dynamically linked -> Sqlite, DLL
HLL -> C-module statically linked -> Sqlite, no DLL

The reason this matters is that every 'module' injects its own world view.
If you want to call Sqlite from a HLL and make it look like other databases
then you use a 'module'. But if you want to call Sqlite from a HLL and see
exactly the same API that libpq exposes then you call the libpq DLL.

This is what I do in Andl. I need to control the communications channel in
ways that your 'modules' do not allow, so I use the libpq DLL.
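The same distinction can be seen from Python, as a sketch: the standard
sqlite3 module is the "C-module" route that injects its own world view (the
DB-API), while ctypes is the "call the DLL directly" route that sees the raw
C API. Library discovery is platform-dependent, hence the guard.

```python
import ctypes
import ctypes.util
import sqlite3

# Route 1: HLL -> C module.  The sqlite3 module wraps libsqlite3 and hides
# the C API entirely behind the Python DB-API.
print(sqlite3.sqlite_version)          # version of the wrapped library

# Route 2: HLL -> DLL.  ctypes loads the shared library and calls the raw
# C API directly; find_library may return None on some systems.
libname = ctypes.util.find_library("sqlite3")
if libname is not None:
    lib = ctypes.CDLL(libname)
    lib.sqlite3_libversion.restype = ctypes.c_char_p
    print(lib.sqlite3_libversion().decode())   # same library, raw API
```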

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org







[sqlite] 64bit DLL vs 32bit

2016-05-15 Thread Richard Damon
On 5/15/16 1:00 AM, dandl wrote:
>>> But I think if you compile code for the x64 processor chip and call it
>>> from x86 or vice versa then either it doesn't work or you pay a high
>>> price for thunking from one to the other. I think that's unavoidable
>>> regardless of OS.
>> Right: doesn't work.  There's no performance penalty because there's no
>> 32-64 bit thunking layer.
>>
>>  https://blogs.msdn.microsoft.com/oldnewthing/20081020-00/?p=20523
> An interesting post, but not I think from someone with a deep understanding
> of the matter.
>
> IMHO it would be perfectly possible to thunk either way, within the
> limitations of a 4GB address space. The WOW64 layer already provides the
> means for x86 apps to call the x64 Windows API. A thunk can do anything,
> unless it's physically impossible or blocked by policy.
>
> Regards
> David M Bennett FACS
>
> Andl - A New Database Language - andl.org
>
You can thunk an arbitrary 32 bit module to 64 bit code because the 
memory accessible by the 32 bit code is all accessible by the 64 bit 
code. The converse doesn't work. Either the 64 bit code needs to make 
sure it puts the data in the lower 4GB, or the thunk layer needs to know 
enough about the data to copy it.

This works for WOW64, as the 64 bit side is code designed to make it 
work with the thunking layer.

For arbitrary 64 bit code, the thunking layer won't know enough about 
what needs to be copied into accessible memory.

-- 
Richard Damon



[sqlite] Podcast with Dr Hipp: SQLite history, success and funding

2016-05-15 Thread Tim Streater
On 15 May 2016 at 11:53, dandl  wrote:

>>> 1. Why SQLite is popular.
>>
>> The answers to that question mentioned in the podcast may be good ones but
>> I think the main reason is that it's free.  Completely, unmistakably, free.
>
> Necessary but not sufficient.
>
> It's free, and the licence is as non-restrictive as it is possible to be.

What's all this about licences? AIUI, SQLite is explicitly in the public 
domain, meaning the question of licence doesn't arise.

--
Cheers  --  Tim


[sqlite] Any way to do inexpensive SQLite database/table versioning? (I.e. cheap snapshotting & jumping between)

2016-05-15 Thread mikeegg1
I was once told of an idea (decades ago) for versioning data within a table: 
one column holds a real/float value that is the version number. The data in 
the table can be committed as necessary, and if it needs to be rolled back, 
rows can be deleted from the table based on the real/float value. If 
necessary, data can be inserted between real/float values, sort of like 
"fixing" the events in time.

Mike

> On May 15, 2016, at 10:23, Mikael  wrote:
> 
> 2016-05-15 22:14 GMT+07:00 Simon Slavin :
> 
>> 
>> On 15 May 2016, at 4:02pm, Mikael  wrote:
>> 
>>> Simon, yes using only INSERT and DELETE in total would be fine with me
>> (and
>>> perhaps BEGIN and COMMIT if it's OK).
>>> 
>>> What are you thinking about?
>>> 
>>> Did you see a purely SQL-based solution that I don't?
>> 
>> BEGIN and COMMIT make no changes to stored data so you can ignore them for
>> these purposes.
>> 
>> Add a "timestamp" column to your table.  When you INSERT a row, have this
>> default to the current time:
>> 
>> , timestamp TEXT DEFAULT CURRENT_TIMESTAMP, ...
>> 
>> Add a "deleted" column to your table, defaulting to NULL:
>> 
>> , deleted TEXT DEFAULT NULL, ...
>> 
>> When you delete or overwrite a row, instead of actually using the DELETE
>> command, replace the NULL in this column with the time the row was deleted.
>> 
>> UPDATE myTable SET deleted=CURRENT_TIMESTAMP WHERE rowid=1234
>> 
>> You now have a way of tracking when each row was created and, if it has
>> been deleted, when it was deleted.  Creative use of a WHERE clause in your
>> SELECT commands will allow you to pick out only the rows which exist at a
>> certain time.
>> 
>> If you have defined a clever primary key rather than just using a rowid,
>> you will need to think through adjustments to your schema and programming
>> accordingly.
>> 
>> Simon.
>> 
> 
> 
> Aha.
> 
> Do you see any way that I could implement *one* layer of branches atop
> this?
> 
> Say 1000 or 10^30 of them.
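Simon's timestamp/soft-delete scheme above can be sketched in Python's
sqlite3 as follows. Fixed timestamps are used instead of CURRENT_TIMESTAMP so
the snapshots are reproducible; the table and values are illustrative.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE items(
    id      INTEGER PRIMARY KEY,
    value   TEXT,
    created TEXT DEFAULT CURRENT_TIMESTAMP,
    deleted TEXT DEFAULT NULL)""")
con.execute("INSERT INTO items(value, created) VALUES ('a', '2016-01-01 00:00:00')")
con.execute("INSERT INTO items(value, created) VALUES ('b', '2016-03-01 00:00:00')")

# Soft-delete 'a': record the deletion time instead of issuing DELETE.
con.execute("UPDATE items SET deleted='2016-02-01 00:00:00' WHERE value='a'")

def snapshot(con, t):
    # Rows that existed at time t: created on or before t, not yet deleted.
    return [r[0] for r in con.execute(
        "SELECT value FROM items WHERE created <= ? "
        "AND (deleted IS NULL OR deleted > ?)", (t, t))]

print(snapshot(con, '2016-01-15 00:00:00'))  # ['a']
print(snapshot(con, '2016-02-15 00:00:00'))  # []
print(snapshot(con, '2016-03-15 00:00:00'))  # ['b']
```

The WHERE clause is the "creative use" Simon mentions: each timestamp pair
defines the row's lifetime, and a snapshot is just an interval test.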
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



[sqlite] Time & between, midnight wrap around

2016-05-15 Thread Scott Robison
Off the top of my head:

SELECT CASE WHEN t1 < t2 THEN t3 BETWEEN t1 AND t2
            WHEN t1 > t2 THEN t3 >= t1 OR t3 <= t2
       END

You might need to tweak it to handle the case where t1 = t2, if needed. That
could mean one minute or 24 hours depending on your point of view.
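The expression can be checked from Python's sqlite3 (the function name and
parameter order are illustrative; t1/t2 are the window bounds, t3 the time
being tested):

```python
import sqlite3

con = sqlite3.connect(":memory:")

def in_window(t3, t1, t2):
    # Normal window when t1 < t2; wraps past midnight when t1 > t2.
    # Returns NULL (None) when t1 == t2, as noted above.
    return con.execute(
        "SELECT CASE WHEN time(:t1) < time(:t2) "
        "            THEN time(:t3) BETWEEN time(:t1) AND time(:t2) "
        "            WHEN time(:t1) > time(:t2) "
        "            THEN time(:t3) >= time(:t1) OR time(:t3) <= time(:t2) "
        "       END",
        {"t1": t1, "t2": t2, "t3": t3}).fetchone()[0]

print(in_window('23:00', '22:00', '23:30'))  # 1: ordinary window
print(in_window('23:00', '22:00', '01:00'))  # 1: wraps midnight
print(in_window('02:00', '22:00', '01:00'))  # 0: outside the wrapped window
```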
Hello,
BETWEEN doesn't give the result I would need when used with time before &
after midnight:

sqlite> SELECT time('23:00') BETWEEN time('22:00') AND time('23:30');
1

sqlite> SELECT time('23:00') BETWEEN time('22:00') AND time('01:00');
0

Any suggestion how to tweak the query to give "true" as the result even
with midnight wrap around?


[sqlite] Sqlite incompatibility with Postgres

2016-05-15 Thread Simon Slavin

On 15 May 2016, at 6:04am, Darren Duncan  wrote:

> You seem to be forgetting the fact that LIMIT/OFFSET is not its own clause, 
> rather it is an extension to the ORDER BY clause and only has meaning within 
> the context of the ORDER BY it is part of.

All true.  But it brings up a question.  Suppose the following:

first   second
-   --
MarkSpark
Emily   Spark
MarySoper
Brian   Soper

SELECT first,second FROM members ORDER BY second LIMIT 3

Without looking up either a standards document for SQL or the documentation for 
your favourite implementations of SQL, answer this question:

Does the documentation for your favourite implementation of SQL state that 
you'll get the same rows every time you execute the above "SELECT" ?

Simon.
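In SQLite, plain LIMIT 3 returns exactly three rows with the tie broken
arbitrarily; a count-based predicate instead returns whole groups
deterministically (one of the readings proposed earlier in the thread). A
sketch in Python's sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE members(first TEXT, second TEXT)")
con.executemany("INSERT INTO members VALUES (?, ?)",
                [("Mark", "Spark"), ("Emily", "Spark"),
                 ("Mary", "Soper"), ("Brian", "Soper")])

# Plain LIMIT: exactly 3 rows, but which 'Spark' row is included is
# unspecified -- it can change after an index rebuild or a new release.
cut = con.execute(
    "SELECT first, second FROM members ORDER BY second LIMIT 3").fetchall()

# 'Whole groups' reading: keep a row only if fewer than 3 rows sort strictly
# before its group, so a tied group is returned in full (4 rows here).
whole = con.execute("""
    SELECT first, second FROM members AS m
    WHERE (SELECT COUNT(*) FROM members WHERE second < m.second) < 3
    ORDER BY second""").fetchall()

print(len(cut), len(whole))  # 3 4
```

The second query trades the exact row count for a deterministic result set,
which is precisely the choice the question is probing.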


[sqlite] Podcast with Dr Hipp: SQLite history, success and funding

2016-05-15 Thread Simon Slavin

On 15 May 2016, at 6:35am, Roman Fleysher  
wrote:

> 1. Why SQLite is popular.

The answers to that question mentioned in the podcast may be good ones but I 
think the main reason is that it's free.  Completely, unmistakably, free.  You 
could make many changes to SQLite and people would continue to use it, but the 
thing that would decrease its usage fastest would be to charge for it.

Nobody seems to mention this as an answer to that question.

Simon.


[sqlite] Podcast with Dr Hipp: SQLite history, success and funding

2016-05-15 Thread Scott Robison
On May 15, 2016 8:06 AM, "Tony Papadimitriou"  wrote:
> (Many open source projects have bugs waiting for months or years for
> someone to be bothered to fix, often driving people away!)
>
> To sum it up, a big thanks to Richard and his team!

True. Last December I received a notice that a patch I submitted to a Trac
plugin had finally been accepted. Over seven years after I submitted it.


[sqlite] Podcast with Dr Hipp: SQLite history, success and funding

2016-05-15 Thread mikeegg1
I'm listening to the podcast now. Great episode, and I've subscribed to their 
podcast series.
As a side thought on what I've listened to so far: is there a 
page/reference/discussion about how the consortium was/is set up?
I am integrating SQLite into my Mac OS X application and like SQLite. I am 
curious how the consortium is organized.

Mike


[sqlite] Podcast with Dr Hipp: SQLite history, success and funding

2016-05-15 Thread Scott Robison
On May 15, 2016 6:30 AM, "Tim Streater"  wrote:
>
> What's all this about licences? AIUI, SQLite is explicitly in the public
> domain, meaning the question of licence doesn't arise.

The question of license arises when comparing two pieces of software. While
PD isn't a license per se, it is license-esque, is absolutely the least
restrictive license possible (or at least that I can imagine), and is
(should be) easy to understand. It makes it easy to evaluate as long as you
are in a jurisdiction that recognizes PD and aren't dealing with nervous
lawyers.

A very complex license could be written that was conceptually equivalent to
PD. If so, PD would be the superior license from a comprehensibility POV.


[sqlite] Podcast with Dr Hipp: SQLite history, success and funding

2016-05-15 Thread Roman Fleysher
Dear Richard,
Dear SQLiters,

Thank you, Simon, for sending the link. I would like to offer several comments 
on the podcast. 

1. Why SQLite is popular.

Instead of describing how I selected SQLite to solve our DB needs, I will 
recount the story of Sony and its introduction of the transistor radio, which 
I read about in The Innovator's Dilemma by Clayton Christensen. (A very good 
book and author; I recommend it.)

First transistor radios were poor in sound quality compared to those based on 
vacuum tubes. But they were lite (misspelled intentionally), and small. They 
were bought by teenagers, because they were cheap and portable. The big radio 
manufacturers did not even consider transistor radios as competitors because 
traditional competition was based on sound quality, not portability. Over the 
years, transistor technology improved and all the vacuum-tube radio 
manufacturers disappeared.

Richard said: "We do not compete against Oracle, we compete against fopen()." 
This is true, just like transistors. But SQLite displaced many big DBs and now 
Oracle etc have smaller market share. If I apply ideas of Innovator's Dilemma, 
their market share will continue to shrink. (I am not an MBA, I am a physicist, 
could be wrong but looks reasonable.)

2. Job to do

This is related to point 1, and to ideas I read in Clayton Christensen's books. 

Many SQL databases are very similar in what they can do, performance etc. Thus 
SQLite wins, just like Sony's first transistors, because it does NOT compete 
with them. It can not handle huge write concurrency or optimize for similar 
requests over history. Instead, it is easy to install and use. Its column 
type affinity makes SQLite suitable for both relational and 
entity-attribute-value models. 

It turns out that many "customers" simply do not need the functionality and 
optimization offered by big DBs. Instead, like teenagers, they need 
portability, ease of use and set up. This solves the job. Big DBs are overkill 
for such "small" jobs, requiring a lot of learning and expense. But there are a 
LOT of these small jobs and SQLite solves them admirably.

3. Code rewrite, robustness, licensing

Code rewrite or static linking makes the final product more robust. Robustness 
simplifies support and debugging. Robustness attracts users. We all want OUR 
thing to work and if our thing depends on SQLite, we want SQLite to be robust. 
And thus, the SQLite licensing.

4. Fossil and other in-house software

Writing your own code is driven by the lack of needed features in available 
products. In the beginning, Ford had to build its own metallurgy plant to 
ensure the quality of its metal. This, like point 3 above, is integrating 
what is not good enough on its own to make something good together. Over the 
years, the metallurgy industry matured and Ford closed this division. 

There are many other aspects of the podcast that I would like to comment on. 
Even when Richard tells the story and many elements look accidental, they all 
fit into the timeline of unfolding disruptive innovation. 

SQLite was and is a disruptive innovation. SQLite is not a toy. 

Thank you for making it.


Roman



From: sqlite-users-bounces at mailinglists.sqlite.org [sqlite-users-bounces at 
mailinglists.sqlite.org] on behalf of Simon Slavin [slav...@bigfraud.org]
Sent: Saturday, May 14, 2016 4:17 PM
To: SQLite mailing list
Subject: [sqlite] Podcast with Dr Hipp: SQLite history, success and funding

Those interested in SQLite might like to listen to



Play on the page or download as an MP3.

Unusual information on Dr Hipp's early career, SQLite history, HWACI, and how 
come SQLite is free but the developers still manage to afford food and 
somewhere to sleep.

Question to ponder before you listen: Many of you know about tiny devices which 
incorporate SQLite but what do you think the biggest one is ?

Simon.