Re: [sqlite] New word to replace "serverless"

2020-01-28 Thread Darren Duncan
The concepts I like the best so far are "in-process" or "integrated" or 
something library-themed. -- Darren Duncan


On 2020-01-27 2:18 p.m., Richard Hipp wrote:

For many years I have described SQLite as being "serverless", as a way
to distinguish it from the more traditional client/server design of
RDBMSes.  "Serverless" seemed like the natural term to use, as it
seems to mean "without a server".

But more recently, "serverless" has become a popular buzz-word that
means "managed by my hosting provider rather than by me."  Many
readers have internalized this new marketing-driven meaning for
"serverless" and are hence confused when they see my claim that
"SQLite is serverless".

How can I fix this?  What alternative word can I use in place of
"serverless" to mean "without a server"?

Note that "in-process" and "embedded" are not adequate substitutes for
"serverless".  An RDBMS might be in-process or embedded but still be
running a server in a separate thread. In fact, that is how most
embedded RDBMSes other than SQLite work, if I am not much mistaken.

When I say "serverless" I mean that the application invokes a
function, that function performs some task on behalf of the
application, then the function returns, *and that is all*.  No threads
are left over, running in the background to do housekeeping.  The
function does not send messages to some other thread or process.  The
function does not have an event loop.  The function does not have its
own stack.  The function (with its subfunctions) does all the work
itself, using the caller's stack, then returns control to the caller.

So what do I call this, if I can no longer use the word "serverless"
without confusing people?

"no-server"?
"sans-server"?
"stackless"?
"non-client/server"?



Re: [sqlite] No such column

2019-12-24 Thread Darren Duncan

On 2019-12-22 10:48 p.m., Keith Medcalf wrote:

On Sunday, 22 December, 2019 23:20, Aydin Ozgur Yagmur  
wrote:

I have experienced a weird problem. I have been using sqlite database in
linux by mounting.


Can you give some clues what "using sqlite database in linux by mounting" means?


My first thought on reading that was that "mounting" meant using the ATTACH 
command, since in the analogy of the SQL environment as a filesystem, using 
ATTACH is like mounting a volume within the filesystem in order to access the 
volume's contents, which are tables. -- Darren Duncan
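
For illustration, a minimal sketch of that ATTACH-as-mounting analogy (the file and
table names here are made up):

  ATTACH DATABASE 'archive.db' AS archive;   -- open a second database file under a schema name,
                                             -- much like mounting a volume
  SELECT count(*) FROM archive.orders;       -- its tables are addressed via that schema name
  DETACH DATABASE archive;                   -- and detaching is like unmounting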



Re: [sqlite] How to determine the column type?

2019-12-14 Thread Darren Duncan

František,

Having done this myself, I will tell you that:

The only effective design for your "generic software that should work with any 
database model" is to treat every column as having the universal type, the union of 
all other types; that type is the set of all representable values.


With respect to SQLite, every column is the union of: Null, every Integer, every 
Float, every Text, every Blob.
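
A small illustration of that union, using SQLite's typeof() function (the table and 
values here are made up):

  CREATE TABLE t(a);                          -- no declared type: the column accepts any value
  INSERT INTO t VALUES (1), (2.5), ('x'), (x'00'), (NULL);
  SELECT a, typeof(a) FROM t;                 -- integer, real, text, blob, null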


With respect to a generic application programming language, every column is of 
the most generic type, such as "Object" or "Any" for examples.


Now, if you want to be more precise, you can say that the type of every column 
is the union of all values currently in it.  This means that the type of an 
empty column is the empty type consisting of zero values, which is a subset of 
all other types just as the universal type is the superset of all other types.


Generally speaking, you want to support union types.

Do you have any questions to help you understand this?

-- Darren Duncan

On 2019-12-13 2:49 p.m., František Kučera wrote:

I know that SQLite uses dynamic types, so it is not easy… But what is the best 
way to determine the column type of a result set?

The sqlite3_column_decltype() works only if I select a column directly, but not 
when I do some other operation (calling a function, incrementing, etc.).

The sqlite3_column_type() works while iterating over particular rows. I can 
fetch the first row and get the type there (expecting that all values in that column 
will have the same type), but the problem is a) if the first value is NULL or b) if 
the result set is empty.

If I have e.g. "SELECT a + 1 AS x FROM t", is there a way to say that the x will be 
numeric? Or if I have "SELECT 1+1 AS x"?

I am writing a generic software that should work with any database model 
(tables are defined by the user) and I need to know the types, so I can 
properly present the results. Currently I use sqlite3_column_decltype() and 
will add options so the user could explicitly specify types of particular 
columns, but I am looking for a better way…




Re: [sqlite] Generated columns in primary keys, indexes, and constraints

2019-10-30 Thread Darren Duncan

On 2019-10-30 12:52 p.m., Keith Medcalf wrote:


On Wednesday, 30 October, 2019 13:23, Darren Duncan  
wrote:


On 2019-10-30 12:02 p.m., Simon Slavin wrote:

On 30 Oct 2019, at 6:56pm, Darren Duncan  wrote:



"Generated columns may not be used as part of the PRIMARY KEY. (Future
versions of SQLite might relax this constraint for STORED columns.)"

Replace with this:

"VIRTUAL generated columns may not be used as part of the PRIMARY
KEY."



What happens if the generated column is based on a calculation
including rowid ?  At the time the values are calculated, rowid might not
yet be known.



I suggest one option is adding a restriction that rowid may not be used
directly or indirectly in a STORED generated column and only in a VIRTUAL one.


The rowid value is always known so this is not an issue.


One way or another I believe this is surmountable and stored generated
columns in a primary key is worth it.


In a rowid table, the primary key is the implicit or explicit rowid.  "primary key(...)" in such tables is merely an 
alternate spelling of unique(...), and you can specify an index on a generated column computed from the (explicit) rowid.  But 
since the rowid is already the primary key you cannot specify another one, and if you do, your spelling "primary 
key" is automatically translated into the spelling "unique".  However, you must actually spell "primary 
key" as "unique" to generate a unique index on a generated column in a rowid table, even if you have not explicitly 
declared the rowid column.

The restriction is that you cannot use a generated column as the primary key of 
a WITHOUT ROWID table.  For those tables there is no rowid, so something else 
has to be the actual primary key.  However, in order to use a generated column 
as a primary key in a WITHOUT ROWID table, the value would have to be stored.  
You can still create other indexes using the generated columns (whether virtual 
or stored), but you still need a stored primary key.  The only restriction that 
could be lifted is the one on using a stored generated column as the primary 
key of a WITHOUT ROWID table.


Okay, I think we're on the same page, so to be clear the scenario I want to have 
supported is that a STORED generated column can be used in the PRIMARY KEY of a 
WITHOUT ROWID table.


-- Darren Duncan


Re: [sqlite] Generated columns in primary keys, indexes, and constraints

2019-10-30 Thread Darren Duncan

On 2019-10-30 12:02 p.m., Simon Slavin wrote:

On 30 Oct 2019, at 6:56pm, Darren Duncan  wrote:


"Generated columns may not be used as part of the PRIMARY KEY. (Future versions of 
SQLite might relax this constraint for STORED columns.)"

Replace with this:

"VIRTUAL generated columns may not be used as part of the PRIMARY KEY."


What happens if the generated column is based on a calculation including rowid 
?  At the time the values are calculated, rowid might not yet be known.


I suggest one option is adding a restriction that rowid may not be used directly 
or indirectly in a STORED generated column and only in a VIRTUAL one.


Alternately, a longer-term solution would be to generate the rowid as an independent 
prior step, like explicitly calling a sequence generator in one step and then 
using that as a then-constant input when composing the row.


While I can think of numerous uses of a stored generated column based on 
user-supplied values, I can't think of any use case for a generated column based 
on a rowid.


One way or another I believe this is surmountable and stored generated columns 
in a primary key is worth it.


-- Darren Duncan


Re: [sqlite] Generated columns in primary keys, indexes, and constraints

2019-10-30 Thread Darren Duncan

On 2019-10-30 3:12 a.m., Richard Hipp wrote:

On 10/30/19, Darren Duncan  wrote:


Ideally a PRIMARY KEY would have no restrictions that a UNIQUE constraint lacks;
they are conceptually the same thing, a subset of the columns of the row that
uniquely identifies the row in the table, and designating one as PRIMARY is
completely arbitrary in that sense.



For rowid-tables in SQLite, the PRIMARY KEY is really just a UNIQUE
constraint.  But that is kind of a bug that is retained for historical
compatibility - not something to be proud of.  Let's ignore that case
for the moment.

In a WITHOUT ROWID table, the PRIMARY KEY really is the key used by
the B-Tree algorithm to locate records on disk.  Hence, the PRIMARY
KEY really does need to be stored on disk.  Theoretically, one could
have the B-Tree algorithm itself compute the values of keys
on-the-fly.  But that is not possible in SQLite without significantly
rearchitecting the system.  Expressions are evaluated using multiple
opcodes in the bytecode engine, but B-Tree search is a single opcode.
It is unclear how the B-Tree engine could be convinced to run multiple
byte-code operations in response to each key comparison.

All that said, it wouldn't really be a problem to use a STORED generated
column as a PRIMARY KEY, since the value of the STORED column is
sitting there on disk and is hence readily available to the b-tree.  In
fact, that might just work if I merely remove the restriction.  But
using a VIRTUAL generated column as a PRIMARY KEY would be an issue.


Thank you.

If I wasn't clear, it was specifically the STORED variant I was advocating 
supporting with PRIMARY KEY; that provides all the benefits I was looking for, and 
I felt it should in theory be as easy to implement as a PRIMARY KEY on a normal 
column because the value would be calculated before insertion.


So I request that the draft document be amended as follows.

Take this line:

"Generated columns may not be used as part of the PRIMARY KEY. (Future versions 
of SQLite might relax this constraint for STORED columns.)"


Replace with this:

"VIRTUAL generated columns may not be used as part of the PRIMARY KEY."

... with optional comment about possibly being relaxed in the future.

-- Darren Duncan


Re: [sqlite] https://www.sqlite.org/draft/gencol.html Typo

2019-10-29 Thread Darren Duncan

On 2019-10-29 6:20 a.m., Simon Slavin wrote:

• Every table must have at least one non-generated column.


I greatly admire this restriction.


From a relational purist perspective that restriction is counter-productive.

But then, disallowing a table or a key/unique constraint from having exactly 
zero columns is likewise counter-productive and SQL already does that.


Allowing zero-column primary key constraints is the most elegant way to restrict 
a table to having not more than 1 row, useful for storing singleton data like 
some application configuration settings for example.
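
As a sketch of the workaround current SQL forces instead (the column names here are 
made up): a fixed single-valued key gives the same at-most-one-row guarantee.

  CREATE TABLE app_config (
    id        INTEGER PRIMARY KEY CHECK (id = 1),  -- only one legal key value, so at most one row
    theme     TEXT,
    font_size INTEGER
  );
  INSERT INTO app_config (id, theme, font_size) VALUES (1, 'dark', 12);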


-- Darren Duncan


[sqlite] Generated columns in primary keys, indexes, and constraints

2019-10-29 Thread Darren Duncan
Per https://www.sqlite.org/draft/gencol.html thank you very much for adding this 
generated columns feature!


I consider this to be a very powerful foundation for a variety of useful 
features, in particular because it empowers generalized solutions to several 
classes of problem.


In particular, this provides a generalized foundation for arbitrary unique 
constraints and indexes.


Some SQL DBMSs have complicated syntax for declaring indexes or keys, say for 
example case-insensitive uniqueness or indexing only a prefix of a character 
string or various other things.


When generalized generated columns are supported, as SQLite is about to do, the 
syntax for a key or constraint declaration can simply reference a column as a 
whole and use the entire pristine column value.  By default that means keying or 
indexing in a case-sensitive manner, but if there is a generated column holding 
a lowercased version of a regular text column, one can put the unique constraint 
on that column instead and get case-insensitive uniqueness without 
complicating the key/index declaration syntax.
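
For instance, a sketch of that lowercased-column approach (assuming a SQLite build 
with the draft generated-columns feature; the table and column names are made up):

  CREATE TABLE users (
    id       INTEGER PRIMARY KEY,
    email    TEXT NOT NULL,
    email_lc TEXT AS (lower(email))                      -- VIRTUAL generated column
  );
  CREATE UNIQUE INDEX users_email_lc ON users(email_lc); -- a plain UNIQUE declaration,
                                                         -- case-insensitive in effect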


There is a lot of power here from such a basic foundation.

Related to this, I strongly encourage the SQLite developers to relax the 
constraint on generated columns being used in a PRIMARY KEY sooner rather than 
later.


Ideally a PRIMARY KEY would have no restrictions that a UNIQUE constraint lacks; 
they are conceptually the same thing, a subset of the columns of the row that 
uniquely identifies the row in the table, and designating one as PRIMARY is 
completely arbitrary in that sense.


The benefits I ascribed to generated columns as a foundation would be greatly 
weakened if a PRIMARY KEY can't use them.


Thank you.

-- Darren Duncan


Re: [sqlite] Roadmap?

2019-10-27 Thread Darren Duncan

On 2019-10-26 4:38 p.m., Thomas Kurz wrote:

The features you name don't take away from the "liteness", they are all quite

small and useful.

Yes of course they are useful, I wouldn't deny that. But they are prioritized 
over SQL-basics, that's what I'm confused about.


What do you mean by "SQL-basics"?

If you mean the list from your next post, there's very little basic about those 
and many are quite complicated.


I agree with adding more ALTER TABLE options but that's about it.

Omitting RIGHT JOIN is good, that's a misfeature and LEFT JOIN does everything 
useful it does.


Omitting SHOW TABLES or similar MySQL-only things is good, those are 
misfeatures, and querying INFORMATION_SCHEMA does everything better and in a 
more standard and composable way.
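
(For reference, SQLite's own equivalent of that introspection is an ordinary, 
composable query against its schema table, for example:

  SELECT name, type FROM sqlite_master WHERE type = 'table' ORDER BY name;
)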


Anything to do with temporal and spatial is actually quite complicated, both 
data types and constraints, and omitting those is quite reasonable.


-- Darren Duncan


Re: [sqlite] Why not allow subsecond file modification time in sqlar format?

2019-10-23 Thread Darren Duncan

On 2019-10-23 10:36 a.m., Richard Hipp wrote:

On 10/23/19, Peng Yu  wrote:

Hi,

https://www.sqlite.org/sqlar.html

Only integer is allowed for last modification time. Why not allow
subsecond file modification time in sqlar format, given subsecond time
is allowed in many OSes? Thanks.

   mtime INT,  -- last modification time


The datatypes for table columns in SQLite are suggestions, not
requirements.  The implementation could be enhanced to store the mtime
as a floating-point value, and everything would continue to work the
same.


Why float?  Shouldn't a standard 64-bit integer be able to represent all the 
subsecond time precision people would want? -- Darren Duncan



Re: [sqlite] Roadmap?

2019-10-20 Thread Darren Duncan
Rowan, you're talking about Unicode codepoints; however, Unicode graphemes, what 
typical humans consider to be characters, are sequences of 1..N codepoints, for 
example a letter plus an accent that get composed together, and that is what 
requires those large tables.  This is related to Unicode Normal Forms, e.g. NFD vs 
NFC, and it's not about codepoint encodings like UTF-8 vs UTF-16 etc. -- Darren 
Duncan


On 2019-10-20 8:03 p.m., Rowan Worth wrote:

On Sun, 20 Oct 2019 at 17:04, Simon Slavin  wrote:


Another common request is full support for Unicode (searching, sorting,
length()).  But even just the tables required to identify character
boundaries are huge.


Nitpick: there are no tables required to identify character boundaries. For
utf-8 you know if there's another byte to come which is part of the current
codepoint based on whether the current byte's high bit is set, and
furthermore you know how many bytes to expect based on the initial byte.

I'm less familiar with utf-16 which SQLite has some support for, but a
quick read suggests there are exactly two reserved bit patterns you need to
care about to identify surrogate pairs and thus codepoint boundaries.

Tables relating to collation order, character case, and similar codepoint
data can of course get huge, so your point stands.
-Rowan



Re: [sqlite] Roadmap?

2019-10-20 Thread Darren Duncan

On 2019-10-20 12:53 a.m., Thomas Kurz wrote:

I'd kindly ask whether there is some sort of roadmap for SQLite development?

Someone recently pointed out how much he loves the "lite" and well-thought features. I cannot see 
that: I observe that many "playground" gadgets keep being implemented (like virtual columns, 
virtual tables, FTS3/4/5, ...), where one might wonder about their relationship to "Liteness", 
whereas other features, essential basics of the SQL standards, are still missing and there is no indication 
they are to be added.

Without wanting to offend someone, I cannot see the logic in development, so: 
Is there some kind of roadmap?


The features you name don't take away from the "liteness", they are all quite 
small and useful.


The main thing that provides the liteness is that SQLite is a single-user DBMS 
implemented as an embedded library, in contrast to being a server with multiple 
concurrent users.


What are the main missing "essential basics of the SQL standards" that you think 
can be added without compromising the "liteness"?


-- Darren Duncan


Re: [sqlite] Why "UTF-8" and not 'UTF-8' ?

2019-09-23 Thread Darren Duncan

On 2019-09-23 1:22 p.m., Simon Slavin wrote:

Can I get an answer to my 'Subject' header ?

Is the parameter to the PRAGMA a string parameter ?  If so, can the 
documentation be changed to use apostrophes as delimiters ?  Using double 
quotes in the documentation seems really weird.

If, on the other hand, the PRAGMA is actually looking for double-quotes after 
the equals sign, is this for a reason ?


I agree.  All documentation should show the more standard/correct single quotes 
for strings when that works, and only show double quotes when those are 
required.  This would help avoid a lot of confusion. -- Darren Duncan



Re: [sqlite] BLOB and TEXT comparisons

2019-07-13 Thread Darren Duncan
What's going on is that a Blob and a Text are logically different values and so 
can never possibly equal each other.


Also they aren't represented by the same bytes either, because every value 
includes its type and the comparison is comparing the bytes indicating the type 
as well, which differ.
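
A small illustration of that rule, using typeof() (the literal values are chosen 
arbitrarily):

  SELECT typeof('abc'), typeof(CAST('abc' AS BLOB));  -- text, blob
  SELECT 'abc' = CAST('abc' AS BLOB);                 -- 0: values of different types never compare equal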


-- Darren Duncan

On 2019-07-12 7:19 a.m., Charles Leifer wrote:

I ran into a somewhat surprising result and wanted to just get a little
clarification.

I'll use the following statement as an example:

SELECT SUBSTR(?, 1, 3) == ?

And the parameters will be:

* "abcde"
* "abc"

If I bind both parameters using the same type, the comparison returns True:

* sqlite3_bind_text
* sqlite3_bind_blob

However, if I mix the types, e.g. sqlite3_bind_text("abcde") and
sqlite3_bind_blob("abc") then the comparison returns False.

From a byte-to-byte perspective, this comparison should always return True.

What's going on?

Thanks!




Re: [sqlite] Upsert inside trigger?

2019-06-16 Thread Darren Duncan

On 2019-06-16 6:11 a.m., Adrian Ho wrote:

 From https://sqlite.org/lang_createtrigger.html :

*Syntax Restrictions On UPDATE, DELETE, and INSERT Statements Within
Triggers*

[...]

   * Common table expression are not supported for statements inside of
 triggers.


I wonder what the reason for this is, since that page doesn't seem to say, and why 
there are any restrictions of this nature in triggers at all. -- Darren Duncan



Re: [sqlite] Should SQLite distinguish between +0.0 and -0.0 on output?

2019-06-15 Thread Darren Duncan

On 2019-06-12 6:35 a.m., Richard Hipp wrote:

IEEE754 floating point numbers have separate representations for +0.0
and -0.0.  As currently implemented, SQLite always displays both
quantities as just "0.0".

Question:  Should SQLite be enhanced to show -0.0 as "-0.0"?  Or,
would that create unnecessary confusion?


I would say, either you support IEEE754 floats fully to the standard, or you 
don't pretend to support them and just say you have floats without mentioning 
IEEE754.  Also I say that distinguishing -0.0 and 0.0 is good for those that 
need to know and harmless to those that don't. -- Darren Duncan



Re: [sqlite] Any ETA for 3.27?

2019-01-26 Thread Darren Duncan
If you look at the release history, major SQLite releases come out every 2-3 
months on average, and over the last year they weren't more than 3 months apart. 
 So by that pattern, 3.27 will probably come out not more than a month from 
now. -- Darren Duncan


On 2019-01-24 3:55 PM, Jens Alfke wrote:

Is there a rough idea of when 3.27 might be released?

We’d like to incorporate the fix for "Incorrect results for OR query where both 
OR terms are different indexed expressions”* into the next release of Couchbase 
Lite, but we’re uncomfortable with using an interim unblessed build of SQLite. If 
there were a SQLite release within the next month or so, that would be great for us.

—Jens

* https://www.sqlite.org/src/info/d96eba87698a428c1ddd0790ea04


Re: [sqlite] Question about floating point

2018-12-17 Thread Darren Duncan

On 2018-12-17 9:16 AM, James K. Lowden wrote:

On Sat, 15 Dec 2018 01:24:18 -0800
Darren Duncan wrote:

If yours is a financial application then you should be using exact
numeric types only


Color me skeptical.  That very much depends on the application.  IEEE
double-precision floating point is accurate to within 15 decimal
digits.  The example given,


211496.26


gives, safely, a margin of 6 orders of magnitude.  If the application is
taking sums of 100's of thousands of dollars, it will stay accurate to
within a penny using floating point until there are millions of
entries:

10^15 ÷ 10^8 = 10^7

I doubt the financial application exists that sums millions of entries
AND cares about the last penny.


These days, getting to millions of financial transaction entries in a short 
time is common, and we still care about the last penny; I know that from experience. 
-- Darren Duncan



Re: [sqlite] Question about floating point

2018-12-16 Thread Darren Duncan

On 2018-12-16 6:54 AM, Wout Mertens wrote:

I'd also like to point out a problem with integer money: inflation. For USD
it's been OK so far, but imagine having to handle the Zimbabwean Dollar,
which ended up having 100 trillion dollar notes. Good way to overflow your
integers.

With floating point, that's not a problem.


What you're talking about is NOT a problem with integer money.  Integers have 
unlimited precision, they are as large as you need them to be, there is no such 
thing as overflowing them.  When you're using a computer to represent the 
integers, you just use a data type capable of representing the largest integers 
you could ever possibly need to use for storage or intermediate calculations, if 
necessary a variable size representation such as BigInt or binary-coded-decimal. 
-- Darren Duncan



Re: [sqlite] Question about floating point

2018-12-16 Thread Darren Duncan

On 2018-12-15 2:15 AM, Frank Millman wrote:

On 2018-12-14 11:24 AM, Darren Duncan wrote:

If yours is a financial application then you should be using exact numeric types

only, such as integers that represent multiples of whatever quantum you are
using, such as cents; fractional numbers are a display or user input format
only, and in those cases they are character strings.




Thanks, Darren. In principle I agree with you, but I am experimenting with a 
different approach.
My application supports SQL Server and PostgreSQL as well as sqlite3, and those 
databases do have exact numeric types for monetary use, and I am trying to 
stick to one code base as much as possible.
The python sqlite3 module allows you to convert sqlite3 data to a python 
object, so my approach is to store decimal data as text in sqlite3, and convert 
it to a python Decimal object when reading it in. I find that this works ok. I 
do have a bit of trouble when using sqlite3 to ‘sum’ a column, as it then 
switches to floating point and can lose precision, but provided I convert the 
result back to a Decimal object with appropriate rounding it also works.

Having said that, I am still testing, and I may decide that I have to bite the 
bullet and store everything as integers, in which case I will use the same 
approach for the other databases as well.

Simon Slavin says ‘Currency amounts should be stored as integers’. Does this 
apply to sqlite3 specifically, or is that your recommendation for all databases?


I think that can reasonably apply to all DBMSs, even ones supporting exact 
fractional numeric types.  Keeping the money as integers everywhere you store it 
or do math on it, and character strings only for display to users, is a 
reasonably consistent strategy that also scales more easily to handling 
multiple currencies.  It's not just about the DBMSs.  Some programming languages 
don't support exact fractional numbers either; JavaScript on the web client side 
is one example. -- Darren Duncan



Re: [sqlite] Question about floating point

2018-12-15 Thread Darren Duncan
If yours is a financial application then you should be using exact numeric types 
only, such as integers that represent multiples of whatever quantum you are 
using, such as cents; fractional numbers are a display or user input format 
only, and in those cases they are character strings. -- Darren Duncan


On 2018-12-14 10:49 PM, Frank Millman wrote:

Hi all

I know that floating point is not precise and not suitable for financial uses. 
Even so, I am curious about the following -

SQLite version 3.26.0 2018-12-01 12:34:55
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> .open /sqlite_db/ccc
sqlite> select sum(amount_cust) from ar_trans where cust_row_id = 4 and 
tran_date between '2015-05-01' and '2015-05-31';
211496.26

Python 3.7.0 (v3.7.0:1bf9cc5093, Jun 27 2018, 04:59:51) [MSC v.1914 64 bit 
(AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.

import sqlite3
sqlite3.sqlite_version

'3.26.0'

conn = sqlite3.connect('/sqlite_db/ccc')
cur = conn.cursor()
cur.execute("select sum(amount_cust) from ar_trans where cust_row_id = 4 and 
tran_date between '2015-05-01' and '2015-05-31'")



cur.fetchone()

(211496.252,)

With the same version of sqlite3 and the same select statement, why does python 
return a different result from sqlite3.exe?

Thanks

Frank Millman



Re: [sqlite] Parallel reading can be slow on APFS

2018-10-30 Thread Darren Duncan
I find it strange that there's a big global lock problem with APFS considering 
one of the biggest design and selling points was that APFS did NOT use global 
locks like the old HFS+ did. -- Darren Duncan


On 2018-10-29 7:13 PM, Simon Slavin wrote:

This post is about a problem with Apple's new APFS file system.  The problem 
will affect you only if you have multiple reads/writes happening at the same 
time.  The problem involves merely slowing of performance, not corruption of 
databases or incorrect answers being returned by SQLite.

Gregory Szorc, expert on Mercurial (revision control and tracking software) 
investigated a report that Mercurial was performing slowly on new Macs, and 
deduced that the cause was that APFS uses a kernel-level global lock, not only 
on writes (expected) but also on reads when no writing was happening 
(unexpected).  Technical details can be found here:

<https://gregoryszorc.com/blog/2018/10/29/global-kernel-locks-in-apfs/>

The more parallel operations are trying to access the storage device, the 
slower it gets.

This post is intended to urge users to avoid premature optimization by parallel 
processing, and to consider that slow performance may not be SQLite's fault.  This is not 
a "Macs suck" post.  Please consider the following text from the article (which 
comes with relevant links):

"While this post is about APFS, this issue of global kernel locks during common I/O 
operations is not unique to APFS. I already referenced similar issues in AUFS. And I've 
encountered similar behaviors with Btrfs (although I can't recall exactly which 
operations). And NTFS has its own bag of problems."




Re: [sqlite] SQLite mailing list [was: SQLite Windows GUI alternative to Excel?]

2018-10-11 Thread Darren Duncan

On 2018-10-10 1:36 PM, Eric wrote:

Too much overhead, how often must I clone ...


This makes me think that it would be useful, if it doesn't already have one, for 
Fossil to have something analogous to a database replication feature.  A bit like a 
mailing list, except that the sender and receiver are both instances of Fossil. -- 
Darren Duncan



Re: [sqlite] SQLite mailing list [was: SQLite Windows GUI alternative to Excel?]

2018-10-11 Thread Darren Duncan

On 2018-10-10 12:26 PM, Keith Medcalf wrote:

And if you think that I am going to create YET ANOTHER LOGIN and YET ANOTHER 
PASSWORD just to use some crappy forum software, you have another think coming.


What do you think password managers are for?  Proper security means having a 
different password everywhere that uses passwords, and one presumably already 
has dozens or more of those, so if they use a password manager, the SQLite forum 
is just another one it automatically handles. -- Darren Duncan



Re: [sqlite] SQLite mailing list [was: SQLite Windows GUI alternative to Excel?]

2018-10-11 Thread Darren Duncan

On 2018-10-10 10:51 AM, Chris Green wrote:

Warren Young  wrote:

Fossil forum email alerts include the full content of the message.


That's great!  Especially if the alert email subject includes the forum thread 
subject.


That said, I consider it critical that these alert emails also include my own 
posts to the forum and not just others'.  If they don't send for EVERY post, the 
emails aren't suitable for reading / backing up a thread in one place.



And can you then simply 'reply' from your E-Mail client?  If not then
it doesn't really help much.


Actually it helps a lot.  I think in practice most people using this forum would 
be reading a lot more than they post.  So you can do your majority action of 
reading in your email client with the forum alerts.  In the rare situation where 
you want to reply, then you just switch over to the web forum.  (And assuming 
you then get a copy back in your email, so it's like you wrote it with email.)


-- Darren Duncan


Re: [sqlite] SQLite mailing list [was: SQLite Windows GUI alternative to Excel?]

2018-10-09 Thread Darren Duncan

On 2018-10-09 12:56 PM, Eric wrote:

On Tue, 9 Oct 2018 11:19:13 -0600, Warren Young  wrote:

Gmane is part of the problem that lead to the creation of the Fossil
forum feature.  Viz., it enables spammers, by design:

 http://gmane.org/about/


I don't see that page saying what you claim it says.


The only thing I saw along those lines was that gmane made it easy to harvest 
the email addresses of all the forum posters, which could lead to external spam, 
but I didn't see anything about it making things easier to send spam to the 
lists themselves, unless that's implied by something else. -- Darren Duncan



Re: [sqlite] The "natural" order of the query results.

2018-09-16 Thread Darren Duncan

On 2018-09-16 1:29 AM, John Found wrote:

Is there some relation between the indexes used in the query, the GROUP BY fields 
used, and the order of the result rows, when no "ORDER BY" clause is used?

I am asking because I noticed that on some queries, when I am using "ORDER BY", the 
query always uses a temporary b-tree for ordering, but by including the needed fields 
in the "GROUP BY" clause and removing the ORDER BY clause, the query returns the rows 
in the proper order without a temp b-tree.

So, is it safe to use this implicit ordering, or can this behavior change in future 
versions of SQLite?


The answer to your question is: no, never.  Using ORDER BY is the *only* safe 
way to guarantee results come in a particular order.  When you don't use ORDER 
BY you are explicitly saying that the order of result rows isn't significant and 
so the DBMS can use whatever order it feels like at any time without any prior 
notice of changes. -- Darren Duncan
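
For example (the table and column names here are made up), only the second form has a 
guaranteed order:

  SELECT name, count(*) FROM events GROUP BY name;                 -- order unspecified
  SELECT name, count(*) FROM events GROUP BY name ORDER BY name;   -- order guaranteed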



Re: [sqlite] Unsigned

2018-08-25 Thread Darren Duncan

On 2018-08-24 11:58 PM, Thomas Kurz wrote:

What is the value of a built-in UNSIGNED type when we already have INTEGER?  I

can't think of any. -- Darren Duncan

Signed integers only allow half the range of values of unsigned ones. You 
cannot store a pointer value in them. (You can by casting to signed, but then 
sorting is done wrong.)


I fully agree with what others have said, which is that this use case is better 
handled by supporting an unlimited-precision integer type.  All the "unsigned" 
request argues for is a single extra bit of precision for positive integers, 
which is nit-picking; in practice, if you can't fit all the integers you 
care about in a typical signed machine integer, then the unlimited type is the 
only reasonable step up. -- Darren Duncan



Re: [sqlite] Unsigned

2018-08-24 Thread Darren Duncan

On 2018-08-20 11:46 PM, D Burgess wrote:

Is there a historical reason why sqlite does not have a UNSIGNED type
to go with INTEGER?


What is the value of a built-in UNSIGNED type when we already have INTEGER?  I 
can't think of any. -- Darren Duncan



Re: [sqlite] Many ML emails going to GMail's SPAM

2017-11-21 Thread Darren Duncan

On 2017-11-21 6:30 AM, Richard Hipp wrote:

On 11/21/17, Paul Sanderson <sandersonforens...@gmail.com> wrote:

Coincidence!  I have just been in my gmail folder marking a load of SQLite
email as 'not spam'


I've been seeing mailing list emails go to spam for a while now.
Nothing has changed with MailMan.  I think what we are seeing is the
beginning of the end of email as a viable communication medium.

I really need to come up with an alternative to the mailing list.
Perhaps some kind of forum system.  Suggestions are welcomed.


If you do go the forum route, please choose one with an email interface so that 
one can choose to receive an email for each forum message (it doesn't have to 
support posting via email though), and download archives of messages, so we 
don't go backwards. -- Darren Duncan




Re: [sqlite] Shell tool allows creation of column name ""

2017-10-04 Thread Darren Duncan
I believe that as long as quoted identifiers are allowed, every value allowed as 
a regular character string should also be allowed as an identifier, including 
the empty string.  (Length limits notwithstanding.)


Some best practices may be against empty-string names, but the system should 
allow them.  The empty string also serves as a good "default" name, just as the good 
default character string value would be empty or the good default number value 
would be zero.


I also believe that database tables with zero columns should be allowed, but 
that's a separate matter.


-- Darren Duncan

On 2017-10-04 9:51 AM, Simon Slavin wrote:

Given a .csv file which starts like this:

ID,code,name,sortOrder,,AlternativeName
1,RAH,Robert A. Heinlein,"Heinlein, Robert A.",Real Name,
2,IA,Isaac Asimov,"Asimov, Isaac",Real Name,
3,HH,Harry Harrison,"Harrison, Harry",,

Shell tool of this version

SQLite version 3.19.3 2017-06-27 16:48:08

Creates a table with the following columns:

CREATE TABLE Authors(
  "ID" TEXT,
  "code" TEXT,
  "name" TEXT,
  "sortOrder" TEXT,
  "" TEXT,
  "AlternativeName" TEXT
);

I don’t know the externally-governed rules.  I don’t know what rules the 
development team want to follow.  But I’m questioning whether it’s a good idea 
to create a column with that name.  If the dev team think it’s okay, that’s 
fine with me.

I would actually prefer it didn’t include the quote signs in the column names. 
 They’re not in the .csv file.  But that’s a different matter.




Re: [sqlite] Adding array parameter support?

2017-09-16 Thread Darren Duncan

On 2017-09-16 12:37 PM, Richard Hipp wrote:

On 9/16/17, Jens Alfke <j...@mooseyard.com> wrote:




On Sep 15, 2017, at 11:40 PM, Wout Mertens <wout.mert...@gmail.com> wrote:

This would be handy when specifying `INSERT INTO t(@?) VALUES(@?)` or
`SELECT * FROM t WHERE v IN (@?)`.


I'm definitely not an expert on the SQLite virtual machine, but I believe
that changing these parameters [or at least the number of parameters] would
alter the opcodes generated for the statement. Which would require
recompiling the statement. So it wouldn't be feasible to make them bindings.



That is correct.

For the second case ("SELECT * FROM t WHERE v IN ?") you could use the
carray table-valued function (https://www.sqlite.org/carray.html).
But for the INSERT statement, a recompile would be necessary, making
that impractical to do with binding.


I would expect though that if the column list is constant (the INTO is written 
normally) then the VALUES clause should be replaceable with an array-valued 
parameter, and no recompile should be needed because what specific fields to 
expect is known when parsing the SQL. -- Darren Duncan




Re: [sqlite] Adding array parameter support?

2017-09-16 Thread Darren Duncan

On 2017-09-16 12:01 PM, Jens Alfke wrote:

On Sep 15, 2017, at 11:40 PM, Wout Mertens <wout.mert...@gmail.com> wrote:

This would be handy when specifying `INSERT INTO t(@?) VALUES(@?)` or `SELECT
* FROM t WHERE v IN (@?)`.


I'm definitely not an expert on the SQLite virtual machine, but I believe that 
changing these parameters [or at least the number of parameters] would alter 
the opcodes generated for the statement. Which would require recompiling the 
statement. So it wouldn't be feasible to make them bindings.


I agree with you.  The first use case, the INSERT, given by Wout is not feasible 
and misunderstands how things work.


The second use case however is entirely reasonable, saying "SELECT * FROM t 
WHERE v in ?" where that single bind parameter is an array.


Or, if a bind parameter could correspond to an entire tuple/row, which is 
reasonable, then the ? could only replace the entire "VALUES(...)" and not the 
field list for INTO, such as to say "INSERT INTO t ?".


Generally speaking, the only places where a parameter makes sense is anywhere a 
single value literal of any type makes sense, such as anywhere one can use a 
VALUES clause or whatever can go in the second argument for an IN.


-- Darren Duncan



Re: [sqlite] Adding array parameter support?

2017-09-16 Thread Darren Duncan
I agree that being able to bind Array values is very useful, but this shouldn't 
need any new syntax in the SQL, the same ? should be acceptable; this shouldn't 
be any different than the distinction between binding a number vs text etc. -- 
Darren Duncan


On 2017-09-15 11:40 PM, Wout Mertens wrote:

I am wondering if the sqlite API could be changed to accommodate array
parameters, for example when using the literal `@?`.

This would be handy when specifying `INSERT INTO t(@?) VALUES(@?)` or `SELECT
* FROM t WHERE v IN (@?)`.

Apart from the ease of use benefit, I think this will help with prepared
statement reuse.

I'm not sure what the bind interface would look like, since each array
entry can have a different type. How about binding a value on the array
parameter index pushes onto the parameter array value? So to populate the
array value, call the bind functions on the same index, in array order.

It would also be nice to have a way to detect if the API supports array
parameters. How to do that, performing a query and see it fail, or is there
some function to request capabilities?

The literals could accommodate naming and numbering parameters with `@:AAA`
and `@?NNN` syntaxes.




Re: [sqlite] SQLite list spam from raypoker79

2017-09-12 Thread Darren Duncan

On 2017-09-12 6:56 PM, Darko Volaric wrote:

Is it possible that the spammer got a hold of the subscriber list?


My quick analysis of mail headers showed that the messages passed through the 
mailing list, or at least 3-4 mail relays in a row closest to my end were the 
same as for a legitimate message.  Also, the spam messages used and replied to 
messages sent only minutes before.  There could be a question of whether the 
mailing list validates that the sender of a message actually is the sender, or 
whether they let them in just because they say they are someone on the list. -- 
Darren Duncan




Re: [sqlite] Common subexpression optimization of deterministic functions

2017-09-12 Thread Darren Duncan

On 2017-09-12 11:41 AM, Jens Alfke wrote:

On Sep 12, 2017, at 11:09 AM, Warren Young <war...@etr-usa.com> wrote:

From my reading of the docs, I don’t see that that is the purpose of 
SQLITE_DETERMINISTIC:

   https://www.sqlite.org/deterministic.html 
<https://www.sqlite.org/deterministic.html>


Actually it is. "A deterministic function always gives the same answer when it has the same 
inputs." That is the definition of a mathematical (also called "pure") function. 
Such a function call can of course be factored out as a common subexpression.


The purpose is simply so that the SQLite internals know whether it is safe to 
use the user-defined function in certain query types.


The reason SQLITE_DETERMINISTIC was added is to allow such functions to be used 
in indexes and then matched with the same function in a query. That allows 
indexing things like JSON properties (via the deterministic json_value 
function.)


Practically speaking, any optimization to reduce actual calls to the 
deterministic function would have to happen at compile time, rewriting the query to 
explicitly keep the result of the function and use it several times, which is 
something users can also do by writing the query differently.


The fact is, any runtime-level smarts to prevent multiple calls to the function 
would have to involve creating and maintaining an index of inputs to outputs so 
that the DBMS knows whether the function was already called with particular 
inputs or not, so that would be an added complexity.
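
As a sketch of the index-matching use (this assumes the JSON1 extension is available; 
json_extract is the deterministic JSON function SQLite actually ships, and the table 
name is made up):

  CREATE TABLE docs (body TEXT);
  CREATE INDEX docs_name ON docs (json_extract(body, '$.name'));  -- index on a deterministic expression
  SELECT * FROM docs WHERE json_extract(body, '$.name') = 'abc';  -- the same expression can use the index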


-- Darren Duncan



[sqlite] marketing - The world's most popular open source database

2017-07-18 Thread Darren Duncan
I was reminded today that MySQL still prominently advertises themselves as "The 
world's most popular open source database", on their website and in their 
product announcements etc.


However, isn't that claim clearly wrong, given that SQLite for one has way more 
installations than MySQL does, and that's just counting SQL DBMSs?


Is it worth having some kind of official statement from the makers of SQLite 
about this, that MySQL is using false advertising?


Or is the idea that SQLite has the most installations not easily provable?

-- Darren Duncan


Re: [sqlite] please remove me from this mailing list

2017-06-17 Thread Darren Duncan

Mike, look at the footer of every message to the list including yours.

Click the link 
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users in that 
footer; the "listinfo" part might have been a clue.


The resulting page has a field you enter your email address in to unsubscribe 
yourself.


-- Darren Duncan

On 2017-06-17 5:54 PM, Mike Henry wrote:

or tell me how to remove myself


On Sat Jun 17 2017 14:04:02 GMT-0500 (Eastern Standard Time), Mike Henry 
<mhenryd...@yahoo.com> wrote:

thanks
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users





Re: [sqlite] RIGHT JOIN! still not supported?

2017-03-22 Thread Darren Duncan
Unless I misunderstand the desired result, this query would be better formulated 
using 2 left joins instead, like this:


  SELECT ...
  FROM Persons LEFT JOIN Pets ... LEFT JOIN PetAccessories ...

-- Darren Duncan

On 2017-03-22 2:22 AM, Chris Locke wrote:

An interesting discussion of it on StackOverflow...
http://stackoverflow.com/questions/689963/does-anyone-use-right-outer-joins

To give one example where a RIGHT JOIN may be useful.

Suppose that there are three tables for People, Pets, and Pet Accessories.
People may optionally have pets and these pets may optionally have
accessories.

If the requirement is to get a result listing all people irrespective of
whether or not they own a pet and information about any pets they own that
also have accessories.


All in all probably easiest to use a RIGHT JOIN

SELECT P.PersonName,
   Pt.PetName,
   Pa.AccessoryName
FROM   Pets Pt
   JOIN PetAccessories Pa
 ON Pt.PetName = Pa.PetName
   RIGHT JOIN Persons P
 ON P.PersonName = Pt.PersonName;

Though if determined to avoid this another option would be to introduce a
derived table that can be left joined to.
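
For concreteness, that derived-table variant might look like this (a sketch reusing 
the table and column names from the example above):

  SELECT P.PersonName, PtPa.PetName, PtPa.AccessoryName
  FROM   Persons P
         LEFT JOIN (SELECT Pt.PersonName, Pt.PetName, Pa.AccessoryName
                    FROM   Pets Pt
                           JOIN PetAccessories Pa
                             ON Pt.PetName = Pa.PetName) AS PtPa
           ON P.PersonName = PtPa.PersonName;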




Re: [sqlite] RIGHT JOIN! still not supported?

2017-03-21 Thread Darren Duncan
What benefit does a RIGHT JOIN give over a LEFT JOIN?  What queries are more 
natural to write using the first rather than the second?


While I can understand arguments based on simple mirror parity, eg we have < so 
we should have > too, lots of other operations don't have mirror syntax either.


-- Darren Duncan

On 2017-03-21 8:42 AM, Daniel Kamil Kozar wrote:

Seeing how SQLite was created in 2000, it seems like nobody has needed this
feature badly enough in the last 17 years to actually implement it.

Last I heard, patches are welcome on this mailing list. Don't keep us waiting.

Kind regards,
Daniel

On 20 March 2017 at 21:09, PICCORO McKAY Lenz <mckaygerh...@gmail.com> wrote:

i got this

Query Error: RIGHT and FULL OUTER JOINs are not currently supported
Unable to execute statement

still today in 21 ts century?

Lenz McKAY Gerardo (PICCORO)
http://qgqlochekone.blogspot.com




Re: [sqlite] Thread safety of serialized mode

2017-02-17 Thread Darren Duncan

On 2017-02-14 5:05 PM, Darren Duncan wrote:

On 2017-02-14 4:46 PM, Richard Hipp wrote:

 This is yet another reason why I say "threads are evil".  For
whatever reason, programmers today think that "goto" and pointers and
assert() are the causes of all errors, but threads are cool and
healthful.  Entire programming languages are invented (I'm thinking of
Java) to make goto and pointers impossible or to make assert()
impossible (Go) and yet at the same time encourage people to use
threads.  It boggles the mind 


There is nothing inherently wrong with threads in principle, just in how some
people implement them.  Multi-core and multi-CPU hardware is normal these days
and is even more the future.  Being multi-threaded is necessary to properly
utilize the hardware, or else we're just running on a single core and letting
the others go idle.  The real problem is about properly managing memory.  Also
giving sufficient hints to the programming language so that it can implicitly
parallelize operations.  For example, if you want to filter or map or reduce a relation
and have 2 cores, have one core evaluate half the tuples and another evaluate
the other half; this can be implicit simply by declaring the operation
associative and commutative and lacking side-effects or whatever. -- Darren
Duncan


Based on the responses I have seen, I think a lot of people have misunderstood 
what I was trying to say here.


When I said "threads", I was meaning that term in the most generic sense 
possible, in the same way that "concurrency" is generic.  My saying 
"multi-threaded" is saying to use appropriate tools, which come in a variety of 
names, so that you permit your workload to be spread over multiple CPUs or CPU 
cores at once, rather than constraining it to be run serially in a single core.


I was never advocating using a specific mechanism like a C language "thread".

-- Darren Duncan



Re: [sqlite] Thread safety of serialized mode

2017-02-15 Thread Darren Duncan

On 2017-02-15 2:40 AM, Clemens Ladisch wrote:

Cecil Westerhof wrote:

2017-02-15 8:58 GMT+01:00 Clemens Ladisch <clem...@ladisch.de>:

Threading is the most extreme method of achieving parallelism, and
therefore should be used only as the last resort.  (I'd compare it to
assembly code in this regard.)


At the moment I am not using it much and I am certainly not an expert, but
as I understood it, one of the reasons to use threading is that it costs a
lot fewer resources.


And just like with assembly code, you also have to count the time spent
writing it, and debugging the result.


Also, it's been a long time since hand-writing assembly code was any good for 
performance, unless you're a top-1% expert with a good reason.


If you want speed, write in C or something else that isn't assembly.  The odds 
are like 99% that the modern C compiler will generate faster code than you could 
ever write yourself in assembly, and it will be much less buggy.


Similarly with threads: for the vast majority of people, using other concurrency 
models with supported languages is better; they will still get the performance 
benefit of using multiple CPU cores but do it much more safely than if you are 
explicitly using "threads" in code.


-- Darren Duncan



Re: [sqlite] Thread safety of serialized mode

2017-02-14 Thread Darren Duncan

On 2017-02-14 4:46 PM, Richard Hipp wrote:

 This is yet another reason why I say "threads are evil".  For
whatever reason, programmers today think that "goto" and pointers and
assert() are the causes of all errors, but threads are cool and
healthful.  Entire programming languages are invented (I'm thinking of
Java) to make goto and pointers impossible or to make assert()
impossible (Go) and yet at the same time encourage people to use
threads.  It boggles the mind 


There is nothing inherently wrong with threads in principle, just in how some 
people implement them.  Multi-core and multi-CPU hardware is normal these days 
and is even more the future.  Being multi-threaded is necessary to properly 
utilize the hardware, or else we're just running on a single core and letting 
the others go idle.  The real problem is about properly managing memory.  Also 
giving sufficient hints to the programming language so that it can implicitly 
parallelize operations.  For example, want to filter or map or reduce a relation 
and have 2 cores, have one core evaluate half the tuples and another evaluate 
the other half, and this can be implicit simply by declaring the operation 
associative and commutative and lacking of side-effects or whatever. -- Darren 
Duncan




Re: [sqlite] LIKE and the like and SIMilarity

2017-01-11 Thread Darren Duncan

On 2017-01-11 10:11 AM, Jens Alfke wrote:

And while we’re at it, I’d like to see a case-insensitive string equality 
operator.


Yes, that shorthand can be useful.  But don't make it a pragma that overrides 
the meaning of "=", which would be a world of hurt; it needs a different name. 
-- Darren Duncan


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Datatype for prices (1,500)

2016-12-01 Thread Darren Duncan

Look, you want to store the same level of detail that a decimal(7,4) does?

Easy, you just multiply the conceptual number by 10,000 and it represents 
hundredths of a cent, the exact same precision you are using in MySQL.


Your examples would then be stored as 20 or 8 respectively.  And every other 
possible value you could store in the MySQL you can now store in SQLite, 
consistently.
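A minimal sketch of that idea in SQLite SQL (the table and column names here are 
invented for illustration):

CREATE TABLE prices (
  id INTEGER PRIMARY KEY,
  amount_e4 INTEGER NOT NULL          -- price stored in ten-thousandths of a euro
);
INSERT INTO prices (amount_e4) VALUES (20), (8);                   -- 0,0020 and 0,0008 Euro
SELECT printf('%.4f', amount_e4 / 10000.0) AS price FROM prices;   -- '0.0020' and '0.0008' for display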


-- Darren Duncan

On 2016-12-01 12:08 AM, Werner Kleiner wrote:

As I can see storing prices is a topic with different ways and
different solutions.

The advice to store prices in Cent or Integer:
Yes you can do: but how will you store hundredth-cent amounts or tenth
cent prices?
I have prices like 0,0020 or 0,0008 Euro

I think I have to manipulate the prices for viewing in the app with PHP.
Fact is: we can have two DBs, MySQL and SQLite. MySQL with decimal(7,4)
stores a price 1.500 from a textfield exact so.
If you want to show the price again in the app, there is nothing to do.
But switching to SQLite the price is viewed (and stored) as 1.5.
I know this is no error of SQLite. But I now have to differentiate between
SQLite and MySQL and have to optimize the SELECT, adding the 00
programmatically to view it correctly when using SQLite.

My original post was if there is a way for both DBs with same
behavior, but it seems not.

Thanks to all for help.


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Datatype for prices (1,500)

2016-11-30 Thread Darren Duncan

On 2016-11-30 3:14 PM, Keith Medcalf wrote:



On Wednesday, 30 November, 2016 17:58, Darren Duncan <dar...@darrenduncan.net> 
said:


On 2016-11-30 2:43 PM, Keith Medcalf wrote:



You were given a good recommendation save everything in "cents". Which
might also be a good solution depending on the underlying language you
use. as you can't store money in a float!


And why can you not store money in a float?


Unlike say measured data like temperature or rainfall, which are inherently
inexact, money is not measured and is considered exact information, at least in
typical cases of recording transactions, so needs to be represented in an exact
format.  You don't want your bank balance to be changing by different values
than the exact amount you insert or withdraw, do you? -- Darren Duncan


I do not follow.

There is no reason that money cannot be stored as IEEE754 floating point 
(32-bit and lesser formats such as fast-float are more problematic than 
64-bit or longer binary, but can easily handle more than 10 times all the money 
ever created), and it can certainly be stored in IEEE854 floating point.

Why exactly do you think there is a problem?  Or is it really just a generally 
observed wetware problem (in the understanding of numbers and arithmetic and 
how computers work)?


The problem is that IEEE floats are inexact.  Even with 64-bit versions, summing 
a lot of money values is eventually going to create rounding errors when 
conceptually summing should always produce exact results, which we care about 
when dealing with money. -- Darren Duncan
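A minimal illustration of the difference, directly in SQLite (the literal values are 
arbitrary):

SELECT 0.1 + 0.2 = 0.3;     -- 0 (false): binary doubles only approximate these decimals
SELECT 10 + 20 = 30;        -- 1 (true): the same amounts kept as integer cents stay exact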


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Datatype for prices (1,500)

2016-11-30 Thread Darren Duncan

On 2016-11-30 2:43 PM, Keith Medcalf wrote:



You were given a good recommendation save everything in "cents". Which
might also be a good solution depending on the underlying language you
use. as you can't store money in a float!


And why can you not store money in a float?


Unlike say measured data like temperature or rainfall, which are inherently 
inexact, money is not measured and is considered exact information, at least in 
typical cases of recording transactions, so needs to be represented in an exact 
format.  You don't want your bank balance to be changing by different values 
than the exact amount you insert or withdraw, do you? -- Darren Duncan


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Wish List for 2016: High Level API for Object Oriented Interactive Languages

2016-11-06 Thread Darren Duncan

On 2016-11-06 3:50 AM, vfclists . wrote:

On 8 January 2016 at 01:47, Stephen Chrzanowski <pontia...@gmail.com> wrote:

I personally wish the reverse.  I wish that these interpreted language
engines would incorporate the SQLite code directly into their own existence
to avoid having to write wrappers to begin with, except for those wrappers
where their method name is "DatabaseOpen" and I prefer "OpenDatabase".

SQLite has been around for years, and "R", PHP, Java, Perl, and all these
other interpreted new and old style languages have never bothered to
incorporate this public domain database engine within itself.  It isn't
like the maintainers of these languages don't know it doesn't exist, and if
they didn't, then my god they gotta get out from under that rock.  Most web
browsers use SQLite for crying out loud.

For a few years, I've considered taking the entire amalgamation and porting
it to Pascal (Delphi/FPC) so I have exactly zero reliance on DLLs.  No
worries about OBJ files, no worries about dependencies, I just include a
unit and my app is now database aware.  I know 386 assembly, and I can
always read up on other specifications if I needed to.  My problem is that
gaming gets in the way.

My 2016 wish list for SQLite is that all developers who write for, or use
directly or indirectly, any database engine out on the market has a safe
and happy 2016 and beyond.


Haven't the mORMot guys already done this -
http://synopse.info/fossil/wiki?name=SQLite3+Framework? I think they have a
means of compiling sqlite access directly into Delphi and Freepascal.


If anyone feels like replying to the quoted message or thread starter, dated 
2016 January 7 PST, please first look at the SQLite list archives for Jan 7-8, 
as a discussion already occurred then.  I personally think the subject has 
already run its course. -- Darren Duncan


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Is there a best practice for breaking up a large update?

2016-10-15 Thread Darren Duncan
You didn't say if the other tasks need write access to the database or if they 
need just read-only access.  If the others only need read-only access, let them use 
a copy of the database while you make your changes in another copy, then just swap 
the databases when done. -- Darren Duncan


On 2016-10-15 1:21 PM, Kevin O'Gorman wrote:

I'm new to this, and working in Python's sqlite3.  So be patient, and don't
expect me to know too much.  This is also a personal hobby, so there's
nobody else for me to ask.

I've got a database of a some tens of millions of positions in a board
game.  It may be over a billion before I'm done (working in an 11-TB
partition at the moment.) I want to process a major slice of these records,
in order to update other records.  I might want to break the work up
into chunks to allow other access to the database while this is going on.

So I have some questions:
(1) If I do all of my updates to a temporary table, does the database still
get locked?

(2) Is there another way to keep it available?  It happens for this
activity that consistency is not at risk.

(3) If it turns out that I can avoid locking, it there still a performance
reason to break the transaction into chunks, or would I be as well off
doing it as a single transaction (assuming I have disk space for the
journal).

(4) If I break it up into chunks, I can think of several ways to do that
and keep track of what's been done and what has not.  Is there a best
practice for this?

(5) Is there a forum specifically for folks doing Python database
programming?  It occurs to me that much of what I'm asking about is not
specific to SQLite.  But my code, for now at least, is going to be in
Python because it's the easiest way I can see, and my own efforts are the
real bottleneck in making progress.



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Protecting databases

2016-10-08 Thread Darren Duncan
Replying to myself.  This being said, going for the application-level encryption 
option would prevent you from using SQLite to do some useful things, such as 
being able to do a substring search for text in encrypted fields, since 
encrypted data is just a black box to it.  Typically the application-level 
solution is just encrypting a minimum number of fields, such as credit card 
numbers or SINs or passwords etc, that wouldn't be searched except for a whole 
value match.  To use SQLite normally as if it weren't encrypted but with it 
actually encrypted, you need the SEE or similar for that. -- Darren Duncan


On 2016-10-08 12:31 AM, Darren Duncan wrote:

So, if you don't want to pay the one-time fee for the SQLite Encryption
Extension et al to get database-level security, your only option really is to
encrypt individual fields at the application level that you want to protect, and
there are various free encryption libraries you can use for that, the specific
options depending on your choice of programming language.  But using those has
nothing to do with SQLite specifically, so your answer wouldn't be found on this
SQLite forum, but rather forums for your programming language. -- Darren Duncan

On 2016-10-08 12:18 AM, Damien Sykes-Lindley wrote:

Hi Darren,
You are correct in that genealogy is generally public. However more often than
not the information you want to publish may very well differ from what is in
your private database. You may have private notes telling you what you need to
do. You may have anecdotes shared by many family members that may need to be
kept private, at least until the involved parties are deceased or otherwise
choose to divulge it publicly themselves.
Even more importantly I may choose to add an address-book style feature in there
so you can easily group and contact appropriate family members for whatever
reason (special occasions etc). Of course that will be private.
Password protecting it is also good on many levels - if the database is to be
used online then it is needless to say that authentication would be required for
various people to view it. Even if I decide to make it local only, there is the
possibility that anyone sharing the computer or network may peruse the database
when you don't want them to.
Kind regards,
Damien.
-Original Message- From: Darren Duncan
Sent: Saturday, October 08, 2016 6:54 AM
To: SQLite mailing list
Subject: Re: [sqlite] Protecting databases

On 2016-10-07 10:46 PM, Damien Sykes-Lindley wrote:

Hi there,
My name is Damien Lindley, and I am, among other things, an independent,
hobbiest programmer. I have been blind since birth and thus all my computer
work relies on screenreader software and keyboard.
I have only just come through the brink of scripting into compiled programming
and so I guess I am still a beginner in many respects. However I don’t work in
C or C++, so most of my programming, if using a library, relies on precompiled
static or dynamic libraries. Or of course libraries that are written or
converted specifically for the language I work in (FreeBASIC).
Recently, I decided I needed to create a piece of software that could manage
family trees, since there seems to be a lack of screenreader accessible
genealogy managers out there. I was advised the best way to do this is to use
a database engine. I was also informed that SQLite is always a good choice for
databases.
I must admit, I have never worked with databases before and so now I am in the
process of learning SQL. However looking at the programming API for SQLite I
cannot see any means of password protecting the database without either buying
a commercial extension to do this, or recompiling SQLite with the
authentication extension. Due to financial constraints and unfamiliarity with
compiling in C both of these are not an option for me. Also I need a secure
way to do this, as I think I read that the SQLite version simply uses a table
to store the user data, which of course can be read and accessed elsewhere.
Are there any other options available for doing this?
Any help appreciated.
Thanks.
Damien.


Damien,

Why do you need to password protect the database?

Genealogy information is generally of the public record variety so there is
nothing sensitive to protect.  I am making genealogy software myself and so am
familiar with many of the relevant issues.

I would say please explain why you think you need password protection for this
project and then the real issue at hand can be addressed.

If yours is a network application and you don't want people on the open internet
accessing the database, fair enough, but that's an application-level
solution; what you're asking for here is that people who have direct access to
the SQLite database file are blocked by a password, and this I question.

-- Darren Duncan


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman

Re: [sqlite] Protecting databases

2016-10-08 Thread Darren Duncan
So, if you don't want to pay the one-time fee for the SQLite Encryption 
Extension et al to get database-level security, your only option really is to 
encrypt individual fields at the application level that you want to protect, and 
there are various free encryption libraries you can use for that, the specific 
options depending on your choice of programming language.  But using those has 
nothing to do with SQLite specifically, so your answer wouldn't be found on this 
SQLite forum, but rather forums for your programming language. -- Darren Duncan


On 2016-10-08 12:18 AM, Damien Sykes-Lindley wrote:

Hi Darren,
You are correct in that genealogy is generally public. However more often than
not the information you want to publish may very well differ from what is in
your private database. You may have private notes telling you what you need to
do. You may have anecdotes shared by many family members that may need to be
kept private, at least until the involved parties are deceased or otherwise
choose to divulge it publicly themselves.
Even more importantly I may choose to add an address-book style feature in there
so you can easily group and contact appropriate family members for whatever
reason (special occasions etc). Of course that will be private.
Password protecting it is also good on many levels - if the database is to be
used online then it is needless to say that authentication would be required for
various people to view it. Even if I decide to make it local only, there is the
possibility that anyone sharing the computer or network may peruse the database
when you don't want them to.
Kind regards,
Damien.
-Original Message- From: Darren Duncan
Sent: Saturday, October 08, 2016 6:54 AM
To: SQLite mailing list
Subject: Re: [sqlite] Protecting databases

On 2016-10-07 10:46 PM, Damien Sykes-Lindley wrote:

Hi there,
My name is Damien Lindley, and I am, among other things, an independent,
hobbiest programmer. I have been blind since birth and thus all my computer
work relies on screenreader software and keyboard.
I have only just come through the brink of scripting into compiled programming
and so I guess I am still a beginner in many respects. However I don’t work in
C or C++, so most of my programming, if using a library, relies on precompiled
static or dynamic libraries. Or of course libraries that are written or
converted specifically for the language I work in (FreeBASIC).
Recently, I decided I needed to create a piece of software that could manage
family trees, since there seems to be a lack of screenreader accessible
genealogy managers out there. I was advised the best way to do this is to use
a database engine. I was also informed that SQLite is always a good choice for
databases.
I must admit, I have never worked with databases before and so now I am in the
process of learning SQL. However looking at the programming API for SQLite I
cannot see any means of password protecting the database without either buying
a commercial extension to do this, or recompiling SQLite with the
authentication extension. Due to financial constraints and unfamiliarity with
compiling in C both of these are not an option for me. Also I need a secure
way to do this, as I think I read that the SQLite version simply uses a table
to store the user data, which of course can be read and accessed elsewhere.
Are there any other options available for doing this?
Any help appreciated.
Thanks.
Damien.


Damien,

Why do you need to password protect the database?

Genealogy information is generally of the public record variety so there is
nothing sensitive to protect.  I am making genealogy software myself and so am
familiar with many of the relevant issues.

I would say please explain why you think you need password protection for this
project and then the real issue at hand can be addressed.

If yours is a network application and you don't want people on the open internet
accessing the database, fair enough, but that's an application-level
solution; what you're asking for here is that people who have direct access to
the SQLite database file are blocked by a password, and this I question.

-- Darren Duncan


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Protecting databases

2016-10-07 Thread Darren Duncan

On 2016-10-07 10:46 PM, Damien Sykes-Lindley wrote:

Hi there,
My name is Damien Lindley, and I am, among other things, an independent, 
hobbiest programmer. I have been blind since birth and thus all my computer 
work relies on screenreader software and keyboard.
I have only just come through the brink of scripting into compiled programming 
and so I guess I am still a beginner in many respects. However I don’t work in 
C or C++, so most of my programming, if using a library, relies on precompiled 
static or dynamic libraries. Or of course libraries that are written or 
converted specifically for the language I work in (FreeBASIC).
Recently, I decided I needed to create a piece of software that could manage 
family trees, since there seems to be a lack of screenreader accessible 
genealogy managers out there. I was advised the best way to do this is to use a 
database engine. I was also informed that SQLite is always a good choice for 
databases.
I must admit, I have never worked with databases before and so now I am in the 
process of learning SQL. However looking at the programming API for SQLite I 
cannot see any means of password protecting the database without either buying 
a commercial extension to do this, or recompiling SQLite with the 
authentication extension. Due to financial constraints and unfamiliarity with 
compiling in C both of these are not an option for me. Also I need a secure way 
to do this, as I think I read that the SQLite version simply uses a table to 
store the user data, which of course can be read and accessed elsewhere.
Are there any other options available for doing this?
Any help appreciated.
Thanks.
Damien.


Damien,

Why do you need to password protect the database?

Genealogy information is generally of the public record variety so there is 
nothing sensitive to protect.  I am making genealogy software myself and so am 
familiar with many of the relevant issues.


I would say please explain why you think you need password protection for this 
project and then the real issue at hand can be addressed.


If yours is a network application and you don't want people on the open internet 
accessing the database, fair enough, but that's an application-level 
solution; what you're asking for here is that people who have direct access to 
the SQLite database file are blocked by a password, and this I question.


-- Darren Duncan

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite 3.15.0 scheduled for 2016-10-14

2016-09-22 Thread Darren Duncan

On 2016-09-22 12:16 PM, Petite Abeille wrote:



On Sep 22, 2016, at 9:04 PM, Richard Hipp <d...@sqlite.org> wrote:

(https://www.sqlite.org/draft/releaselog/3_15_0.html).


Oh! Row Values! Nice! :)

https://www.sqlite.org/draft/rowvalue.html


I second that, it's a valuable feature to have.
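For reference, a minimal sketch of the 2+ element row-value forms described in that 
draft page (the table and column names are invented):

CREATE TABLE t (a INTEGER, b INTEGER);
INSERT INTO t VALUES (1, 2), (1, 3), (2, 1);
SELECT * FROM t WHERE (a, b) = (1, 2);    -- row-value equality
SELECT * FROM t WHERE (a, b) > (1, 2);    -- vector (lexicographic) comparison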

The purist in me thinks it should also be possible to have rows with exactly one 
or zero elements, not just 2+ as described above.


Syntactically, () could be a zero-element row, and distinguishing a 
single-element row could be done with, say, a trailing comma; e.g. "(42,)" is a 
single-element row while "(42)" is simply the value 42, since parens are also 
used for forcing evaluation precedence of expressions.


-- Darren Duncan

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Does sqlite3 have variables like Oracle?

2016-09-11 Thread Darren Duncan
You can also access that value within your Perl code, in a DBMS-agnostic 
fashion, with the appropriate DBI routine:


http://search.cpan.org/dist/DBI/DBI.pm#last_insert_id
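For reference, the function SQLite itself exposes at the SQL level for the same value 
is last_insert_rowid(); a minimal illustration:

SELECT last_insert_rowid();   -- rowid of the most recent successful INSERT on this connection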

-- Darren Duncan

On 2016-09-11 2:59 PM, mikeegg1 wrote:

I forgot I could use it inside the shell. Thanks.


On Sep 11, 2016, at 16:58, Simon Slavin <slav...@bigfraud.org> wrote:
On 11 Sep 2016, at 10:53pm, mikeegg1 <mikee...@mac.com> wrote:


Thanks. I thought not. I’m doing this from PERL on a Mac and don’t know if I 
can fully access last_row_id().


It's not a C function, it's a function you can use inside SQL commands.  There 
shouldn't be a problem with it.

INSERT INTO myTable VALUES ('Hastings',17);
UPDATE anotherTable SET theRowId = last_row_id() WHERE placeName = 'Hastings';


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] "Responsive" website revamp at www.sqlite.org

2016-09-05 Thread Darren Duncan

On 2016-09-05 1:55 PM, Richard Hipp wrote:

Most of the world views the internet on their phone now, I am told,
and websites are suppose to be "responsive", meaning that they
reformat themselves to be attractive and useful for the majority who
view them through a 320x480 pixel soda-straw.  In an effort to conform
to this trend, I have made some changes to the *draft* SQLite website
(http://sqlite.org/draft) Your feedback on these changes is
appreciated.  Please be sure to try out the new design both on a
narrow-screen phone and on a traditional desktop browser.  The goal is
to provide a more mobile-friendly website without reducing the
information content available to desktop users.


Superficially the altered site looks like an improvement.  It uses the simple 
and standard "viewport" declaration to achieve the low-hanging fruit.  Text is 
readable on my phone, as on my computer, at its default size when the page loads. 
The site still seems navigable.  I didn't go very far though. -- Darren Duncan


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] web interface down

2016-08-30 Thread Darren Duncan

On 2016-08-30 2:22 PM, Wolfgang Enzinger wrote:

I hope this is the right place ...

The web interface for this mailing list
(http://news.gmane.org/gmane.comp.db.sqlite.general) seems to be down for
quite a while now, I'm getting timeouts constantly. The NNTP interface,
however, works fine. Anyone can do anything about it?


As was in the news a month back, Gmane shut down its web interface.

Hopefully every list that relied on it also has other archives.

-- Darren Duncan

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] newbie has waited days for a DB build to complete. what's up with this.

2016-08-05 Thread Darren Duncan

On 2016-08-04 7:27 AM, Jim Callahan wrote:

Steps
Agree with Darren Duncan and Dr. Hipp you may want to have at least 3
separate steps
(each step should be a separate transaction):

1. Simple load
2. Create additional column
3. Create index

Have you pre-defined the table you are loading data into? (step 0 CREATE
TABLE)

If "Step 1 Simple Load" does not complete; then may want to load a fixed
number of rows into separate tables (per Darren Duncan)  and then combine
using an APPEND
or a UNION query (doing so before steps 2 and 3).


To be clear, my proposal of only loading a subset of rows was just intended for 
debugging the performance issues the OP was having, and to try to figure out what 
is causing the problem without having to wait too long for larger sets to 
complete.  I wasn't proposing splitting the load into separate databases and 
unioning later; rather, the attempt database would be thrown away after each 
trial. -- Darren Duncan

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] newbie has waited days for a DB build to complete. what's up with this.

2016-08-03 Thread Darren Duncan
One way to get a clue is to try doing this in stages.  First start over and 
import a much smaller amount of data, say just a 1GB fraction, and see if that 
completes, and if it does, how long it takes and other factors like disk and 
memory use etc.  If 1GB doesn't work, start smaller yet, until you have a size that 
does work.  When something works, next try something double the size and see if 
the resource usage is about linear or not.  And double it again, etc, and see if 
you can get time/space progression figures.  That should help you predict how 
long the full 187GB would take were it successful.  Or otherwise at some point 
you should see the smallest point where the hang occurs. -- Darren Duncan


On 2016-08-03 8:00 PM, Kevin O'Gorman wrote:

I'm working on a hobby project, but the data has gotten a bit out of hand.
I thought I'd put it in a real database rather than flat ASCII files.

I've got a problem set of about 1 billion game positions and 187GB to work
on (no, I won't have to solve them all) that took about 4 hours for a
generator program just to write.  I wrote code to turn them into something
SQLite could import.  Actually, it's import, build a non-primary index, and
alter table to add a column, all in sqlite3.

The database was growing for about 1-1/2 days.  Then its journal
disappeared, the file size dropped to zero, but sqlite3 is still running
100% CPU time, now for a total of 3800+ minutes (63+ hours).  The database
is still locked, but I have no idea what sqlite3 is doing, or if it will
ever stop.  All partitions still have lots of space left (most of this is
running in a RAID partition of 11 TiB).  Here's what I gave to sqlite3 on
my Linux system:


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] The upcoming "pi" release of SQLite

2016-07-23 Thread Darren Duncan

On 2016-07-23 8:16 AM, Richard Hipp wrote:

The next release of SQLite will be the "pi" release - version 3.14.
It will probably occur within the next two weeks.

Draft change log:  https://www.sqlite.org/draft/releaselog/3_14_0.html

Code snapshot: 
https://www.sqlite.org/snapshot/sqlite-snapshot-201607230522.tar.gz

Testing and (especially) documentation work is on-going.  Please try
out the snapshot.  Look over the changes.  Speak up loudly and quickly
if you have any issues.  If you do not want to post to this mailing
list, you can send feedback directly to my email address shown below.


This looks to be a great release, thanks.  In particular I found the new virtual 
tables and table valued functions features interesting.


I have a question/request:  I didn't see any mention in the release notes about 
how the SQLite C include guards were changed to better conform with the 
standard.  Could you include an item in the release notes about this?  That 
would be a useful thing to draw the attention of actual or potential users in 
stricter environments where it matters.


-- Darren Duncan

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Bug: SQLite's include guards are reserved identifiers

2016-07-14 Thread Darren Duncan

On 2016-07-12 9:32 PM, Cory Nelson wrote:

On Fri, Jul 8, 2016 at 6:56 PM, Daniel Seither <p...@tiwoc.de> wrote:

Using clang 3.8 with -Wreserved-id-macro (enabled by -Weverything), I
just noticed that SQLite uses include guards with a leading underscore,
for example _SQLITE3_H_ in the amalgamation. According to the C
standard, this is a reserved identifier, leading to undefined behavior:


While uncommon, it is not unheard of for apps to use include guards to
check for a library's existence. While it is indeed not in compliance,
in practice fixing this may affect existing SQLite users.


My recommendation on this matter is to deprecate but retain the current include 
guards that are in violation of the standard, and ALSO add new, redundant 
include guards in a manner that complies with the standard.


Then update or add relevant documentation for application developers 
recommending that they should be checking the new include guard names so that 
their code would keep working properly should the old guard names ever be 
removed in the future.


By doing that, existing users and unchanged codebases of SQLite should not see 
anything break, but users would also have the opportunity to be proactive and 
apply the minor changes necessary for their code to continue working with future 
versions of SQLite that might remove the old include guards.


If the SQLite developers agree with this, then the main outstanding questions 
are what to name the new include guards and how best to document the change.


Thoughts on that proposal?

-- Darren Duncan

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Currency & SQLite

2016-07-05 Thread Darren Duncan

Ertan,

You should read https://www.sqlite.org/datatype3.html#section_2 if you haven't 
already.


SQLite has exactly 1 floating-point type, which is what you get no matter if you 
say float/double/etc, and it is 8 bytes, which corresponds to a Double in most 
programming languages.


You can't change what SQLite uses other than explicitly not using a 
floating-point type; you have integer, float, blob, text, and that's it.


Therefore, if you use anything other than a double in your app, you will be 
converting when storing or retrieving, which makes things slower and introduces 
loss, hence I say use Double everywhere.


-- Darren Duncan

On 2016-07-05 4:34 PM, Ertan Küçükoğlu wrote:

Hardware that the application will run on is a ~1GHz Atom CPU, ~1GB RAM, and a regular laptop HDD 
(no SSD). From time to time, there will be calculations, network file transfers, and DLL 
function calls (all three) running simultaneously. The application has Timers and 
Socket listening components on it. My concern is to avoid a situation which 
"seems" like an application freeze.

On the other hand, will it be any help to increase accuracy to use Currency (8 
bytes fixed-point data type) in my application for Price and Total fields, 
leave Amount field as Double?

Thanks.

-Original Message-
From: sqlite-users-boun...@mailinglists.sqlite.org 
[mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of Darren Duncan
Sent: Wednesday, July 6, 2016 1:20 AM
To: SQLite mailing list <sqlite-users@mailinglists.sqlite.org>
Subject: Re: [sqlite] Currency & SQLite

On 2016-07-05 3:02 PM, Ertan Küçükoğlu wrote:

I am using Delphi 10 for programming. Current variables are defined as Double (8 bytes 
real), saved in Database using something "DataSet.FieldByName('Amount').AsFloat := 
My_double_variable" However, I am now considering to convert all Double datatypes to 
Single (4 bytes real) for performance issues.


Do NOT change to using Single, always use Double, end to end.

First of all, Double is what SQLite uses internally for floats, per its
documentation:

"The value is a floating point value, stored as an 8-byte IEEE floating point 
number."

So if you're already using Double in your app then staying that way should 
maintain accuracy, especially when you want accuracy for more than say 3-5 
decimal digits in total.

Secondly, for financial applications, accuracy trumps performance.  Converting 
to Single at any time is just going to cause you trouble.

And really, are you sure using Single would actually help performance?  Have 
you measured it in a real workload?  More likely on modern devices it won't be 
any faster and you will have lost your accuracy for nothing.

-- Darren Duncan


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Currency & SQLite

2016-07-05 Thread Darren Duncan

On 2016-07-05 3:02 PM, Ertan Küçükoğlu wrote:

I am using Delphi 10 for programming. Current variables are defined as Double (8 bytes 
real), saved in Database using something "DataSet.FieldByName('Amount').AsFloat := 
My_double_variable" However, I am now considering to convert all Double datatypes to 
Single (4 bytes real) for performance issues.


Do NOT change to using Single, always use Double, end to end.

First of all, Double is what SQLite uses internally for floats, per its 
documentation:


  "The value is a floating point value, stored as an 8-byte IEEE floating point 
number."


So if you're already using Double in your app then staying that way should 
maintain accuracy, especially when you want accuracy for more than say 3-5 
decimal digits in total.


Secondly, for financial applications, accuracy trumps performance.  Converting 
to Single at any time is just going to cause you trouble.


And really, are you sure using Single would actually help performance?  Have you 
measured it in a real workload?  More likely on modern devices it won't be any 
faster and you will have lost your accuracy for nothing.


-- Darren Duncan

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Currency & SQLite

2016-07-05 Thread Darren Duncan

Hello Ertan,

On 2016-07-05 1:59 PM, Ertan Küçükoğlu wrote:

I have following data to store:

My currency values only has 2 decimal digits. (Sample: 1.23)

My amount values goes around 5-6 decimal digits. (Sample: 1.123456)


So what do the 2 fields actually represent?  Are Amount and Money both amounts 
of currency?  If so, why are they stored to different precisions?  The field 
names should be more precise.



If I am to use below SQL command:

CREATE TABLE IF NOT EXISTS ACCOUNTING(

   ID Integer NOT NULL PRIMARY KEY AUTOINCREMENT,

   AmountField float not null,

   MoneyField float not null

);

Will it be fine to store these values, and I will retrieve exactly what I
save?

Stored 1.123456 and retrieved 1.123456 exactly, no rounding or anything.


A float field will generally not store a fractional decimal value exactly; what it 
will store is just a binary approximation.  However, if the precision you need is low 
enough relative to the size of the float then it may appear to work.


A key thing here is how you represent the values while they are being 
transmitted between SQLite and your application.  If you are storing them using 
runtime-bound placeholders rather than literal SQL, and your placeholders are 
also floats of the same precision SQLite uses, then you can be guaranteed that 
whatever your application gave to SQLite, it will get back.  If the values are 
inserted as part of the SQL string then it will be rounded.


It's also important to know if you are doing any math with these values in SQL, 
e.g. sums, or if you are just storing and retrieving.  Inexact values would bite 
you mainly if you do math.



One thing I cannot do in my case is to store money values as integers
multiplied by 100.


And why not?  That often is still the best advice.

Really it's a matter of conception: what is the unit of the currency?

If say your currency was dollars, just say that the field is storing the number 
of cents rather than the number of dollars, and do all your math assuming it is 
adding/multiplying/etc counts of cents.  Nothing wrong with that.  Cents are 
still just as valid currency in the real world.


Another thing you can do is store the numbers as text fields instead, which 
works perfectly in situations where you input them literally in a SQL string; 
in appearance you can put the decimal point wherever you want and the database 
will remember it exactly, and this even works with hundreds of decimal digits or 
something over the bounds of the numeric types.  But if you do math in the 
database then you may get rounding errors, whereas if you do math in your app then 
your app may have an exact numeric type.
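A minimal sketch of both alternatives just described, with invented table and column 
names:

-- exact integer storage, counting cents:
CREATE TABLE ledger (amount_cents INTEGER NOT NULL);
INSERT INTO ledger VALUES (123);                       -- 1.23 in the major unit
SELECT sum(amount_cents) / 100.0 FROM ledger;          -- scale only when displaying

-- exact textual storage, math done in the application:
CREATE TABLE quotes (price TEXT NOT NULL);
INSERT INTO quotes VALUES ('1.123456');
SELECT price FROM quotes;                              -- comes back byte-for-byte as entered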



I appreciate all advises.

Thanks.

Ertan Küçükoğlu


-- Darren Duncan


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] page checksums (was Re: Bad db feature request)

2016-06-29 Thread Darren Duncan

On 2016-06-29 8:12 AM, Roger Binns wrote:

On 29/06/16 07:51, Dominique Devienne wrote:

I wish for the day SQLite has page checksums to detect any such random
corruption.


Agreed.  The SQLite team rejected doing so:

   http://www.sqlite.org/src/tktview/72b01a982a84f64d4284


Yes, I know, it's a format change, and will likely slow things down a
little, but it's worth it IMHO.


Note that it isn't as big a change as you think, and could be done
today.  SQLite already allows a portion of each page to be used for
other purposes, with the big user being encryption.


I notice that the ticket rejection didn't include any rationale or explanation, 
or I didn't find any when I looked.  What was the rationale for rejecting that 
ticket?


I believe that SQLite having page checksums would be a good idea whose time has 
come.  Even Postgres, from which SQLite takes a lot of influence, has had that 
feature for the last 2.5 years.


This should be doable as an optional per-file feature, just as some other features 
such as foreign keys are optional.  If the feature is used, that is a file format 
break, so older SQLite versions won't attempt to modify such a file; if a file 
doesn't use the feature then older SQLite versions will still work with it.


-- Darren Duncan

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to use "cursors" in c#

2016-06-28 Thread Darren Duncan
As a further response, regarding Martina's statement "insert ... new row and get 
back the id of this last inserted row", I believe that this likely reflects a 
poor design.  Wherever possible, every field of a row to insert, including its 
identifiers, should be known BEFORE inserting the row; that way, you don't 
need to ask afterwards what the id is, because you already know it is the value 
you told it to use when doing the insert. -- Darren Duncan


On 2016-06-27 11:58 PM, Hick Gunter wrote:

Do not use SQLite for concurrent access over a network connection. Locking 
semantics are broken for most network filesystems, so you will have corruption 
issues that are no fault of SQLite. If your application requires concurrent 
network access, you should be using either a network client/server DBMS or 
programming your own Client/Server pair with the server process calling the 
SQLite API to modify the db file held on the server's local filesystem.

SQLite uses file level locking as documented on the SQLite main page.

A cursor (CURrent Set Of Records) is a concept designed for reading data, not writing. SQLite does not 
support navigation other than retrieving the result set in the order specified in the select statement. No 
"paging" backwards. It is possible to simulate a cursor, but the "simple" solutions may 
be imperformant for large result sets (e.g. producing half the result set to display a small number of 
records on a "middle page").

I assume you are looking for "transactions". SQLite supports both explicit transactions 
(BEGIN ... COMMIT/ROLLBACK) and "automatic" transactions (each statement is its own 
transaction). If you are just inserting/modifying a set of rows (this also includes a single row) 
within a single table, automatic transactions are sufficient. If you require multiple changes to 
multiple rows in one or more tables to complete together or not at all, then you need an explicit 
transaction.

Gunter

-Ursprüngliche Nachricht-
Von: sqlite-users-boun...@mailinglists.sqlite.org 
[mailto:sqlite-users-boun...@mailinglists.sqlite.org] Im Auftrag von Martina 
Pasini
Gesendet: Montag, 27. Juni 2016 20:51
An: sqlite-users@mailinglists.sqlite.org
Betreff: [sqlite] How to use "cursors" in c#

Hello to Everyone!

I am sorry to bother you, but I am new to DB and SQLite and I have a doubt.

I am programming in c# and I have to insert into a .sqlite file a new row and get back 
the id of this last inserted row. This file "unfortunately"
might be located on an internal network and other users might try to add other 
rows in the meantime.

I have not studied yet how to manage locking etc (will I need it only when 
editing a row or also when adding a new one?)

For this case, when I add a new row, could it be sufficient to use a cursor or 
something similar? Could you give me some advice/links I should read to 
understand how to manage this process?

Thank you very much for you kind attention!

Best Regards,

Martina


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sweet 16

2016-05-29 Thread Darren Duncan

On 2016-05-29 10:28 AM, Richard Hipp wrote:

The first check-in of SQLite code occurred 16 years ago today.
https://www.sqlite.org/src/timeline?c=2000-05-29


Congratulations on that!  Thank you for all of that work.  Stay strong and keep 
moving on.


Coincidentally, that first commit date is within a month or so of the first time 
I ever released open-source code to the public (on CPAN).


-- Darren Duncan

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Postgres vs MySQL (was Re: Messages posted on Nabble not getting to list)

2016-05-29 Thread Darren Duncan

On 2016-05-28 12:49 PM, r.a.n...@gmail.com wrote:

@Daren

Any reasons for the thumbs down on MySQL? Their workbench is better than Toad 
...


On May 27, 2016, at 10:00 PM, Darren Duncan <dar...@darrenduncan.net> wrote:


On 2016-05-27 2:28 PM, Balaji Ramanathan wrote:
But when I was debating between MySQL and
SQLite for my project, I almost didn't choose SQLite because of the archaic
look and feel of the sqlite.org website and support options available
there.


For the love of all that's good, don't choose MySQL for anything.  If you want 
something bigger than SQLite, look at Postgres instead of MySQL.  As a bonus, 
the Postgres documentation is much better. -- Darren Duncan


r.a.nagy,

My judgement is based primarily on the DBMS server itself, which is the product 
being compared, not on separate client programs.  (And each DBMS has multiple 
clients that work with it.)


For practical decision purposes, Postgres and MySQL both compete in the same 
space as each other, multi-user client-server DBMSs.  This is a different space 
than SQLite competes in.  If you're doing a task appropriate to the space SQLite 
is in, I recommend using SQLite.  If you're doing a task appropriate to a 
multi-user client-server DBMS, I recommend for Postgres and against MySQL.


Comparing the two...

Postgres is a high-quality project with a semi-regular predictable release 
schedule and has a strong emphasis on good quality and good security, on being as 
bug-free as possible and on not losing data.  Postgres has an order of magnitude 
more useful features, and gains a lot more that people can notice each 
year.  Its design decisions make more practical sense and it strives to be a lot 
more compatible with the SQL standard where that makes sense.  In summary, it 
has tons more features people actually use and find valuable, and it has a 
strong emphasis on keeping the quality up.  When relatively rare bugs or 
security issues do occur, they are fixed promptly and clear documentation is 
given on how to mitigate or recover from the problem.  Postgres also has better 
general user documentation.  Postgres even has useful features that Oracle 
doesn't have.  Postgres also has expert level support from multiple companies, 
and its BSD license (almost like public domain but not quite) means it can be 
used in any applications without a special license.


MySQL is a lower-quality project that has historically focused more on user 
bases that don't put as much importance on the quality or persistence of their 
data and want to use the DBMS more as a dumb data store with most work done 
application-side.  MySQL has an order of magnitude fewer useful features, often 
lacking things that are quite valuable in practice, and a lot of the features it 
does have carry various gotchas or strange behaviors.  As such, performing a lot 
of tasks is more difficult or not possible without excessive circumlocution, and 
users are more likely to see wrong answers or data loss due to either unexpected 
behaviors or bugs.  MySQL is notorious for shipping half-baked code to 
production, see version 5.1 in particular.  MySQL is also notorious for going a 
long time claiming that particular important features are not important.  This 
includes a number of features that even SQLite has long had.  MySQL also has 
gone a long time without perceptively adding new features with much 
significance.  Their main advance in features was between versions 3 and 5.1, 
the last being a decade ago, and since then have mostly talked more about things 
like speed enhancements and not much on interesting features.  The MySQL 
documentation isn't as good.  MySQL has a twin GPL/proprietary license so you 
need to pay Oracle lots of money if you want to use it in some applications, and 
Oracle doesn't have as much motivation to make it a contender when they have 
their other bread and butter DBMS to compete with.


So MySQL is decently suited say for running stuff like blogs or message boards, 
while if you want to do something important like run a financial accounting 
system or medical database, Postgres should be used instead.


This assumes applications warranting client-server.  SQLite should also be used 
instead where it makes more sense, and SQLite is much more thorough about 
quality and being bug free than Postgres, with its 100% code coverage test suite 
and all.


-- Darren Duncan

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Messages posted on Nabble not getting to list

2016-05-27 Thread Darren Duncan

On 2016-05-27 2:34 AM, Tim Streater wrote:

On 27 May 2016 at 08:56, Darren Duncan <dar...@darrenduncan.net> wrote:


On 2016-05-26 9:00 PM, Balaji Ramanathan wrote:

The main advantage of forums, and I follow a bunch of them, is that I choose
when I want to stop my regular day job and be distracted by them rather than
emails coming in and distracting me all the time.


That's not an argument for web forums, rather that's an argument for not using
your work email to subscribe to non-work discussion lists; use a non-work email
for the discussion lists instead.  You can also configure your email client to
only check email when you tell it to rather than constantly. -- Darren Duncan


Filter the sqlite mails into their own mailbox. They can then be read at a 
convenient moment.


That's exactly what I do.  I have my PERSONAL email address that I subscribe to 
a few dozen email lists with, and have automatic filters set up to shuffle each 
list into its own mailbox.  When I have separate work-provided email addresses, 
I don't use them for anything but internal communication in that workplace; I 
don't subscribe to email lists with them.



I certainly don't want a whole lot of sub-forums and the like, each of which 
has to be checked in case there's something interesting there.


I don't want a web forum either, or at least not as a replacement for this email 
list.


As for that other use case someone mentioned, where some people just have one-off 
questions, forums provided by others such as Stack Overflow handle the job fine. 
For official SQLite maintainers' forums, this email list is good.


-- Darren Duncan

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Messages posted on Nabble not getting to list

2016-05-27 Thread Darren Duncan

On 2016-05-27 2:28 PM, Balaji Ramanathan wrote:

But when I was debating between MySQL and
SQLite for my project, I almost didn't choose SQLite because of the archaic
look and feel of the sqlite.org website and support options available
there.


For the love of all that's good, don't choose MySQL for anything.  If you want 
something bigger than SQLite, look at Postgres instead of MySQL.  As a bonus, 
the Postgres documentation is much better. -- Darren Duncan


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Messages posted on Nabble not getting to list

2016-05-27 Thread Darren Duncan

On 2016-05-26 9:00 PM, Balaji Ramanathan wrote:

The main advantage of forums, and I
follow a bunch of them, is that I choose when I want to stop my regular day
job and be distracted by them rather than emails coming in and distracting
me all the time.


That's not an argument for web forums, rather that's an argument for not using 
your work email to subscribe to non-work discussion lists; use a non-work email 
for the discussion lists instead.  You can also configure your email client to 
only check email when you tell it to rather than constantly. -- Darren Duncan


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Searching this mailing lsit

2016-05-24 Thread Darren Duncan

On 2016-05-24 12:23 PM, Simon Slavin wrote:

On 24 May 2016, at 7:46pm, Kasajian, Kenneth 
<kenneth.kasaj...@schneider-electric.com> wrote:


It's funny how many messages there are here about the signature of my e-mail 
(which I cannot control)


Then please stop posting to public fora from that account.


Yes, exactly.  Unless they're communicating formally on behalf of an 
organization, there is no need to carry on public discourse using an email 
account provided by an organization; a separate personal account works just 
fine, and should also outlast being a member of the organization. -- Darren Duncan


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Sqlite incompatibility with Postgres

2016-05-19 Thread Darren Duncan
On 2016-05-19 9:17 PM, dandl wrote:
> Every aggregation function is at least second order: a function that applies
> a function to the set. So for MIN the function is 'less than', for SUM() the
> function is 'plus' and so on. In Andl aggregation functions are provided by
> fold(), which takes a function as an argument.

Actually, MIN is still fundamentally first-order itself.  The dyadic function 
call "x min y" returns either x or y depending on how they compare.  The list 
form is then repeated application of the binary min().  This is directly 
comparable to your example of list plus/sum, which is repetition of the dyadic 
"x + y".  List MIN is NOT a repeated application of "x less than y". -- Darren 
Duncan
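In SQLite terms, the same distinction shows up as the two forms of min(): the scalar 
two-argument function (the dyadic "x min y") versus the aggregate, which amounts to 
repeated application of it over a column (the table name below is invented):

SELECT min(5, 3);                               -- scalar/dyadic form: 3
CREATE TABLE s (n INTEGER);
INSERT INTO s VALUES (1),(3),(5),(7),(42),(99),(83),(11);
SELECT min(n) FROM s;                           -- aggregate form over the set: 1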



[sqlite] Sqlite incompatibility with Postgres

2016-05-18 Thread Darren Duncan
On 2016-05-18 2:19 AM, Stefan Evert wrote:
>> On 18 May 2016, at 02:41, dandl  wrote:
>>
>> Then you are mistaken.
>> 1. Consider the following set S of integers: 1,3,5,7,42,99,83,11.
>> 2. Divide it into two subsets such that S1 is of size 3 and all members of
>> S1 are larger than those in S2.
>>
>> A sort is unnecessary -- there are many algorithms that can do that purely
>> based on set logic, the ability to compare members and the ability to
>> determine the cardinality of a set.
>
> I think the point is that your query may not have a solution if there are 
> ties, i.e. the ordering is not complete:
>
> Consider this set of integers: 1,3,5,7,42,99,83,11,83,83
>
> In this case, there is no subset S1 of size 3 that satisfies your criterion.  
> In an SQL query, the set returned by LIMIT 3 would not be defined uniquely.

What you've both said is essentially the point I was trying to make.

1.  If you want a deterministic portable result for all valid invocations of 
LIMIT, you need to either constrain it to use with a totally ordered set (it 
would be an error to use it on something with duplicates) in order to guarantee 
the number of rows specified in the LIMIT argument, or you need to possibly 
return a different number of rows than the LIMIT argument.

2.  Otherwise, if exactly the number of specified rows must be returned without 
other restrictions, then the result is possibly indeterminate.

The options with point 1 are not only deterministic but fully relational.

-- Darren Duncan



[sqlite] Sqlite incompatibility with Postgres

2016-05-16 Thread Darren Duncan
On 2016-05-15 11:17 PM, dandl wrote:
>>> All true.  But it brings up a question.  Suppose the following:
>>>
>>>>> first   second
>>>>> -----   ------
>>>>> Mark    Spark
>>>>> Emily   Spark
>>>>> Mary    Soper
>>>>> Brian   Soper
>>>
>>> SELECT first,second FROM members ORDER BY second LIMIT 3
>>>
>> I think a proper solution for this then is to treat the LIMIT as
> approximate
>> rather than exact; it indicates a desire rather than a promise.
>>
>> In the scenario you describe, the query should return either 2 rows or 4
>> rows, so that ALL of the rows whose second field value of "Spark" are, or
> are
>> not, returned.  Projecting this to there not being an ORDER BY clause,
> either
>> all rows are returned or zero rows are returned.  Thus the result is
>> deterministic.
>>
>> Whether returning above or below the limit is done, is a separate thing to
>> decide, though I suggest returning above is better.
>
> I would say that this is an invalid query. As already applies for DISTINCT
> and GROUP BY, the query parser should require that every column in the
> column list should appear in the ORDER BY list. If it does not, then the
> result is indeterminate.
>
> Sqlite already permits indeterminate queries, but other SQL engines do not.

Not EVERY column, just a superkey of the result columns should suffice.
-- Darren Duncan



[sqlite] Sqlite incompatibility with Postgres

2016-05-16 Thread Darren Duncan
On 2016-05-15 10:50 PM, J Decker wrote:
> On Sun, May 15, 2016 at 10:02 PM, Darren Duncan  
> wrote:
>> On 2016-05-15 9:56 PM, J Decker wrote:
>>>
>>> On Sun, May 15, 2016 at 9:29 PM, Darren Duncan 
>>> wrote:
>>>>
>>>> On 2016-05-15 12:35 AM, Simon Slavin wrote:
>>>>>
>>>>> All true.  But it brings up a question.  Suppose the following:
>>>>>
>>>>> first   second
>>>>> -----   ------
>>>>> Mark    Spark
>>>>> Emily   Spark
>>>>> Mary    Soper
>>>>> Brian   Soper
>>>>>
>>>>> SELECT first,second FROM members ORDER BY second LIMIT 3
>>>>>
>>>>> Without looking up either a standards document for SQL or the
>>>>> documentation for your favourite implementations of SQL, answer this
>>>>> question:
>>>>>
>>>>> Does the documentation for your favourite implementation of SQL state
>>>>> that
>>>>> you'll get the same rows every time you execute the above "SELECT" ?
>>>>
>>>>
>>>> I think a proper solution for this then is to treat the LIMIT as
>>>> approximate
>>>> rather than exact; it indicates a desire rather than a promise.
>>>>
>>>> In the scenario you describe, the query should return either 2 rows or 4
>>>> rows, so that ALL of the rows whose second field value of "Spark" are, or
>>>> are not, returned.  Projecting this to there not being an ORDER BY
>>>> clause,
>>>> either all rows are returned or zero rows are returned.  Thus the result
>>>> is
>>>> deterministic.
>>>
>>> even if it did for 'spark' rows (which either case you suggest would
>>> be horrible) 'soper' would still be non-deterministic, and rebuilding
>>> indexes could reorder the results.
>>
>> No, it is still deterministic.
>>
>> The ORDER BY clause specified a partial order of the results, not a total
>> order.
>>
>> What I specified returns only complete groups of rows where within each
>> group the rows are unordered but the groups as a whole are ordered relative
>> to each other.
>>
>> The fact this is deterministic would probably be more clear if the result
>> rows were nested, one outer row per "group" that I mentioned.  But even if
>> not, the caller knew that they were only ordering by second but selecting
>> first, so if they see multiple rows with the same second value, they know
>> that those rows are not sorted between themselves, only that rows with
>> different second values are sorted relative to each other.
>>
>> So fully deterministic.
>
> 'SELECT first,second FROM members ORDER BY second LIMIT 3' (that's
> mysql format right?)
>
> I don't see a full set as a requirement (such that the output would be
> 2 or 4 records and not the 3 I asked for...) .  the query implies 3
> rows, not 3 sets.

I never said to interpret it as 3 sets; I said to round the number of rows to the 
nearest whole set (in either the up or down direction).

> SELECT first,second FROM members ORDER BY second LIMIT 3,3 (for the
> next 3 lines I'm displaying on a form for instance)
>
> and specifying that the result set includes a first name, the result
> sets taken as a hole are not guaranteed equal (procedurally and in
> practice they may be, but pessimistically...).

If someone is doing pagination, then at least one of these scenarios should be 
true:

1.  The ORDER BY is being done on a superkey, which guarantees a total order.

2.  Row groups of > 1 row that cross a boundary are returned by BOTH queries, eg 
"ORDER BY second LIMIT 3 OFFSET 0" and "ORDER BY second LIMIT 3 OFFSET 3" would 
both return both "Spark" rows, thus ensuring there are no holes.

Clearly, option 1 is better, but failing that, having pages that overlap is NOT 
a bad thing and even has precedent (assuming users are made aware that pages can 
overlap), as often happens in, say, a GUI scrolling list.

On a further note...

While this behavior isn't something easily portable across DBMSs, SQLite can 
give deterministic results for itself, at least for "with rowid" tables, if it 
implicitly sorts by the rowid after the explicit terms, as if the user had 
written "rowid" at the end of the ORDER BY list.  Since the rowid is an actual 
table column exposed to the user and not just a hidden implementation detail, 
this is consistent with the relational model.
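
For example, a user can already ask for that behavior explicitly; a sketch using 
the members table from this thread:

   -- rowid breaks any ties on "second", making the order total, so the
   -- same 3 rows come back every time for the same data.
   SELECT first, second FROM members ORDER BY second, rowid LIMIT 3;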

-- Darren Duncan



[sqlite] Sqlite incompatibility with Postgres

2016-05-15 Thread Darren Duncan
On 2016-05-15 9:56 PM, J Decker wrote:
> On Sun, May 15, 2016 at 9:29 PM, Darren Duncan  
> wrote:
>> On 2016-05-15 12:35 AM, Simon Slavin wrote:
>>>
>>> All true.  But it brings up a question.  Suppose the following:
>>>
>>> first   second
>>> -   --
>>> MarkSpark
>>> Emily   Spark
>>> MarySoper
>>> Brian   Soper
>>>
>>> SELECT first,second FROM members ORDER BY second LIMIT 3
>>>
>>> Without looking up either a standards document for SQL or the
>>> documentation for your favourite implementations of SQL, answer this
>>> question:
>>>
>>> Does the documentation for your favourite implementation of SQL state that
>>> you'll get the same rows every time you execute the above "SELECT" ?
>>
>> I think a proper solution for this then is to treat the LIMIT as approximate
>> rather than exact; it indicates a desire rather than a promise.
>>
>> In the scenario you describe, the query should return either 2 rows or 4
>> rows, so that ALL of the rows whose second field value of "Spark" are, or
>> are not, returned.  Projecting this to there not being an ORDER BY clause,
>> either all rows are returned or zero rows are returned.  Thus the result is
>> deterministic.
>
> even if it did for 'spark' rows (which either case you suggest would
> be horrible) 'soper' would still be non-deterministic, and rebuilding
> indexes could reorder the results.

No, it is still deterministic.

The ORDER BY clause specified a partial order of the results, not a total order.

What I specified returns only complete groups of rows where within each group 
the rows are unordered but the groups as a whole are ordered relative to each 
other.

The fact this is deterministic would probably be more clear if the result rows 
were nested, one outer row per "group" that I mentioned.  But even if not, the 
caller knew that they were only ordering by second but selecting first, so if 
they see multiple rows with the same second value, they know that those rows are 
not sorted between themselves, only that rows with different second values are 
sorted relative to each other.

So fully deterministic.

-- Darren Duncan



[sqlite] Sqlite incompatibility with Postgres

2016-05-15 Thread Darren Duncan
On 2016-05-15 12:35 AM, Simon Slavin wrote:
> On 15 May 2016, at 6:04am, Darren Duncan  wrote:
>
>> You seem to be forgetting the fact that LIMIT/OFFSET is not its own clause, 
>> rather it is an extension to the ORDER BY clause and only has meaning within 
>> the context of the ORDER BY it is part of.
>
> All true.  But it brings up a question.  Suppose the following:
>
> first second
> - --
> Mark  Spark
> Emily Spark
> Mary  Soper
> Brian Soper
>
> SELECT first,second FROM members ORDER BY second LIMIT 3
>
> Without looking up either a standards document for SQL or the documentation 
> for your favourite implementations of SQL, answer this question:
>
> Does the documentation for your favourite implementation of SQL state that 
> you'll get the same rows every time you execute the above "SELECT" ?

I think a proper solution for this then is to treat the LIMIT as approximate 
rather than exact; it indicates a desire rather than a promise.

In the scenario you describe, the query should return either 2 rows or 4 rows, 
so that ALL of the rows whose second field value is "Spark" are, or are not, 
returned.  Projecting this to there not being an ORDER BY clause, either all 
rows are returned or zero rows are returned.  Thus the result is deterministic.

Whether returning above or below the limit is done, is a separate thing to 
decide, though I suggest returning above is better.

-- Darren Duncan



[sqlite] Sqlite incompatibility with Postgres

2016-05-14 Thread Darren Duncan
On 2016-05-14 11:30 AM, James K. Lowden wrote:
> I suggest the reason LIMIT hasn't been standardized is that it's
> contrary to the fundamental idea that rows in a table have no
> meaningful order.  SQL doesn't honor relational theory with complete
> fidelity, but at least that horse is still in the barn.
>
> The problem with LIMIT is it's not based in the data.  Cutting off
> results at some arbitrary N tells you *nothing* about the data other
> than that N or more rows met the criteria.  Note that predicate logic
> has constructs for "for all" and "there exists" , but not "are some"!

You seem to be forgetting the fact that LIMIT/OFFSET is not its own clause, 
rather it is an extension to the ORDER BY clause and only has meaning within the 
context of the ORDER BY it is part of.

That being said, one could argue that LIMIT by itself (no offset) could be 
standalone, but without an ORDER BY all that "LIMIT N" means is "give me a 
random subset of size N of the rows", and there is probably alternate syntax 
that would say this more explicitly, eg "PICK N".
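
As a sketch, one way to say that intent explicitly in SQLite today, using the 
members table from elsewhere in this thread:

   -- ORDER BY random() makes the arbitrariness explicit; any 3 rows qualify.
   SELECT first, second FROM members ORDER BY random() LIMIT 3;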

-- Darren Duncan



[sqlite] Sqlite incompatibility with Postgres

2016-05-13 Thread Darren Duncan
On 2016-05-13 7:07 AM, dandl wrote:
> I checked a copy of the
> 2003 standard and there doesn't seem to be anything similar. I don't have
> anything later.

Whitemarsh is your friend.

http://www.wiscorp.com/SQLStandards.html

They have a copy of the SQL 2011/2 draft there, under the erroneous title 
"SQL:20nn Working Draft Documents".

The actual PDF files are datestamped 2011 Dec 22.

Unless you need something 100% perfect, those are for all intents and purposes 
the same as the official standard.

I've relied on the up to date texts of that website for the last 15 years or so.

-- Darren Duncan



[sqlite] Good way for CEIL, or is there a better way

2016-05-10 Thread Darren Duncan
On 2016-05-10 12:03 AM, Cecil Westerhof wrote:
> But it looks like that the following is also acceptable:
> ?
> CAST(ROUND(100.0 * rank / outOf + 0.499) AS int) AS percentage
> and it is a lot simpler. So probably I will go for this.

That might seem simpler, but it would fail on edge cases.  An if-then version is 
more reliable in general, and also less ambiguous, since ROUND doesn't behave the 
same everywhere. -- Darren Duncan




[sqlite] Good way for CEIL, or is there a better way

2016-05-09 Thread Darren Duncan
On 2016-05-09 7:54 PM, Rowan Worth wrote:
> On 10 May 2016 at 08:31, Darren Duncan  wrote:
>
>> The Ceiling function is not that simple, unless you know that your rank
>> and outOf are always non-negative numbers.  If they might be negative, you
>> would -1 rather than +1 when the result is negative. -- Darren Duncan
>>
>
> Yeah you can't always add one. But you never need to subtract one - ceil
> always rounds towards positive infinity which is equivalent to integer
> truncation over the range of negative reals.
>
> I can't see a better way to implement ceil/floor in sqlite than using an
> int cast to truncate.

Yeah, actually.

So if we assume casting to an integer always truncates, ie rounds towards zero, 
then the answer is to add 1 to the cast amount if and only if the real 
percentage is positive and doesn't equal the cast amount.
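
A sketch of that rule as a single SQLite expression, reusing the rank and outOf 
columns from the original question:

   -- CAST truncates toward zero.  The comparison evaluates to 1 exactly when
   -- a positive value lost a fractional part to truncation, and 0 otherwise
   -- (for negative values, truncation toward zero already rounds up), so the
   -- sum is the ceiling.
   SELECT CAST(100.0 * rank / outOf AS int)
          + (100.0 * rank / outOf > CAST(100.0 * rank / outOf AS int))
          AS percentage
   FROM   ranking;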

-- Darren Duncan



[sqlite] Good way for CEIL, or is there a better way

2016-05-09 Thread Darren Duncan
On 2016-05-09 4:24 AM, Cecil Westerhof wrote:
> I need to have a CEIL function in SQLite. This is the way I implemented it:
> WITH percentage AS (
>  SELECT date
>  ,   100.0 * rank / outOf AS percentage
>  ,  CAST(100.0 * rank / outOf AS int) AS castedPercentage
>  FROM ranking
> )
> SELECT date
> ,  (CASE WHEN percentage = castedPercentage
> THEN castedPercentage
> ELSE castedPercentage + 1
> END) AS percentage
> FROM percentage
>
> Is this a good way, or is there a better way?

The Ceiling function is not that simple, unless you know that your rank and 
outOf are always non-negative numbers.  If they might be negative, you would -1 
rather than +1 when the result is negative. -- Darren Duncan



[sqlite] SQLite vs GPLed software

2016-05-02 Thread Darren Duncan
On 2016-05-02 6:24 PM, dandl wrote:
>>From a purely numerical point of view, the largest numbers would be found in
> devices, eg phones, cars, TVs, clocks/timers, etc. Lots of GPL in there, but
> also lots of other licences too. I have no idea how the GSM stack is
> licensed, for example, but I think there are more GSM phones than instances
> of Sqlite.
>
> And I certainly can't think of any cases where a GPL licence has been of
> specific benefit in achieving wider usage, as compared to the approach taken
> by Sqlite.
>
>> -Original Message-
>> From: sqlite-users-bounces at mailinglists.sqlite.org [mailto:sqlite-users-
>> bounces at mailinglists.sqlite.org] On Behalf Of Richard Hipp
>> Sent: Tuesday, 3 May 2016 10:11 AM
>> To: SQLite mailing list 
>> Subject: Re: [sqlite] SQLite vs GPLed software
>>
>> On 5/2/16, Scott Robison  wrote:
>>> I don't have an answer, but given that each linux distro of dozens or
>>> hundreds of independently sourced packages has many separate instances
>>> of the GPL, that would begin to eat into SQLITE'S lead. But probably
>>> not enough to win.
>>
>> What GPLed packages are on Android?  Compared to Android, all other Linux
>> distros are rounding error, I think (correct me if I'm wrong).
>>
>> Also, what percentage of those other GPLed packages statically link
> against
>> SQLite?

As far as I know the vast majority of servers on the internet are Linux, and 
while this is less than the number of phones, I would hardly call their absolute 
numbers a rounding error. -- Darren Duncan



[sqlite] Working with blob

2016-04-28 Thread Darren Duncan
I strongly recommend that you store a SHA-256 hash (or pick a better hash 
algorithm if there is one) of the file content as a blob in the database as the 
canonical way to identify the file.  This is much more reliable than doing it 
with the file's name, and any filename info can then be metadata without having 
to be a unique identifier.  This way you don't confuse 2 files with the same 
name but different content or vice-versa, and you gain other benefits.
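
A minimal schema sketch of that idea (table and column names are only 
illustrative, and the SHA-256 itself would be computed by the application before 
inserting):

   CREATE TABLE file_content (
      content_hash  BLOB PRIMARY KEY,   -- eg the 32-byte SHA-256 of content
      content       BLOB NOT NULL
   );
   CREATE TABLE file_name (
      name          TEXT NOT NULL,      -- metadata, not an identifier
      content_hash  BLOB NOT NULL REFERENCES file_content(content_hash)
   );

-- Darren Duncan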

On 2016-04-28 3:43 PM, R Smith wrote:
> On 2016/04/28 8:27 PM, deltagamma1 at gmx.net wrote:
>> What is a convenient way to store the path from a external blob (jpg,
>> pdf) ?
>> How can I retrieve this blob ?
>> Is there a Frontend which opens the jpg directly from the listed
>> table-content ?
>>
>> If I store the blob directly in the sqlite, is there a way to open the
>> blob directly with the respective programm (e.g. irfanview or a pdf with
>> acroread) ? I just discovered recently sqlitespeed, but do not know how
>> to handle the problems described above.
>
> A blob is just a set of bytes to the DB engine, it usually knows nothing about
> what those bytes represent. Some DB admin tools or GUIs may include ways to
> display the bytes (if they can detect that they are some standard format).  
> Some
> other programs, such as Irfanview, might happily display images from a DB, but
> it would likely need a very specific schema to go on.
>
> Typically, you would be writing a program that uses the SQLite API to store
> blobs and retrieve them from some byte or stream you choose, into and from the
> formats you choose.
>
> What I do is save both the blob and the file name (and perhaps the original 
> path
> if you want to treat it as a backup) in separate columns (fields) and then, 
> once
> I would like to open it, I just recreate the file (in whatever destination I
> would like, the TEMP folder is a good idea) with the correct file name, and 
> then
> execute/open it - which should have your system open it with whatever program 
> is
> registered to open such a file.
>
> SQLite is not really concerned with what is in your blobs - just how to best
> store and retrieve them. Your program can do all kinds of magic with the blobs
> and SQLite will ensure you can save and load them fast and easy.
>
> Best of luck,
> Ryan
>



[sqlite] No datasize field - why?

2016-04-23 Thread Darren Duncan
On 2016-04-23 4:58 PM, Scott Robison wrote:
> On Sat, Apr 23, 2016 at 12:50 PM, Keith Medcalf  
> wrote:
>>
>> When a file is a bag-o-bytes rather than a collection-of-records (fixed
>> length records at that), there is no longer any need to keep the records
>> the same length, hence no need to specify the "reserved length" of things
>> (contrary to common mis-belief, varchar(100) does not mean a "variable
>> length character field with a length up to 100 characters", it means a
>> character field with a length of 100 into which a smaller amount of data
>> may be stored that will be padded to 100 characters with special
>> discarded-upon-retrieval padding characters" -- that is, the difference
>> between CHAR(100) and VARCHAR(100) is that in the former case data less
>> than 100 characters is padded with spaces, and in the latter case it is
>> padded with nulls.  The column is still, however, 100 characters wide.
>
> That's not necessarily true in the case of VARCHAR. It could be done that
> way, but it could just as easily and probably much more efficiently be done
> in a truly variable length fashion. Microsoft SQL Server (as far as I
> understand, being the only other SQL engine I've spent much time with)
> stores VARCHAR columns as the actual number of bytes in the field plus two
> bytes of overhead. So a 100 byte string stored in a VARCHAR(1) column
> would only need 102 bytes of storage, not 1 or 10002.
>
> My source (because I haven't used MSSQL for a few years) is
> https://msdn.microsoft.com/en-us/library/ms176089.aspx to refresh my memory.
>
> For any SQL datastore, the way the data is stored is completely an
> implementation detail. The SQL engine would be free to serialize all values
> to text and store them in 4K pages if it wanted to, then deserialize them
> on the way back. I certainly don't know of any that do that, but the
> impetus for the creation of VARCHAR fields (I imagine) was specifically to
> avoid storing padding for data that did not require it.

I know that in Postgres, behind the scenes, all character string types are 
implemented as the variable unlimited length "text" type even when they are 
declared as char(n) or varchar(n).  Therefore, there is no performance benefit 
to explicitly declaring fields with a maximum length.  This also fits in with a 
proper clean data model, where you just declare details significant to the 
business model and not to dictate how resources are managed.  You should only 
declare a length if that is actually significant to the validity of the data, 
and then it's functionally just like a CHECK constraint.
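
A sketch of that point in SQLite terms (where a declared length is not enforced 
at all, so the CHECK form is the one that actually validates):

   CREATE TABLE person (
      name_a VARCHAR(100),                        -- length is documentation only
      name_b TEXT CHECK (length(name_b) <= 100)   -- this limit is enforced
   );

-- Darren Duncan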



[sqlite] In-Memory database PRAGMA read_uncommitted

2016-04-23 Thread Darren Duncan
On 2016-04-23 1:20 AM, Michele Pradella wrote:
> I have an In-Memory DB that is written and read from connections of the
> same process. All good with shared cache, but I found that TableLock
> occur more often on In-Memory than on disk DB, probably because in
> memory we can't use WAL.
>
> Anyway I found the PRAGMA read_uncommitted that from documentation seams
> a way to read without the problem of table lock. The question is about
> this sentence "This can lead to inconsistent query results if another
> database connection modifies a table while it is being read".
> "inconsistent" means just "out of date"? or there can be some other type
> of inconsistent data?

Usually "inconsistent" means you see a result that is partially an old value 
and 
partially a new one, such as if you open a photograph and find that the top 
half 
was overwritten with a different photo; you get the incomplete results of a 
change in progress, for example, say if one is recording a bank funds transfer 
with one record showing an account debit and another showing an account credit, 
readers may only see one of those and then they see a database where the 
numbers 
don't add up properly, as if it were corrupt; that is what "inconsistent" 
means. 
-- Darren Duncan
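
A sketch of the bank example in SQL (hypothetical accounts table): the two 
updates below are one logical change, and a read-uncommitted reader on another 
connection could observe the moment between them, when the money has left one 
account but not yet arrived in the other:

   BEGIN;
   UPDATE accounts SET balance = balance - 100 WHERE id = 1;
   -- a dirty read here sees totals that don't add up
   UPDATE accounts SET balance = balance + 100 WHERE id = 2;
   COMMIT;

-- Darren Duncan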




[sqlite] No datasize field - why?

2016-04-22 Thread Darren Duncan
The general case of a data type definition is an arbitrarily complex predicate 
expression whose parts vary on the base type and other factors.  Given this, if 
component details of type definitions were split out into their own table_info() 
columns, you'd have a large number of columns where most are inapplicable in any 
given case, eg some only apply to strings, some only to numbers, etc.  And so, 
just expressing the type definition as a SQL fragment like table_info() 
currently does provides a compact generic representation with all the details, 
same as in CREATE TABLE. -- Darren Duncan

On 2016-04-22 6:09 AM, Igor Korot wrote:
> Hi, ALL,
>
> [code]
> SQLite version 3.9.2 2015-11-02 18:31:45
> Enter ".help" for usage hints.
> sqlite> PRAGMA table_info(league);
> sqlite> PRAGMA table_info(leagues);
> 0|id|integer|0||1
> 1|name|varchar(100)|0||0
> 2|drafttype|integer(1)|0||0
> 3|scoringtype|integer(1)|0||0
> 4|roundvalues|integer(1)|0||0
> 5|leaguetype|char(5)|0||0
> 6|salary|integer|0||0
> 7|benchplayers|integer(1)|0||0
> sqlite>
> [/code]
>
> The field type is set to be "varchar(100)" for the name field.
> Wouldn't it be more logical to have it as "varchar" and have another field
> for data size?
>
> Thank you.
>
> P.S.: I don't know how most databases interpret this, I know ODBC does it
> this way



[sqlite] regression in 3.12.0 vs. 3.11.0, column type information in PRAGMA missing

2016-04-06 Thread Darren Duncan
On 2016-04-06 6:00 AM, Cezary H. Noweta wrote:
> On 2016-04-06 09:43, Darren Duncan wrote:
>> On 2016-04-05 10:19 AM, Richard Hipp wrote:
>
>>> It seems to me that the most consistent answer is that the "type" of
>>> columns in a VIEW should always be an empty string.
>
>> That's only consistent if you do the same thing with base tables.
>
> Non--consistency:
>
> sqlite> CREATE TABLE a (a DECIMAL(10));
> sqlite> CREATE TABLE b AS SELECT * FROM a;
> sqlite> CREATE VIEW bv AS SELECT * FROM a;
> sqlite> PRAGMA table_info(b);
> 0|a|NUM|0||0
> sqlite> PRAGMA table_info(bv);
> 0|a||0||0
>
> Info on column's affinity (besides if it is a declared one --- DECIMAL, FLOAT
> --- or a real one --- NUMERIC, REAL) is still useful, when comparison is made.
>
> Column's affinity is still propagated:
>
> sqlite> CREATE TABLE b2 AS SELECT * FROM bv;
> sqlite> PRAGMA table_info(b2);
> 0|a|NUM|0||0
>
> so the point is that ``PRAGMA table_info'' stopped displaying column's 
> affinity
> in case of views.

I agree with you in the sense that CREATE TABLE AS SELECT and CREATE VIEW AS 
SELECT with the same SELECT should produce results with identical type info.

> 
>
>> Alternately, you can exploit the explicit column list that is optional
>> in a CREATE VIEW:
>>
>>CREATE VIEW v3 (foo NUMERIC) AS SELECT w+x+y+z AS foo FROM t2;
>
> Syntax error. Did you mean:
>
> CREATE VIEW v3 (foo) AS SELECT CAST(w+x+y+z AS NUMERIC) FROM t2;

I actually meant what I said, wherein the column list is declared using the same 
syntax as one would use for a CREATE TABLE, meaning with types.

However, your use of explicit CAST syntax is possibly a better solution in one 
sense, in that the SELECT itself makes it completely unambiguous to human readers 
what the result type is.  It also fits right in with what I think is the best 
solution, that column type info is simply derived from the SELECT expression.

-- Darren Duncan



[sqlite] regression in 3.12.0 vs. 3.11.0, column type information in PRAGMA missing

2016-04-06 Thread Darren Duncan
On 2016-04-05 10:19 AM, Richard Hipp wrote:
> This could easily be considered a bug fix rather than a regression.
> Please explain why you think it is important to know the "type" of a
> column in a view?

One should be able to treat a view the same as a base table, not even having to 
know whether a table is one type or the other, when it doesn't matter.  Type 
information should be as readily available, or not, for a view as a base table.

> Or, consider this situation:
>
>  CREATE TABLE t2(w SHORT INT, x DECIMAL, y BIGINT, z REAL);
>  CREATE VIEW v3 AS SELECT w+x+y+z FROM t2;
>
> What should "PRAGMA table_info('v3')" report as the column type?

The answer to that question is: whatever type "+" results in given those 
arguments.  Either the result of that summation is of some 
particular data type, eg NUMERIC, in which case use that, or summing different 
numeric types is illegal, in which case trying to use the view would fail.
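
As an aside, one can already see what "+" actually yields for the stored values, 
row by row; a sketch using the t2 table above:

   -- typeof() reports the storage class of each computed sum, which with a
   -- REAL operand involved will typically be 'real'.
   SELECT typeof(w+x+y+z) FROM t2;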

> It seems to me that the most consistent answer is that the "type" of
> columns in a VIEW should always be an empty string.

That's only consistent if you do the same thing with base tables.

Alternately, you can exploit the explicit column list that is optional in a 
CREATE VIEW:

   CREATE VIEW v3 (foo NUMERIC) AS SELECT w+x+y+z AS foo FROM t2;

So if a VIEW definition uses that syntax, that's what the returned column type 
is; otherwise the returned column type is the empty string.

Personally, I think using the expression value type is the best though, and 
works without any schema changes.

-- Darren Duncan



[sqlite] Can I implement a scrolling window using LIMIT and OFFSET ?

2016-03-04 Thread Darren Duncan
On 2016-03-03 11:27 AM, James K. Lowden wrote:
> On Thu, 3 Mar 2016 10:43:26 +0800 (CST)
> ??  wrote:
>
>>> Can anyone describe a situation for which this style of LIMIT &
>>> OFFSET is advisable from the application's point of view?  (The
>>> DBMS costs are obvious enough.)
>>
>> For me this is the easiest way to implement a scrolling cursor.
>> Otherwise I will have to record for each table the condition I use
>> to  scroll it. So if there is an easier way to implement the
>> scrolling cursor please let me know.
>
> You say, "record ... the condition ... to  scroll [the table]".  I'm
> sure I don't know what "condition" you mean.
>
> You appear to be doing something like:
>
>   offset = 0
>   do
>   sqlte3_prepare (limit, offset)
>   do
>   sqlite3_step
>   until SQLITE_DONE
>   offset += limit
>   ... application stuff ...
>   until last row processed
>
> The obvious, better alternative is
>
>   sqlte3_prepare
>   do
>   do
>   sqlite3_step
>   until LIMIT
>   ... application stuff ...
>   until last row processed
>
> because in the second example you're using SQLite's cursor to "record
> the condition of the table".  There's nothing for you to keep between
> "pages"; you just start from where you stopped.
>
> How exactly is the first way "easiest"?
>
> --jkl

If these are pages displayed to the user, they may want to scroll backwards at 
some point.  I don't expect that sqlite3_step is bidirectional (if it is, that's 
a pleasant surprise), whereas the approaches using SQL can go backwards as easily 
as forwards; but really it depends on the use case.
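
For instance, a sketch of the backwards page under the WHERE/LIMIT approach, with 
a hypothetical items table ordered by a unique title: fetch the rows just before 
the first row currently shown, then reverse them for display:

   SELECT id, title FROM items
   WHERE  title < :first_title_on_screen
   ORDER  BY title DESC
   LIMIT  20;

-- Darren Duncan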




[sqlite] Can I implement a scrolling window using LIMIT and OFFSET ?

2016-03-03 Thread Darren Duncan
On 2016-03-02 11:18 PM, R Smith wrote:
> On 2016/03/03 5:10 AM, ?? wrote:
>> So according to your answers and others, this limitation is always there even
>> the document said "obsolete" ? Just want to double confirm.
>
> To add to what Clemens already mentioned - this problem is not an SQLite
> problem, this problem is true and exists for ALL SQL databases. It is bad to 
> use
> LIMIT and OFFSET for several reasons. Performance obviously, but also DB 
> changes
> may alter the position within an offset that a line appears, which can make 
> the
> user skip entire lines when scrolling to a next page, or the "Back" button 
> might
> produce a different view than he/she had before.
>
> If your DB is small and does not change often, then the above is all 
> negligible
> and you can use it. But for any performant or large or frequently updated DB, 
> it
> is a bad idea - in ALL SQL engines.
>
> Speaking of all engines - some allow a scrolling cursor and some allow 
> windowing
> functions that might be pressed into service mitigating the problem. It 
> remains
> however faster to use WHERE and LIMIT as Clemens suggested in all cases I 
> know of.
>
>
> Cheers,
> Ryan

Ryan, I think I was the one who actually previously said most of this stuff you 
are talking about, rather than Clemens, including about using WHERE and LIMIT 
and about consequences of OFFSET skipping or repeating lines due to other users' 
updates. -- Darren Duncan


[sqlite] Can I implement a scrolling window using LIMIT and OFFSET ?

2016-03-02 Thread Darren Duncan
Using OFFSET means you have to sort and count all the records you're skipping 
before getting to the ones you want, and using WHERE means you can avoid 
counting and only sort the ones you're not skipping.  That is how it is, 
regardless of what that document you saw says. -- Darren Duncan

On 2016-03-02 7:10 PM, ?? wrote:
> So according to your answers and others, this limitation is always there even 
> the document said "obsolete" ? Just want to double confirm.
>
>
> Thanks!
>
>
> Qiulang
>
>
> At 2016-03-03 11:06:08, "Darren Duncan"  wrote:
>> On 2016-03-02 6:48 PM, ?? wrote:
>>>> A better way that is very similar is to use WHERE and LIMIT instead.
>>>
>>> I know that but as I just replied to this thread, if I do that I will then 
>>> have to record each columns I use for each table I want scroll cursor. So 
>>> from the implementation point of view, using LIMIT & OFFSET is easier.
>>>
>>> Qiulang
>>
>> You have to record the columns anyway in order to know what you're sorting 
>> your
>> results by, this is just reuse.  Or if not, then maybe you should bite the
>> bullet and record that extra info.
>>
>> If you don't want to do that in order to simplify things, then you live with 
>> the
>> limitations and sluggishness of LIMIT and OFFSET, as those limitations are
>> systemic to LIMIT and OFFSET.
>>
>> -- Darren Duncan
>>
>>> At 2016-03-03 10:42:37, "Darren Duncan"  wrote:
>>>> On 2016-03-02 5:02 AM, ?? wrote:
>>>>> Here... said "Do not try to implement a scrolling window using LIMIT and 
>>>>> OFFSET. Doing so will become sluggish as the user scrolls down toward the 
>>>>> bottom of the list.?. But the page also said "This information is 
>>>>> obsolete?
>>>>> ... talks about LIMIT & OFFSET, without mentioning that is a bad idea. So 
>>>>> my question is can I do that or not (will it become sluggish if I do 
>>>>> that) ?
>>>>
>>>> Using LIMIT and OFFSET is generally a bad idea, not just on performance 
>>>> but also
>>>> logically.
>>>>
>>>> A better way that is very similar is to use WHERE and LIMIT instead.
>>>>
>>>> Assuming that you are going through pages consecutively, you know what rows
>>>> you've already seen, particularly in the prior page.
>>>>
>>>> Whatever columns you are sorting your result by, take the last row just 
>>>> seen and
>>>> the query for the next page is found by saying WHERE > or < etc the field 
>>>> values
>>>> for that last row.
>>>>
>>>> So you're sure to get the rows just after the ones you just saw, and later 
>>>> pages
>>>> shouldn't be any slower than earlier ones.
>>>>
>>>> This approach is also resilient to arbitrary changes to the database 
>>>> between
>>>> page views so you don't either repeat rows or skip rows due to offset 
>>>> mismatch.
>>>>
>>>> -- Darren Duncan
>>



[sqlite] Can I implement a scrolling window using LIMIT and OFFSET ?

2016-03-02 Thread Darren Duncan
On 2016-03-02 6:48 PM, ?? wrote:
>> A better way that is very similar is to use WHERE and LIMIT instead.
 >
> I know that but as I just replied to this thread, if I do that I will then 
> have to record each columns I use for each table I want scroll cursor. So 
> from the implementation point of view, using LIMIT & OFFSET is easier.
>
> Qiulang

You have to record the columns anyway in order to know what you're sorting your 
results by, so this is just reuse.  Or if not, then maybe you should bite the 
bullet and record that extra info.

If you don't want to do that in order to simplify things, then you live with the 
limitations and sluggishness of LIMIT and OFFSET, as those limitations are 
systemic to LIMIT and OFFSET.

-- Darren Duncan

> At 2016-03-03 10:42:37, "Darren Duncan"  wrote:
>> On 2016-03-02 5:02 AM, ?? wrote:
>>> Here... said "Do not try to implement a scrolling window using LIMIT and 
>>> OFFSET. Doing so will become sluggish as the user scrolls down toward the 
>>> bottom of the list.?. But the page also said "This information is obsolete?
>>> ... talks about LIMIT & OFFSET, without mentioning that is a bad idea. So 
>>> my question is can I do that or not (will it become sluggish if I do that) ?
>>
>> Using LIMIT and OFFSET is generally a bad idea, not just on performance but 
>> also
>> logically.
>>
>> A better way that is very similar is to use WHERE and LIMIT instead.
>>
>> Assuming that you are going through pages consecutively, you know what rows
>> you've already seen, particularly in the prior page.
>>
>> Whatever columns you are sorting your result by, take the last row just seen 
>> and
>> the query for the next page is found by saying WHERE > or < etc the field 
>> values
>> for that last row.
>>
>> So you're sure to get the rows just after the ones you just saw, and later 
>> pages
>> shouldn't be any slower than earlier ones.
>>
>> This approach is also resilient to arbitrary changes to the database between
>> page views so you don't either repeat rows or skip rows due to offset 
>> mismatch.
>>
>> -- Darren Duncan



[sqlite] Can I implement a scrolling window using LIMIT and OFFSET ?

2016-03-02 Thread Darren Duncan
On 2016-03-02 5:02 AM, ?? wrote:
> Here... said "Do not try to implement a scrolling window using LIMIT and 
> OFFSET. Doing so will become sluggish as the user scrolls down toward the 
> bottom of the list." But the page also said "This information is obsolete"
> ... talks about LIMIT & OFFSET, without mentioning that is a bad idea. So my 
> question is can I do that or not (will it become sluggish if I do that) ?

Using LIMIT and OFFSET is generally a bad idea, not just on performance but also 
logically.

A better way that is very similar is to use WHERE and LIMIT instead.

Assuming that you are going through pages consecutively, you know what rows 
you've already seen, particularly in the prior page.

Whatever columns you are sorting your result by, take the last row just seen and 
the query for the next page is found by saying WHERE > or < etc the field values 
for that last row.

So you're sure to get the rows just after the ones you just saw, and later pages 
shouldn't be any slower than earlier ones.

This approach is also resilient to arbitrary changes to the database between 
page views so you don't either repeat rows or skip rows due to offset mismatch.
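
A sketch of what I mean, with a hypothetical items(id, title) table where the 
ORDER BY is made total by including the key, and :last_title / :last_id are bound 
from the last row already shown:

   -- first page
   SELECT id, title FROM items ORDER BY title, id LIMIT 20;

   -- next page
   SELECT id, title FROM items
   WHERE  title > :last_title
      OR (title = :last_title AND id > :last_id)
   ORDER  BY title, id
   LIMIT  20;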

-- Darren Duncan




[sqlite] Andl: choices for generic types: bool, binary, number/decimal, date/datetime, text/varchar

2016-02-08 Thread Darren Duncan
David, unless you're wanting to use SQLite's built-in datetime operators, then 
just encode yours somehow and put them in another field type, and decode them on 
retrieval into your own datetime types.  Depending what you encode them as, pick 
the appropriate built-in type.
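
For example, a sketch of an integer-seconds encoding using only built-in SQLite 
date functions (table and column names hypothetical; this stores whole seconds, 
so fractional seconds would need a finer unit in the integer):

   CREATE TABLE event (
      id          INTEGER PRIMARY KEY,
      created_at  INTEGER NOT NULL          -- seconds since 1970-01-01 UTC
   );
   INSERT INTO event (created_at) VALUES (strftime('%s', 'now'));
   SELECT id, datetime(created_at, 'unixepoch') FROM event;   -- decode for display

-- Darren Duncan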

On 2016-02-08 5:00 PM, david at andl.org wrote:
> Having read and understood the documentation on Sqlite data types, I'm really 
> just looking for a single recommendation on which choices to make.
>
> I need to store generic data in 5 types: bool, binary, number/decimal, 
> text/nvarchar, time/date/datetime. Decimal has more than 15 digits of 
> precision. Text is Unicode. Time is years - with fractional seconds.
>
> For each type I need to choose:
>
> 1. Type name (string) to use in CREATE TABLE.
> 2. Affinity.
> 3. Datatype that will be returned by C API calls
> 4. Which C API call to use to get and put data values.
>
> My current choices are:
> Bool: TINYINT, INTEGER, value_int
> Binary: BLOB, BLOB, value_blob
> Number: NUMERIC, NUMERIC, value_???
> Text: TEXT, TEXT, Encoding utf-8, value_text
> Time: DATETIME, NUMERIC, value_???.
>
> The first two are easy enough. No problems.
>
> Number: should I choose text functions like sqlite3_value_text() and do all 
> my own conversions, or is there benefit in using a different function 
> according to the storage class/data type? Or is sqlite3_value_numeric_type() 
> the way to go?
>
> Text: I assume this just works, and all text values are UTF-8 by default?
>
> Time: the only supported DATETIME format seems to be ISO8601, which has no 
> explicit C API support? That looks like a lot of conversion overhead for 
> something that is easily stored in a 64-bit integer. What would 
> sqlite3_value_numeric_type() do?
>
> [Has there been any consideration of extending the range of types to include 
> decimal and a binary datetime?]
>
> Sorry if it's a bit scrappy, but I just need to make some choices and then go 
> away and write the code.
>
> Regards
> David M Bennett FACS
>
> Andl - A New Database Language - andl.org



[sqlite] Wish List for 2016: High Level API for Object Oriented Interactive Languages

2016-01-08 Thread Darren Duncan
Okay, I think this clears some things up.

On 2016-01-08 11:36 AM, Warren Young wrote:
> On Jan 8, 2016, at 12:39 AM, Darren Duncan  wrote:
>>
>> I interpreted your request as if current systems' error outputs at execute 
>> time were printing out the problematic SQL statement with placeholder names 
>> as originally prepared, and you wanted the error outputs to have the 
>> placeholders substituted with literals for the values passed to them at 
>> execute time interpolated into them.
>
> Yes.
>
>> one can just list the bound values separately / afterwards rather than 
>> having to rewrite the SQL to interpolate those values.
>
> Of course, but the question is not whether a caller *can* do this, it's 
> whether the caller *should have to* do this.
>
>  From the caller's perspective, it has already passed ownership of the values 
> off to SQLite via sqlite3_bind_*().  If an error occurs, the bind calls may 
> be a few levels separate from the sqlite3_step() call that actually causes 
> the error.  SQLite still owns the values, though, and could provide them in a 
> hypothetical sqlite3_preview() call, which assembles the effective SQL it 
> tried to execute and failed.
>
> You're asking the caller to maintain separate ownership of data that SQLite 
> needs just for the error case.  SQLite has a much better reason to have a 
> copy of that data, so it should be providing the values to the error handler, 
> not the caller.

Actually, I agree with you that SQLite should be providing everything.  What I 
disagree with is making the interpolation necessary.  The hypothetical 
sqlite3_preview() should output 2 things, the SQL as originally passed to 
prepare with placeholders intact, plus a list of placeholder names and their 
bound values that failed.

>> If your problem is that the output simply says an error occurred and doesn't 
>> print out the SQL, then fair enough, I misunderstood you.
>
> It's a bit more than that.  The problem is that a given prepared statement is 
> necessarily generic.  Just from looking at the statement in a log file, you 
> can't tell what values were used with it, which would help you understand the 
> context in which it was used.

To further explain, I see SQL as a programming language same as C or Perl or 
whatever.

Calling prepare is effectively invoking a compiler on SQL source code where 
that 
SQL source defines a routine that may have parameters.  Calling execute is then 
asking to execute that compiled routine where the bind parameters are the 
runtime-provided arguments to the routine.

Do you think it makes sense in any other common programming language that, if a 
routine fails with a particular set of arguments, the debugging message includes, 
say, C source code rewritten to substitute literals where references to its 
parameters were?  Or does it make more sense for the debugging message to print 
the actual routine source plus a list of the passed argument values?  I am 
arguing for the latter, all done by SQLite.

I consider what is reasonable for SQL to be the same as for other languages.

-- Darren Duncan



[sqlite] Wish List for 2016: High Level API for Object Oriented Interactive Languages

2016-01-08 Thread Darren Duncan
On 2016-01-08 8:08 AM, Stephen Chrzanowski wrote:
> For the record, *I* personally prefer trying to get all essential resources
> built directly into my final output (For SQLite, default database
> structures, SQLite strings, and maybe that one day, SQLite itself), that
> way I'm in control of what the application does, and have no reliance on a
> 3rd party update to a resource file that breaks my code.  That is just my
> preference, and old school or not, I prefer working software, not software
> that might work after MySQL updates and breaks a resource I require when my
> application doesn't touch MySQL, or when a user deletes a critical file my
> application requires and claims they didn't do anything   I've
> never had 100% success on a fully independent database driven application
> (SQLite or not), and that is perfectly OK.  That doesn't mean I'd like to
> strive for that one day.

You are or seem to be talking about 2 different things in this thread.

I very much agree with you that it is reasonable for an APPLICATION to bundle 
its key dependent libraries in ITS executable so the proper functioning of the 
application is insulated against many changes to system-provided or separately 
installed libraries.  Especially today with abundant disk space.

But what you seemed to be arguing for before was that a programmer tool for 
making applications, that is, Perl itself or R itself or what have you, should be 
bundling SQLite with it, and this I disagree with.

The user base of programming language environments is programmers who are making 
applications, and it should be those users' decision to bundle SQLite with their 
application, not something forced on them by the creator of the programming 
language including SQLite with all applications regardless of whether it is 
used or not.

Apples and oranges.

-- Darren Duncan



[sqlite] Wish List for 2016: High Level API for Object Oriented Interactive Languages

2016-01-07 Thread Darren Duncan
Stephen,

What you are arguing for (no shared libraries) is the bad old days where one had to 
recompile their programming language to add support for a DBMS, rather than the 
DBMS support being a separately installable library that one could choose to 
install or not or upgrade semi-independently or not, or choose to use an 
alternative or not.

Sure, SQLite is public domain, but why should every language bundle it into 
their core just because?  There are lots of other useful libraries one could 
make the same argument for.  Bundling it can make sense if the language core 
itself depends on SQLite or practically all of its users would use it, but 
that's not usually the case.

I should also point out that the standard Perl interface for SQLite, the 
DBD::SQLite module, bundles the SQLite source with it, so installing that Perl 
library gives you SQLite itself, there are no DLLs or dependence on some system 
SQLite library, but Perl itself doesn't have this built-in nor should it.

In the Perl 4 days you had to recompile Perl to make a version that can talk to 
a DBMS, eg "Oraperl", but thankfully with Perl 5 (1994 or so) we did away with 
that.

-- Darren Duncan

On 2016-01-07 5:47 PM, Stephen Chrzanowski wrote:
> I personally wish the reverse.  I wish that these interpreted language
> engines would incorporate the SQLite code directly into their own existence
> to avoid having to write wrappers to begin with, except for those wrappers
> where their method name is "DatabaseOpen" and I prefer "OpenDatabase".
>
> SQLite has been around for years, and "R", PHP, Java, Perl, and all these
> other interpreted new and old style languages have never bothered to
> incorporate this public domain database engine within itself.  It isn't
> like the maintainers of these languages don't know it doesn't exist, and if
> they didn't, then my god they gotta get out from under that rock.  Most web
> browsers use SQLite for crying out loud.
>
> For a few years, I've considered taking the entire amalgamation and porting
> it to Pascal (Delphi/FPC) so I have exactly zero reliance on DLLs.  No
> worries about OBJ files, no worries about dependencies, I just include a
> unit and my app is now database aware.  I know 386 assembly, and I can
> always read up on other specifications if I needed to.  My problem is that
> gaming gets in the way.
>
> My 2016 wish list for SQLite is that all developers who write for, or use
> directly or indirectly, any database engine out on the market has a safe
> and happy 2016 and beyond.



[sqlite] Wish List for 2016: High Level API for Object Oriented Interactive Languages

2016-01-07 Thread Darren Duncan
Perhaps we misunderstand each other here.

I interpreted your request as if current systems' error outputs at execute time 
were printing out the problematic SQL statement with placeholder names as 
originally prepared, and you wanted the error output to instead have literals for 
the values passed at execute time interpolated in place of the placeholders.

And so I was saying that continuing to print out the SQL with placeholders was 
correct, and that one can just list the bound values separately / afterwards 
rather than having to rewrite the SQL to interpolate those values.

If your problem is that the output simply says an error occurred and doesn't 
print out the SQL, then fair enough, I misunderstood you.

If my assessment of what you said matches what you intended, then your analogy 
with C is flawed / apples and oranges with what I was talking about.

-- Darren Duncan

On 2016-01-07 5:55 PM, Warren Young wrote:
> On Jan 7, 2016, at 6:04 PM, Darren Duncan  wrote:
>>
>> On 2016-01-07 4:55 PM, Warren Young wrote:
>>> 2. There is no ?preview? mechanism.
>>
>> The current method of binding is correct.  All we really need is that the 
>> debug logging layer include both the SQL of the prepared statement AND a 
>> list of the bound values when the execute failed
>
> By that logic, it would be okay to design a C compiler that emitted only line 
> numbers as error numbers, and gave those line numbers as cpp(1) output line 
> numbers, not source input line numbers.
>
> That is, if stdio.h is 5kSLOC and you make an error on line 5 of your hello 
> world program, it should complain, "hello.c:5005: error".
>
> After all, the programmer has all the information necessary to subtract out 
> the #included files' offsets, and then go look at line 5 in the program to 
> determine what went wrong.
>
> SQLite error handling should improve the same way our C and C++ compilers 
> have.
>
> Given: include (missing '#'!)
>
> Ancient Unix V7 cc says: hello.c:1: Expression syntax.  Yes, very helpful.  
> (Not!)
>
> pcc on the same box spits out about half a dozen errors for that line, none 
> of which tell you what is wrong.
>
> gcc 4 says:
>
> hello.c:1: error: expected '=', ',', ';', 'asm' or '__attribute__' before 
> ?
> It's not as pointlessly noisy as pcc, but it's still not terribly useful.
>
> clang says:
>
>hello.c:1:1: error: unknown type name 'include'
>include 
>^
>
> The arrow points you right at the error.
>
> Wouldn't it be nice if SQLite were more like clang in this regard?



[sqlite] Wish List for 2016: High Level API for Object Oriented Interactive Languages

2016-01-07 Thread Darren Duncan
On 2016-01-07 4:55 PM, Warren Young wrote:
> 2. There is no ?preview? mechanism.  That is, you can?t bind some parameters 
> to a prepared query string and then get the resulting SQL because SQLite 
> substitutes the values into the query at a layer below the SQL parser.  This 
> means that if you have an error in your SQL syntax or your parameters cause a 
> constraint violation, your debug logging layer can only log the prepared 
> query string, not the parameters that went into it, which makes it 
> unnecessarily difficult to determine which code path caused you to get the 
> error when looking at logs of a running system.

The current method of binding is correct.  All we really need is that the debug 
logging layer include both the SQL of the prepared statement AND a list of the 
bound values when the execute failed, and optionally a stack trace.  As for 
problems due to syntax errors, those presumably would be caught at prepare time 
or have nothing to do with the execute time values anyway as syntax errors are 
by definition a SQL syntax problem. -- Darren Duncan



  1   2   3   4   5   6   >