[sqlite] No datasize field - why?

2016-04-22 Thread Igor Korot
Hi, Simon,

On Fri, Apr 22, 2016 at 9:23 AM, Simon Slavin  wrote:
>
> On 22 Apr 2016, at 2:09pm, Igor Korot  wrote:
>
>> The field type is set to be "varchar(100)" for the name field.
>> Wouldn't it be more logical to have it as "varchar" and have another field
>> for data size?
>
> What you are seeing there is SQLite just repeating back the type that the 
> CREATE TABLE command used.  It is not the type of data SQLite is actually 
> storing.
>
> There is no 'varchar' datatype in SQLite.  And it never truncates strings.
>
> I think that the best thing for you may be to read this page
>
> 

I know SQLite goes by the affinity and not by the declared type, something
like a scripting language, where nothing is strictly typed.

However, what I'm saying is that most DBMS out there do store the size
of the field.

Even though I issue something like:

CREATE TABLE test( id INTEGER PRIMARY KEY, test1 VARCHAR(25), test2
INTEGER(3), test3 FLOAT(25,2) );

if I ask for the columns information, I will get:

id      INTEGER
test1   VARCHAR   25   0
test2   INTEGER    3   0
test3   FLOAT     25   2

So instead of me parsing this info, it could be parsed by the SQLite engine.

I understand that for backward-compatibility reasons it will not be
changed, but it may be something to consider for the next major release
(SQLite4).
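In the meantime, the splitting can be done client-side, since SQLite hands back the declared type verbatim. A minimal sketch in Python (the regex only handles the simple TYPE(N) and TYPE(N,M) shapes; multi-word types like DOUBLE PRECISION would need more work):

```python
import re
import sqlite3

def parse_declared_type(decl):
    """Split a declared type like 'VARCHAR(100)' or 'FLOAT(25,2)' into
    (base_type, size, precision); size/precision are None when absent."""
    m = re.fullmatch(r"\s*(\w+)\s*(?:\(\s*(\d+)\s*(?:,\s*(\d+)\s*)?\))?\s*",
                     decl or "")
    if not m:
        return (decl, None, None)
    base, size, prec = m.groups()
    return (base.upper(),
            int(size) if size else None,
            int(prec) if prec else None)

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test(id INTEGER PRIMARY KEY, test1 VARCHAR(25), "
            "test2 INTEGER(3), test3 FLOAT(25,2))")
# table_info rows are (cid, name, type, notnull, dflt_value, pk)
for cid, name, decl, *_ in con.execute("PRAGMA table_info(test)"):
    print(name, parse_declared_type(decl))
```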

Thank you.

>
> and get back to us if you have further questions.
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Pretty-printing. Was: huge difference between clustered/nonclustered index usage

2016-04-22 Thread Dominique Devienne
On Fri, Apr 22, 2016 at 7:05 PM, Richard Hipp  wrote:

> In the next release, the ".schema" command will support a command-line
> option "--indent" which does pretty-printing of the schema, turning
> the above into the example shown below.  The pretty-printer is simple
> but it seems to work.


Sounds good. Why not simply --pretty-print? longish, but obvious. --DD

PS: Given that table defs can contain complex expressions in CHECK
constraints, is it expected to format those expressions too? (I'd love an
SQLite API to format SQL in general, FWIW. I wouldn't care about the
format, or its evolution.)

PPS: Would it indent lines if one uses an inline FK and long CHECK
constraints, or put it all on one line? I tend to embed newlines in my
table defs, which SQLite preserves of course, but having a canonical and
official way to print table defs, which "overrides" that formatting, sounds
like a good idea to me.


[sqlite] huge difference between clustered/nonclustered index usage

2016-04-22 Thread Dominique Devienne
On Fri, Apr 22, 2016 at 6:55 PM, Richard Hipp  wrote:

> On 4/22/16, Dimitris Bilidas  wrote:
> > SELECT count(qview1."wlbWellboreName")  FROM "discovery" qview2 CROSS
> > JOIN  "wellbore_development_all" qview1   WHERE
> > (qview1."wlbNpdidWellbore" = qview2."wlbNpdidWellbore");
>
> You are aware that SQLite uses the CROSS JOIN syntax as a way of
> controlling the query planner, and limiting the number of options that
> the query planner looks at, right?  (See
> https://www.sqlite.org/optoverview.html#crossjoin for more
> information.)  I don't think this is important here, but it might be.


Thanks for the reminder. And very interesting. --DD

PS: Small typo: it reads
"But algorithm-2 really is the correct choice here", but
"But algorithm-1 really is the correct choice here" is meant, I believe.


[sqlite] No datasize field - why?

2016-04-22 Thread Darren Duncan
The general case of a data type definition is an arbitrarily complex
predicate expression whose parts vary with the base type and other factors.
Given this, if component details of type definitions were split out into
their own table_info() columns, you'd have a large number of columns, most
of which would be inapplicable in any given case, e.g. some only apply to
strings, some only to numbers, etc.  And so, just expressing the type
definition as a SQL fragment, as table_info() currently does, provides a
compact generic representation with all the details, same as in CREATE
TABLE. -- Darren Duncan

On 2016-04-22 6:09 AM, Igor Korot wrote:
> Hi, ALL,
>
> [code]
> SQLite version 3.9.2 2015-11-02 18:31:45
> Enter ".help" for usage hints.
> sqlite> PRAGMA table_info(league);
> sqlite> PRAGMA table_info(leagues);
> 0|id|integer|0||1
> 1|name|varchar(100)|0||0
> 2|drafttype|integer(1)|0||0
> 3|scoringtype|integer(1)|0||0
> 4|roundvalues|integer(1)|0||0
> 5|leaguetype|char(5)|0||0
> 6|salary|integer|0||0
> 7|benchplayers|integer(1)|0||0
> sqlite>
> [/code]
>
> The field type is set to be "varchar(100)" for the name field.
> Wouldn't it be more logical to have it as "varchar" and have another field
> for data size?
>
> Thank you.
>
> P.S.: I don't know how most databases interpret this, I know ODBC does it
> this way



[sqlite] huge difference between clustered/nonclustered index usage

2016-04-22 Thread Dimitris Bilidas
Hello,

I am using sqlite 3.12.2 and I am getting a strange behaviour from a 
very simple query. I am trying to understand the exact query plan and 
the cause of this problem. I would be grateful if someone could help me.

The query is:
SELECT count(qview1."wlbWellboreName")  FROM "discovery" qview2 CROSS 
JOIN  "wellbore_development_all" qview1   WHERE 
(qview1."wlbNpdidWellbore" = qview2."wlbNpdidWellbore");

I have indexes on columns wlbNpdidWellbore in both tables, but I have 
not declared any primary keys. On cold cache this query takes almost 
three minutes, whereas on warm cache it takes around 100 msec. If I 
replace qview1."wlbWellboreName" with * it runs in a couple of seconds 
on cold cache. If I create the table "wellbore_development_all" from the 
beginning setting wlbNpdidWellbore as primary key, the query runs in 
under 3 seconds on cold cache and in a couple of tens of milliseconds on 
warm cache.

I have executed analyze and vacuum on both tables before the query, both 
with clustered and non clustered indexes. I would normally expect that 
the difference would be very small: one more page fetch for each value 
of wlbNpdidWellbore that is matched in the index of
wellbore_development_all

I am pasting the output of .schema and stats for the two tables and also 
the output of explain for both cases. From the explain, if I understand 
correctly, I can see that we first search the whole 
"wellbore_development_all" table (2nd OpenRead) and then the index (3rd 
OpenRead). Is that correct? And if so, why is that?
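One way to answer this kind of question is EXPLAIN QUERY PLAN, which is much terser than raw EXPLAIN. A reduced, self-contained sketch (the two tables are cut down to the columns the query touches, and the index name ix_wda is made up for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Miniature stand-ins for the two tables in the thread.
con.execute("CREATE TABLE discovery(wlbNpdidWellbore INTEGER)")
con.execute("CREATE TABLE wellbore_development_all("
            "wlbWellboreName TEXT, wlbNpdidWellbore INTEGER)")
con.execute("CREATE INDEX ix_wda ON wellbore_development_all(wlbNpdidWellbore)")

plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT count(qview1.wlbWellboreName) FROM discovery qview2 "
    "CROSS JOIN wellbore_development_all qview1 "
    "WHERE qview1.wlbNpdidWellbore = qview2.wlbNpdidWellbore"
).fetchall()
for row in plan:
    print(row)   # CROSS JOIN pins discovery as the outer loop; qview1 uses ix_wda
```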

Thanks,
Dimitris


discovery table:
sqlite> .schema
CREATE TABLE discovery( dscName TEXT,cmpLongName 
TEXT,dscCurrentActivityStatus TEXT,dscHcType TEXT,wlbName TEXT,nmaName 
TEXT,fldName TEXT,dscDateFromInclInField TEXT,dscDiscoveryYear 
INTEGER,dscResInclInDiscoveryName TEXT,dscOwnerKind TEXT,dscOwnerName 
TEXT,dscNpdidDiscovery INTEGER,fldNpdidField INTEGER,wlbNpdidWellbore 
INTEGER,dscFactPageUrl TEXT,dscFactMapUrl TEXT,dscDateUpdated 
TEXT,dscDateUpdatedMax TEXT,dateSyncNPD TEXT);
CREATE UNIQUE INDEX "discovery_dscNpdidDiscovery_pkey" ON "discovery" 
("dscNpdidDiscovery")
;
CREATE INDEX "IX368_discovery" ON "discovery"
(
"wlbNpdidWellbore"
)

;
CREATE INDEX "IX367_discovery" ON "discovery"
(
"fldNpdidField"
)

;

sqlite> select * from sqlite_stat1;
discovery|IX367_discovery|42100 5
discovery|IX368_discovery|42100 2
discovery|discovery_dscNpdidDiscovery_pkey|42100 1



wellbore_development_all table without primary key declaration:
sqlite> .schema
CREATE TABLE wellbore_development_all( wlbWellboreName TEXT,wlbWell 
TEXT,wlbDrillingOperator TEXT,wlbDrillingOperatorGroup 
TEXT,wlbProductionLicence TEXT,wlbPurposePlanned TEXT,wlbContent 
TEXT,wlbWellType TEXT,wlbEntryDate TEXT,wlbCompletionDate TEXT,wlbField 
TEXT,wlbDrillPermit TEXT,wlbDiscovery TEXT,wlbDiscoveryWellbore 
TEXT,wlbKellyBushElevation REAL,wlbFinalVerticalDepth REAL,wlbTotalDepth 
REAL,wlbWaterDepth REAL,wlbMainArea TEXT,wlbDrillingFacility 
TEXT,wlbFacilityTypeDrilling TEXT,wlbProductionFacility 
TEXT,wlbLicensingActivity TEXT,wlbMultilateral TEXT,wlbContentPlanned 
TEXT,wlbEntryYear INTEGER,wlbCompletionYear 
INTEGER,wlbReclassFromWellbore TEXT,wlbPlotSymbol 
INTEGER,wlbGeodeticDatum TEXT,wlbNsDeg INTEGER,wlbNsMin INTEGER,wlbNsSec 
REAL,wlbNsCode TEXT,wlbEwDeg INTEGER,wlbEwMin INTEGER,wlbEwSec 
REAL,wlbEwCode TEXT,wlbNsDecDeg REAL,wlbEwDesDeg REAL,wlbNsUtm 
REAL,wlbEwUtm REAL,wlbUtmZone INTEGER,wlbNamePart1 INTEGER,wlbNamePart2 
INTEGER,wlbNamePart3 TEXT,wlbNamePart4 INTEGER,wlbNamePart5 
TEXT,wlbNamePart6 TEXT,wlbFactPageUrl TEXT,wlbFactMapUrl 
TEXT,wlbDiskosWellboreType TEXT,wlbDiskosWellboreParent 
TEXT,wlbNpdidWellbore INTEGER,dscNpdidDiscovery INTEGER,fldNpdidField 
INTEGER,wlbWdssQcdate TEXT,prlNpdidProductionLicence 
INTEGER,fclNpdidFacilityDrilling INTEGER,fclNpdidFacilityProducing 
INTEGER,wlbNpdidWellboreReclass INTEGER,wlbDiskosWellOperator 
TEXT,wlbDateUpdated TEXT,wlbDateUpdatedMax TEXT,dateSyncNPD TEXT);
CREATE UNIQUE INDEX "wellbore_development_all_wlbNpdidWellbore_pkey" ON 
"wellbore_development_all" ("wlbNpdidWellbore")
;
CREATE INDEX "IX1608_wellbore_development_all" ON "wellbore_development_all"
(
"prlNpdidProductionLicence"
)

;
CREATE INDEX "IX1607_wellbore_development_all" ON "wellbore_development_all"
(
"fldNpdidField"
)

;
CREATE INDEX "IX1610_wellbore_development_all" ON "wellbore_development_all"
(
"wlbDiskosWellOperator"
)

;
CREATE INDEX "IX1605_wellbore_development_all" ON "wellbore_development_all"
(
"wlbDrillingOperator"
)

;
CREATE INDEX "IX1606_wellbore_development_all" ON "wellbore_development_all"
(
"dscNpdidDiscovery"
)

;
CREATE INDEX "IX1609_wellbore_development_all" ON "wellbore_development_all"
(
"wlbNpdidWellboreReclass"
)

;
sqlite> select * from sqlite_stat1;
wellbore_development_all|IX1609_wellbore_development_all|389400 78
wellbore_development_all|IX1606_wellbore_development_all|389400 32
wellbore_development_all|IX1605_wellbore_development_all|389400 126

[sqlite] Data integrity in sqlite in qt

2016-04-22 Thread kuppesh
Hi Users,

Could you please guide me to understand data integrity in sqlite in qt
framework?

Regards,
Kuppappa
M.8747801297.
E-Mail:kuppesh.ds at gmail.com


[sqlite] SQLITE_CANTOPEN on Android

2016-04-22 Thread Christian Werner
On 04/22/2016 03:46 PM, Richard Hipp wrote:

> Why isn't /var/tmp or /tmp usable on Android?

There ain't no "/var/tmp" nor "/tmp" on droids. The best of all the bad
alternatives is to use the application's own directory, or better, the
"cache" subdir therein.

Best,
Christian



[sqlite] Is it possible that dropping a big table takes very long

2016-04-22 Thread Rowan Worth
On 22 April 2016 at 16:00, Cecil Westerhof  wrote:

> What I find very interesting is that the user time and the sys time do
> not increase significantly, but the real time does. Does this point to the
> problem, or is this to be expected?
>

It suggests the extra time is spent waiting for I/O (user/sys are measures
of cpu time).
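The distinction is easy to see in a few lines of Python: process_time() counts CPU time (roughly user+sys), perf_counter() counts wall-clock ("real") time, and a blocking wait only shows up in the latter (sleep stands in for disk I/O here):

```python
import time

start_wall = time.perf_counter()
start_cpu = time.process_time()      # user + sys CPU time for this process
time.sleep(0.2)                      # stands in for blocking on disk I/O
wall = time.perf_counter() - start_wall
cpu = time.process_time() - start_cpu
print(f"real ~{wall:.2f}s  cpu ~{cpu:.3f}s")   # real grows, cpu barely moves
```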
-Rowan


[sqlite] SQLITE_CANTOPEN on Android

2016-04-22 Thread Martin Trnovec
Yes, it looks like the "." folder is now correctly detected as not
accessible, and sqlite returns "error: SQLITE_IOERR: disk I/O error",
which confirms the fix.

but still, is there a preferred way to set the directory for those
transient files? Should we use env "SQLITE_TMPDIR" or is there a
better way?

kr
Martin

On 22.04.2016 at 15:22, Richard Hipp wrote:
> On 4/22/16, Martin Trnovec  wrote:
>> It seems that the problem is/was that the application can't create
>> transient files in the "." directory returned by
>> os_unix.c::unixTempFileDir due to Android os file restrictions.
> Does the fix at https://www.sqlite.org/src/info/67985761aa93fb61 help?
>
>> It also
>> looks like this function is not able to correctly check whether the
>> folder is writable for transient files; there are some checks, but all
>> of them passed. Setting the correct temp directory helped, but
>>
>> is there a preferred way to set a user temp directory for sqlite ?
>>
>> I saw some mentions of the env variable SQLITE_TMPDIR but not sure if that's
>> the best practice ?
>>
>> kr
>> Martin
>>
>> On 22.04.2016 at 13:09, Richard Hipp wrote:
>>> On 4/22/16, Martin Trnovec  wrote:
 Hello,

 we are using sqlite 3.12.1 on an Android device and we are trying to copy
 the content of one table into another table using

 INSERT OR REPLACE INTO  SELECT * FROM 

 This command will fail on Android with error code SQLITE_CANTOPEN when
  -  has any triggers configured (also empty one like
 "select 1" )
 - and the number of rows to be inserted is larger (currently it fails
 if it's 9226 rows but not with 9225 rows)
>>> My guess: unable to open a transient file to store the statement journal.
>>>
>>> Have you tried activating the error and warning log for more details:
>>> https://www.sqlite.org/errlog.html
>>>
 the same code works perfectly on Windows or Mac OS or iOS; when we
 dropped the trigger it also works perfectly on Android ?

 Any hints ?

 Martin


>>
>> --
>> ---
>> Mgr Martin Trnovec
>> Head of development
>>
>> Pipelinersales Inc.
>> Röntgenova 26, 851 01 Bratislava, Slovakia
>>
>> @: martin.trnovec at pipelinersales.com | www.pipelinersales.com
>> ---
>>
>>
>


-- 
---
Mgr Martin Trnovec
Head of development

Pipelinersales Inc.
Röntgenova 26, 851 01 Bratislava, Slovakia

@: martin.trnovec at pipelinersales.com | www.pipelinersales.com
---



[sqlite] No datasize field - why?

2016-04-22 Thread Dominique Devienne
On Fri, Apr 22, 2016 at 3:09 PM, Igor Korot  wrote:

> [code]
> SQLite version 3.9.2 2015-11-02 18:31:45
> Enter ".help" for usage hints.
> sqlite> PRAGMA table_info(leagues);
> 0|id|integer|0||1
> 1|name|varchar(100)|0||0
> 2|drafttype|integer(1)|0||0
> 3|scoringtype|integer(1)|0||0
> 4|roundvalues|integer(1)|0||0
> 5|leaguetype|char(5)|0||0
> sqlite>
> [/code]
>
> The field type is set to be "varchar(100)" for the name field.
> Wouldn't it be more logical to have it as "varchar" and have another field
> for data size?
>

Sometimes there's the precision too, for non-integer numbers, so that would
be two "fields" for type-related "sizes".
Given that SQLite doesn't care about sizes, and has no limits (besides
https://www.sqlite.org/limits.html), it only shows you what you specified
textually, I guess.
I'd welcome size and precision fields too, but I doubt they'd come.

But that brings up another interesting question I never thought about. Is
there a way to determine the type affinity of a column?

PRAGMA table_info obviously doesn't show it. There's typeof() for values,
but what about the actual column's affinity?
There's doc in https://www.sqlite.org/datatype3.html section 2.1, but is
there no actual "programmatic" way to get it?
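Not that I know of a built-in one, but the published rules are mechanical enough to mirror in client code. A sketch of the affinity determination rules from datatype3.html (rule order matters, which is why e.g. 'FLOATING POINT' comes out INTEGER: the 'INT' substring rule wins first):

```python
def column_affinity(decl):
    """Type affinity of a declared column type, mirroring the rules in
    https://www.sqlite.org/datatype3.html ("Determination Of Column
    Affinity"). The rules are checked in this exact order."""
    t = (decl or "").upper()
    if "INT" in t:
        return "INTEGER"
    if "CHAR" in t or "CLOB" in t or "TEXT" in t:
        return "TEXT"
    if not t or "BLOB" in t:
        return "BLOB"
    if "REAL" in t or "FLOA" in t or "DOUB" in t:
        return "REAL"
    return "NUMERIC"

for decl in ("varchar(100)", "integer(1)", "FLOAT(25,2)", "", "DECIMAL(10,5)"):
    print(repr(decl), "->", column_affinity(decl))
```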

Thanks, --DD


[sqlite] SQLITE_CANTOPEN on Android

2016-04-22 Thread Martin Trnovec
It seems that the problem is/was that the application can't create 
transient files in the "." directory returned by 
os_unix.c::unixTempFileDir due to Android OS file restrictions. It also
looks like this function is not able to correctly check whether the
folder is writable for transient files; there are some checks, but all
of them passed. Setting the correct temp directory helped, but

Is there a preferred way to set a user temp directory for sqlite ?

I saw some mentions of the env variable SQLITE_TMPDIR but not sure if that's
the best practice ?

kr
Martin

On 22.04.2016 at 13:09, Richard Hipp wrote:
> On 4/22/16, Martin Trnovec  wrote:
>> Hello,
>>
>> we are using sqlite 3.12.1 on an Android device and we are trying to copy
>> the content of one table into another table using
>>
>> INSERT OR REPLACE INTO  SELECT * FROM 
>>
>> This command will fail on Android with error code SQLITE_CANTOPEN when
>> -  has any triggers configured (also empty one like
>> "select 1" )
>> - and the number of rows to be inserted is larger (currently it fails
>> if it's 9226 rows but not with 9225 rows)
> My guess: unable to open a transient file to store the statement journal.
>
> Have you tried activating the error and warning log for more details:
> https://www.sqlite.org/errlog.html
>
>> the same code works perfectly on Windows or Mac OS or iOS; when we
>> dropped the trigger it also works perfectly on Android ?
>>
>> Any hints ?
>>
>> Martin
>>
>>
>


-- 
---
Mgr Martin Trnovec
Head of development

Pipelinersales Inc.
Röntgenova 26, 851 01 Bratislava, Slovakia

@: martin.trnovec at pipelinersales.com | www.pipelinersales.com
---



[sqlite] BUG?

2016-04-22 Thread Rowan Worth
On 22 April 2016 at 14:54, Stephan Beal  wrote:

> On Fri, Apr 22, 2016 at 8:43 AM, Clemens Ladisch 
> wrote:
> >  sqlite> select julianday('2000-01-01 00:00:00');
> > ...> select julianday('2000-01-01 00:00:01');
> > ...> select julianday('2000-01-01 00:00:02');
> >  2451544.5
> >  2451544.50001157
> >  2451544.50002315
>
> but i beg to differ that that works in 100% of cases.
>

Let's see: for a 64-bit float we have 53 bits of significand. The number
before the decimal point (2,451,544) accounts for 22 bits, so we have 31
left for the fraction, i.e. the precision we can represent is on the order
of 2^-31 (around 4.6566e-10). To represent one second in a julian day, we
need precision of 1 / 86,400 (around 1.1574e-5).

So we have plenty of precision available, definitely enough to support
milliseconds.

On julianday 4,194,304 (6771-07-07 12:00:00) we'll lose one bit of
precision, but there's still enough for milliseconds. It's at least a
million years before the fraction is imprecise enough to cause a *second*
of ambiguity. Pretty sure your conversion issues are not the fault of the
julian day format :)
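The arithmetic above is easy to double-check in Python; math.ulp() (Python 3.9+) gives the spacing between adjacent doubles at a given magnitude:

```python
import math

jd = 2451544.5               # julian day for 2000-01-01 00:00:00
eps = math.ulp(jd)           # gap to the next representable double: 2**-31 here
one_second = 1.0 / 86400.0   # one second as a fraction of a day
one_ms = one_second / 1000.0
print(f"ulp={eps:.3e}  1s={one_second:.3e}  1ms={one_ms:.3e}")
```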

-Rowan


[sqlite] SQLITE_CANTOPEN on Android

2016-04-22 Thread Simon Slavin

On 22 Apr 2016, at 2:39pm, Martin Trnovec  wrote:

> but still, is there a preferred way to set the directory for those transient
> files? Should we use env "SQLITE_TMPDIR" or is there a better way?

Also, when is that variable read ?  Is it read in sqlite3_initialize(), so you 
have to set it before your first sqlite3_() call ?  Or is it read just before 
each temporary file is created ?

Simon.


[sqlite] No datasize field - why?

2016-04-22 Thread Simon Slavin

On 22 Apr 2016, at 2:09pm, Igor Korot  wrote:

> The field type is set to be "varchar(100)" for the name field.
> Wouldn't it be more logical to have it as "varchar" and have another field
> for data size?

What you are seeing there is SQLite just repeating back the type that the 
CREATE TABLE command used.  It is not the type of data SQLite is actually 
storing.

There is no 'varchar' datatype in SQLite.  And it never truncates strings.
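A quick demonstration of the no-truncation point, reusing the column declaration from Igor's schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE leagues(name varchar(100))")

# Insert a string five times longer than the declared "100".
con.execute("INSERT INTO leagues VALUES (?)", ("x" * 500,))
stored, n = con.execute("SELECT name, length(name) FROM leagues").fetchone()
print(n)   # 500 -- SQLite stored the whole string, not 100 characters
```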

I think that the best thing for you may be to read this page



and get back to us if you have further questions.

Simon.


[sqlite] Is it possible that dropping a big table takes very long

2016-04-22 Thread Cecil Westerhof
2016-04-22 14:06 GMT+02:00 E.Pasma :

>
> 22 apr 2016, Cecil Westerhof:
>
>>
>> With createBigTable.sh ...
>>
> Can you paste the script in the message? Attachments are not sent.
>

createBigTable.sh:
#!/usr/bin/env bash

# An error should terminate the script
# An unset variable is also an error
set -o errexit
set -o nounset


declare -r INSERT_TEMPLATE="INSERT INTO testUniqueUUIDBig
SELECT uuid, %d FROM testUniqueUUID
;
"
declare -r NR_OF_COPIES=10

declare insert=""


function getInsertStr {
printf "${INSERT_TEMPLATE}" "${1}"
}


for i in $(seq "${NR_OF_COPIES}") ; do
insert+="$(getInsertStr ${i})
"
done

sqlite3 checkUUID.sqlite <<EOT
.echo ON
.timer ON
DROP TABLE IF EXISTS testUniqueUUIDBig;
CREATE TABLE testUniqueUUIDBig (
UUID blob,
count int,

PRIMARY KEY(UUID, count)
CHECK(TYPEOF(UUID) = 'blob'   AND
  LENGTH(UUID) = 16   AND
  SUBSTR(HEX(UUID), 13, 1) == '4' AND
  SUBSTR(HEX(UUID), 17, 1) IN ('8', '9', 'A', 'B')
)
);
${insert}
EOT

[sqlite] Data integrity in sqlite in qt

2016-04-22 Thread Simon Slavin

On 22 Apr 2016, at 12:33pm, kuppesh  wrote:

> Could you please guide me to understand data integrity in sqlite in qt
> framework?

Can you ask your question another way ?  Perhaps use simpler terms than 'data 
integrity' ?

Are you worried about your data being lost ?

Are you trying to make sure two data structures never disagree with one another ?

Simon.


[sqlite] Is it possible that dropping a big table takes very long

2016-04-22 Thread E.Pasma

22 apr 2016, Cecil Westerhof:
>
> With createBigTable.sh ...
Can you paste the script in the message? Attachments are not sent.
Regards, E.Pasma


[sqlite] Pretty-printing. Was: huge difference between clustered/nonclustered index usage

2016-04-22 Thread Richard Hipp
On 4/22/16, Dimitris Bilidas  wrote:
> CREATE TABLE discovery( dscName TEXT,cmpLongName
> TEXT,dscCurrentActivityStatus TEXT,dscHcType TEXT,wlbName TEXT,nmaName
> TEXT,fldName TEXT,dscDateFromInclInField TEXT,dscDiscoveryYear
> INTEGER,dscResInclInDiscoveryName TEXT,dscOwnerKind TEXT,dscOwnerName
> TEXT,dscNpdidDiscovery INTEGER,fldNpdidField INTEGER,wlbNpdidWellbore
> INTEGER,dscFactPageUrl TEXT,dscFactMapUrl TEXT,dscDateUpdated
> TEXT,dscDateUpdatedMax TEXT,dateSyncNPD TEXT);

In the next release, the ".schema" command will support a command-line
option "--indent" which does pretty-printing of the schema, turning
the above into the example shown below.  The pretty-printer is simple
but it seems to work.  I would appreciate it if readers of this list
would build the latest trunk version of SQLite, try out this new
feature on their own schemas, and send me examples of cases where it
falls down, so that it can be improved.  Also, suggestions for a
better option name (other than "--indent") are welcomed.

CREATE TABLE discovery(
  dscName TEXT,
  cmpLongName TEXT,
  dscCurrentActivityStatus TEXT,
  dscHcType TEXT,
  wlbName TEXT,
  nmaName TEXT,
  fldName TEXT,
  dscDateFromInclInField TEXT,
  dscDiscoveryYear INTEGER,
  dscResInclInDiscoveryName TEXT,
  dscOwnerKind TEXT,
  dscOwnerName TEXT,
  dscNpdidDiscovery INTEGER,
  fldNpdidField INTEGER,
  wlbNpdidWellbore INTEGER,
  dscFactPageUrl TEXT,
  dscFactMapUrl TEXT,
  dscDateUpdated TEXT,
  dscDateUpdatedMax TEXT,
  dateSyncNPD TEXT
);


-- 
D. Richard Hipp
drh at sqlite.org


[sqlite] huge difference between clustered/nonclustered index usage

2016-04-22 Thread Richard Hipp
On 4/22/16, Dimitris Bilidas  wrote:
>
> The query is:
> SELECT count(qview1."wlbWellboreName")  FROM "discovery" qview2 CROSS
> JOIN  "wellbore_development_all" qview1   WHERE
> (qview1."wlbNpdidWellbore" = qview2."wlbNpdidWellbore");

You are aware that SQLite uses the CROSS JOIN syntax as a way of
controlling the query planner, and limiting the number of options that
the query planner looks at, right?  (See
https://www.sqlite.org/optoverview.html#crossjoin for more
information.)  I don't think this is important here, but it might be.

>
> If I
> replace qview1."wlbWellboreName" with * it runs in a couple of seconds
> on cold cache. If I create the table "wellbore_development_all" from the
> beginning setting wlbNpdidWellbore as primary key, the query runs in
> under 3 seconds on cold cache and in a couple of tens of milliseconds on
> warm cache.

This sounds to me like you have already solved your problem, no?  Just
use an INTEGER PRIMARY KEY rather than a separate index.

>
> I would normally expect that
> the difference would be very small: one more page fetch for each value
> of wlbNpdidWellbore that is matched in the index of
> wellbore_development_all

A single page fetch can take a long time when the cache is cold.  The
appropriate disk sector needs to rotate under the read head, which on
a 3600 RPM disk drive takes an average of 8 milliseconds.  Multiply by
the number of pages that need to be fetched (hundreds of thousands
according to your ANALYZE stats) and that can be slow.
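Back-of-the-envelope version of that arithmetic, using the wellbore_development_all row count from the sqlite_stat1 output earlier in the thread as a stand-in for the number of uncached fetches (that substitution is an assumption, not a measurement):

```python
rpm = 3600
ms_per_revolution = 60_000.0 / rpm          # ~16.7 ms per revolution
avg_latency_ms = ms_per_revolution / 2.0    # average wait is half a turn: ~8.3 ms

pages = 389_400                             # from sqlite_stat1 in the thread
worst_case_minutes = pages * avg_latency_ms / 1000.0 / 60.0
print(f"~{avg_latency_ms:.1f} ms per fetch; "
      f"{pages} cold fetches could cost ~{worst_case_minutes:.0f} minutes")
```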
-- 
D. Richard Hipp
drh at sqlite.org


[sqlite] SQLITE_CANTOPEN on Android

2016-04-22 Thread Martin Trnovec
Hello,

we are using sqlite 3.12.1 on an Android device and we are trying to copy
the content of one table into another table using

INSERT OR REPLACE INTO  SELECT * FROM 

This command will fail on Android with error code SQLITE_CANTOPEN when
   -  has any triggers configured (also empty one like 
"select 1" )
   - and the number of rows to be inserted is larger (currently it fails
if it's 9226 rows but not with 9225 rows)

the same code works perfectly on Windows or Mac OS or iOS; when we
dropped the trigger it also works perfectly on Android ?

Any hints ?

Martin



[sqlite] Is it possible that dropping a big table takes very long

2016-04-22 Thread Richard Hipp
On 4/22/16, Rowan Worth  wrote:
>
> I've written this under the presumption that sqlite touches every database
> page that was associated with a table during the delete/drop... I can think
> of some optimisations allowing much of the i/o to be skipped (at least
> when secure_delete isn't set), but I'm not sure exactly what sqlite
> does.
>

It does the optimizations.

-- 
D. Richard Hipp
drh at sqlite.org


[sqlite] Is it possible that dropping a big table takes very long

2016-04-22 Thread Cecil Westerhof
2016-04-22 10:12 GMT+02:00 Rowan Worth :

> On 22 April 2016 at 16:00, Cecil Westerhof  wrote:
>
> > What I find very interesting is that the user time and the sys time do
> > not increase significantly, but the real time does. Does this point to
> > the problem, or is this to be expected?
> >
>
> It suggests the extra time is spent waiting for I/O (user/sys are measures
> of cpu time).
>

OK, so that is expected/normal?

-- 
Cecil Westerhof


[sqlite] Is it possible that dropping a big table takes very long

2016-04-22 Thread Cecil Westerhof
2016-04-21 8:16 GMT+02:00 Cecil Westerhof :

>
> 2016-04-21 7:50 GMT+02:00 Cecil Westerhof :
>
>> I think it is an edge case. On my real system I only got this when there
>> were 1E8 records. I am now testing on very old (8 year) hardware to and
>> from work.
>> The processor is:
>> Intel(R) Atom(TM) CPU N270   @ 1.60GHz
>> with 1 GB of RAM. I am now generating a database with 1E6 records (will
>> take some time) and will then check with the small database. I expect that
>> I then will not see this kind of thing. That would explain why nobody else
>> sees it.
>>
>
> As I expected: with a tenth of the records, the performance is as
> expected: DROP is faster than DELETE-and-DROP. (Even faster than the DELETE.)
>
>
> But I am afraid that uploading a file of 4 GB could be a challenge.
>>
>
> I have an idea how to get this done, but I have to try some things out.
> That is a little difficult while traveling. ;-)
>

Well, I did it. And I want to share the results. I think I found a way to
get the results reproduced on a "normal" system.


With createBigTable.sh it is possible to create the big table needed to
reproduce the problem. In my case multiplication with a factor ten was
enough, but when more is needed the script is easily modified. (Just change
NR_OF_COPIES to 25 for example.)

What I find very interesting is that the user time and the sys time do
not increase significantly, but the real time does. Does this point to the
problem, or is this to be expected?


Then you have to cp checkUUID.sqlite to checkUUID.sqlite.bck and the
testing can begin.


For this I wrote performanceTest.sh. It does a drop for both tables and a
delete-with-drop for both tables, first with SECURE_DELETE=1 and then with
SECURE_DELETE=0.
This shows that it is not only the size of the table, but also the size of
the database that is significant. The table which had no problems before
now also takes long. In all cases the DELETE-and-DROP is significantly
faster than the DROP alone. The strange thing is that the difference is
bigger for the small table than for the big table.


I hope that it is now possible to reproduce the problem on "normal"
systems. When more information is needed, let me know.

-- 
Cecil Westerhof


[sqlite] Multiple in-memory database table query

2016-04-22 Thread Dominique Devienne
On Thu, Apr 21, 2016 at 8:20 PM, Dan Kennedy  wrote:

>   Generally speaking, no matter how they are created, a given in-memory
>> database has one and only one connection.  You cannot, for example, use a
>> URI 'filename' with mode=memory to open the same in-memory database more
>> than once (I assume that's what you mean by 'by name'?).[...]
>>
>
Yes you can, Jay; see Dan's answer.
And https://www.sqlite.org/sharedcache.html section 6.0:

Enabling shared-cache for an in-memory database allows two or more database
connections in the same process to have access to the same in-memory
database.
An in-memory database in shared cache is automatically deleted and memory
is reclaimed when the last connection to that database closes.
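A minimal demonstration of that section, using Python's sqlite3 binding (any binding that passes URI filenames through should behave the same; the database name data.db is arbitrary):

```python
import sqlite3

uri = "file:data.db?mode=memory&cache=shared"
db1 = sqlite3.connect(uri, uri=True)
db2 = sqlite3.connect(uri, uri=True)   # second connection, same in-memory db

db1.execute("CREATE TABLE t(x)")
db1.execute("INSERT INTO t VALUES (42)")
db1.commit()

rows = db2.execute("SELECT x FROM t").fetchall()
print(rows)   # db2 sees what db1 wrote
```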


>> You can see that even though I've opened the same
>> 'file:data.db?mode=memory' database more than once, it is actually three
>> distinct databases.  I'm pretty sure that when mode=memory, the
>> path/filename are ignored.
>
>

> If you open the databases using "file:data.db?mode=memory&cache=shared"
> then db1, db2 and db3 will all refer to the same in-memory database. Which
> will fail, as you cannot attach the same db to a single handle more than
> once.




> But, if your process has two separate database handles and they both
> attach "file:data.db?mode=memory&cache=shared" then both handles will be
> connected to the same in-memory database.


This is exactly what we are doing. Now I keep reading on this list that
cache=shared is for constrained devices and one should never use it.

But how else to use an in-memory database from multiple threads and
leverage table-level locking as opposed to DB-level locking?

We are heavy users of vtables and in-memory DBs, plural, within the same
process, and would like to have some parallelism on tables.
I'd even wish for WAL-type MVCC for in-memory databases to avoid locks even
more. Is WAL mode supported for in-memory DBs?

What is "wrong" with shared cache that people keep saying don't use it?
Dan, Richard, is shared cache broken somehow?

Thanks, --DD


[sqlite] SQLITE_CANTOPEN on Android

2016-04-22 Thread Richard Hipp
On 4/22/16, Martin Trnovec  wrote:
>
> but still, is there a preferred way to set the directory for those
> transient files? Should we use env "SQLITE_TMPDIR" or is there a
> better way?

Setting SQLITE_TMPDIR is the preferred way.

Why isn't /var/tmp or /tmp usable on Android?

-- 
D. Richard Hipp
drh at sqlite.org


[sqlite] Is it possible that dropping a big table takes very long

2016-04-22 Thread Adam Devita
In general, CPUs got much faster than disk IO a long time ago, so it
is expected that a single-threaded, write-through-to-disk program would
have lots of time where the CPU is waiting for disk IO to complete.
(BTW: a common error of novice db programmers is using a disk-based db
to store variables instead of handling them in memory, thus reducing an
L1-cache or register operation to the speed of a disk.)

The technology of caching is an attempt to win performance with
smaller, faster caches closer to the CPU, exploiting temporal or
(memory) spatial locality to avoid going all the way to the disk as
much as possible.  The list has had more than a few discussions of
people using SSDs to increase performance, or even caching,
mission-critical RAID controllers to win speed.

That said, why is dropping a table dependent on the size of the
table?  Does SQLite have to mark every block of memory it used as
dropped?  (This is obvious for high-security mode, but otherwise?)

regards,
Adam DeVita


On Fri, Apr 22, 2016 at 8:23 AM, Cecil Westerhof  
wrote:
> 2016-04-22 14:06 GMT+02:00 E.Pasma :
>
>>
>> 22 apr 2016, Cecil Westerhof:
>>
>>>
>>> With createBigTable.sh ...
>>>
>> Can you paste the script in the message? Attachments are not sent.
>>
>
> createBigTable.sh:
> #!/usr/bin/env bash
>
> # An error should terminate the script
> # An unset variable is also an error
> set -o errexit
> set -o nounset
>
>
> declare -r INSERT_TEMPLATE="INSERT INTO testUniqueUUIDBig
> SELECT uuid, %d FROM testUniqueUUID
> ;
> "
> declare -r NR_OF_COPIES=10
>
> declare insert=""
>
>
> function getInsertStr {
> printf "${INSERT_TEMPLATE}" "${1}"
> }
>
>
> for i in $(seq "${NR_OF_COPIES}") ; do
> insert+="$(getInsertStr ${i})
> "
> done
>
> sqlite3 checkUUID.sqlite <<EOT
> .echo ON
> .timer ON
> DROP TABLE IF EXISTS testUniqueUUIDBig;
> CREATE TABLE testUniqueUUIDBig (
> UUID blob,
> count int,
>
> PRIMARY KEY(UUID, count)
> CHECK(TYPEOF(UUID) = 'blob'   AND
>   LENGTH(UUID) = 16   AND
>   SUBSTR(HEX(UUID), 13, 1) == '4' AND
>   SUBSTR(HEX(UUID), 17, 1) IN ('8', '9', 'A', 'B')
> )
> );
> ${insert}
> EOT
>
>
> The logging:
> .timer ON
> DROP TABLE IF EXISTS testUniqueUUIDBig;
> Run Time: real 293.257 user 6.708000 sys 28.844000
> CREATE TABLE testUniqueUUIDBig (
> UUID blob,
> count int,
>
> PRIMARY KEY(UUID, count)
> CHECK(TYPEOF(UUID) = 'blob'   AND
>   LENGTH(UUID) = 16   AND
>   SUBSTR(HEX(UUID), 13, 1) == '4' AND
>   SUBSTR(HEX(UUID), 17, 1) IN ('8', '9', 'A', 'B')
> )
> );
> Run Time: real 0.277 user 0.00 sys 0.00
> INSERT INTO testUniqueUUIDBig
> SELECT uuid, 1 FROM testUniqueUUID
> ;
> Run Time: real 89.930 user 48.872000 sys 28.196000
> INSERT INTO testUniqueUUIDBig
> SELECT uuid, 2 FROM testUniqueUUID
> ;
> Run Time: real 133.674 user 56.416000 sys 43.032000
> INSERT INTO testUniqueUUIDBig
> SELECT uuid, 3 FROM testUniqueUUID
> ;
> Run Time: real 269.029 user 59.52 sys 48.84
> INSERT INTO testUniqueUUIDBig
> SELECT uuid, 4 FROM testUniqueUUID
> ;
> Run Time: real 356.622 user 61.196000 sys 51.956000
> INSERT INTO testUniqueUUIDBig
> SELECT uuid, 5 FROM testUniqueUUID
> ;
> Run Time: real 398.048 user 61.924000 sys 57.54
> INSERT INTO testUniqueUUIDBig
> SELECT uuid, 6 FROM testUniqueUUID
> ;
> Run Time: real 413.252 user 61.684000 sys 59.816000
> INSERT INTO testUniqueUUIDBig
> SELECT uuid, 7 FROM testUniqueUUID
> ;
> Run Time: real 464.911 user 61.672000 sys 63.20
> INSERT INTO testUniqueUUIDBig
> SELECT uuid, 8 FROM testUniqueUUID
> ;
> Run Time: real 545.974 user 61.90 sys 66.916000
> INSERT INTO testUniqueUUIDBig
> SELECT uuid, 9 FROM testUniqueUUID
> ;
> Run Time: real 695.315 user 64.016000 sys 69.692000
> INSERT INTO testUniqueUUIDBig
> SELECT uuid, 10 FROM testUniqueUUID
> ;
> Run Time: real 1129.854 user 64.428000 sys 76.704000
>
>
> performanceTest.sh:
> #!/usr/bin/env bash
>
> # An error should terminate the script
> # An unset variable is also an error
> set -o errexit
> set -o nounset
>
>
> declare -r DB=checkUUIDSmall.sqlite
> declare -r DEINIT=".timer OFF
> .echo OFF"
> declare -r INIT=".echo ON
> .timer ON"
> declare -r TABLE=testUniqueUUID
> declare -r TABLE_BIG=testUniqueUUIDBig
>
> declare -r DELETE_AND_DROP="DELETE FROM ${TABLE};
> DROP TABLE ${TABLE};
> DELETE FROM ${TABLE_BIG};
> DROP TABLE ${TABLE_BIG};"
> declare -r DROP="DROP TABLE ${TABLE};
> DROP TABLE ${TABLE_BIG};"
>
>
> function cpDB {
> giveMessage "Copying database"
> rm --force "${DB}"-journal
> cp "${DB}".bck "${DB}"
> }
>
> function giveMessage {
> printf "%s: %s\n" "$(date +%T)" "${1}"
> }
>
>
> cpDB
> giveMessage "DROP only"
> sqlite3 "${DB}" <<EOT
> ${INIT}
> ${DROP}
> ${DEINIT}
> EOT
>
> cpDB
> giveMessage "DELETE and DROP"
> sqlite3 "${DB}" <<EOT
> ${INIT}
> ${DELETE_AND_DROP}
> ${DEINIT}
> EOT
>
> cpDB
> giveMessage "SECURE_DELETE=0; DROP only"
> sqlite3 "${DB}" <<EOT
> ${INIT}
> 

[sqlite] SQLITE_CANTOPEN on Android

2016-04-22 Thread Richard Hipp
On 4/22/16, Martin Trnovec  wrote:
> It seems that the problem is/was that the application can't create
> transient files in the "." directory returned by
> os_unix.c::unixTempFileDir due to Android os file restrictions.

Does the fix at https://www.sqlite.org/src/info/67985761aa93fb61 help?

> It also
> looks like this function is not able to correctly check whether the
> folder is writable for transient files; there are some checks but all
> of them passed. Setting the correct temp directory helped, but
>
> Is there a preferred way to set a user temp directory for sqlite?
>
> I saw some references to the env variable SQLITE_TMPDIR but not sure if
> that's the best practice?
>
> kr
> Martin
>
> On 22.04.2016 at 13:09, Richard Hipp wrote:
>> On 4/22/16, Martin Trnovec  wrote:
>>> Hello,
>>>
>>> we are using sqlite 3.12.1 on an Android device and we are trying to copy
>>> content of one table into another table using
>>>
>>> INSERT OR REPLACE INTO <target-table> SELECT * FROM <source-table>
>>>
>>> This command will fail on Android with error code SQLITE_CANTOPEN when
>>> - <target-table> has any triggers configured (even an empty one like
>>> "select 1")
>>> - and the number of rows to be inserted is larger (currently it fails
>>> if it's 9226 rows but not with 9225 rows)
>> My guess: unable to open a transient file to store the statement journal.
>>
>> Have you tried activating the error and warning log for more details:
>> https://www.sqlite.org/errlog.html
>>
>>> the same code works perfectly on Windows or Mac OS or iOS; when we
>>> dropped the trigger it works also perfectly on Android?
>>>
>>> Any hints ?
>>>
>>> Martin
>>>
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users at mailinglists.sqlite.org
>>> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>>>
>>
>
>
> --
> ---
> Mgr Martin Trnovec
> Head of development
>
> Pipelinersales Inc.
> Röntgenova 26, 851 01 Bratislava, Slovakia
>
> @: martin.trnovec at pipelinersales.com | www.pipelinersales.com
> ---
>
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


-- 
D. Richard Hipp
drh at sqlite.org


[sqlite] BUG?

2016-04-22 Thread Stephan Beal
On Fri, Apr 22, 2016 at 9:18 AM, Rowan Worth  wrote:

> On 22 April 2016 at 14:54, Stephan Beal  wrote:
> > but i beg to differ that that works in 100% of cases.
> >
>
> Lets see, for a 64-bit float we have 53 bits of significand. The number
> ...of ambiguity. Pretty sure your conversion issues are not the fault of
> the
> julian day format :)
>

that may be and i can't prove otherwise. i'm gonna drop the topic here
because i'm limited to 1-hand typing and this is getting physically painful
:/.

-- 
- stephan beal
http://wanderinghorse.net/home/stephan/
http://gplus.to/sgbeal
"Freedom is sloppy. But since tyranny's the only guaranteed byproduct of
those who insist on a perfect world, freedom will have to do." -- Bigby Wolf


[sqlite] No datasize field - why?

2016-04-22 Thread Igor Korot
Hi, ALL,

[code]
SQLite version 3.9.2 2015-11-02 18:31:45
Enter ".help" for usage hints.
sqlite> PRAGMA table_info(league);
sqlite> PRAGMA table_info(leagues);
0|id|integer|0||1
1|name|varchar(100)|0||0
2|drafttype|integer(1)|0||0
3|scoringtype|integer(1)|0||0
4|roundvalues|integer(1)|0||0
5|leaguetype|char(5)|0||0
6|salary|integer|0||0
7|benchplayers|integer(1)|0||0
sqlite>
[/code]

The field type is set to be "varchar(100)" for the name field.
Wouldn't it be more logical to have it as "varchar" and have another field
for data size?

Thank you.

P.S.: I don't know how most databases interpret this, I know ODBC does it
this way
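Until SQLite splits the declared type for you, the parsing has to happen client-side; a sketch using Python's stdlib (the helper name `parse_declared_type` is mine):

```python
import re
import sqlite3

def parse_declared_type(decl):
    """Split a declared type such as 'VARCHAR(100)' or 'FLOAT(25,2)'
    into (base, size, scale); size and scale are None when absent."""
    m = re.match(r"\s*(\w+)\s*(?:\(\s*(\d+)\s*(?:,\s*(\d+)\s*)?\))?", decl or "")
    base = m.group(1).upper() if m else ""
    size = int(m.group(2)) if m and m.group(2) else None
    scale = int(m.group(3)) if m and m.group(3) else None
    return base, size, scale

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE leagues (id integer, name varchar(100), salary float(25,2))")
# PRAGMA table_info rows are (cid, name, decl_type, notnull, dflt_value, pk);
# SQLite echoes back the declared type verbatim in the third column.
cols = {name: parse_declared_type(decl)
        for _cid, name, decl, *_rest in conn.execute("PRAGMA table_info(leagues)")}
```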


[sqlite] BUG?

2016-04-22 Thread Clemens Ladisch
Stephan Beal wrote:
> On Fri, Apr 22, 2016 at 8:43 AM, Clemens Ladisch  
> wrote:
>> You can get problems only if
>> - you are not using enough precision, or
>> - the number does not represent a full second, but some random point
>>   somewhere in the middle between two whole seconds.
>
> The latter is exactly the case here - fossil remembers the millisecond part
> during commits

Then you need to use enough precision for milliseconds:

  sqlite> select julianday('2000-01-01 00:00:00.001') - julianday('2000-01-01 00:00:00.000');
  1.16415321826935e-08

I.e., one more fractional digit than SQLite would use by default.

SQLite's C API returns floats as binary 64-bit numbers, so the problem
is with how you format them.  You'd have to use printf("%.9f"), either
in SQL or in your code.
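The thread's example shows the cutoff directly; a sketch using Python's stdlib sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# The shell's default rendering (2457494.00706) drops enough digits that
# datetime() lands ~16 ms short of the whole second and truncates:
(short,) = conn.execute("SELECT datetime(2457494.00706)").fetchone()

# Keeping the full binary double round-trips exactly...
(jd,) = conn.execute("SELECT julianday('2016-04-15 12:10:10')").fetchone()
(full,) = conn.execute("SELECT datetime(?)", (jd,)).fetchone()

# ...and so does formatting with nine fractional digits, as suggested:
(nine,) = conn.execute(
    "SELECT datetime(printf('%.9f', julianday('2016-04-15 12:10:10')))"
).fetchone()
```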


Regards,
Clemens


[sqlite] BUG?

2016-04-22 Thread Stephan Beal
On Fri, Apr 22, 2016 at 8:43 AM, Clemens Ladisch  wrote:

> Stephan Beal wrote:
> > On Thu, Apr 21, 2016 at 4:12 PM, jrhgame  wrote:
> >> SELECT julianday('2016-04-15 12:10:10')  ==>2457494.00706
> >> SELECT datetime(2457494.00706)   ==>2016-04-15 12:10:09
> >
> > fwiw, i've done lots and lots of testing with round-trip conversions
> > between those two formats, and it cannot be done 100% reliably (at least
> on
> > consumer-grade hardware). There is always a minority percentage of cases
> > which round/truncate one second here or there.
>
> With enough precision, seconds can be handled just fine:
>

That's the operative term resp. limitation, yes.


>  sqlite> select julianday('2000-01-01 00:00:00');
> ...> select julianday('2000-01-01 00:00:01');
> ...> select julianday('2000-01-01 00:00:02');
>  2451544.5
>  2451544.50001157
>  2451544.50002315
>

but i beg to differ that that works in 100% of cases. My test inputs read
Julian timestamps from Fossil source repos and convert them back and forth
between ISO8601 using published algorithms for doing so (as opposed to
using sqlite for doing so). In such cases, a small minority (roughly
0.5-3%, but that's heavily platform-dependent (32-bit ARM performs more
poorly here)) exhibit 1-second truncation/rounding errors.


> You can get problems only if
> - you are not using enough precision, or
> - the number does not represent a full second, but some random point
>   somewhere in the middle between two whole seconds.
>

The latter is exactly the case here - fossil remembers the millisecond part
during commits, as shown on the "D" line here:

http://fossil-scm.org/index.html/artifact/4abf607937fac8e0

-- 
- stephan beal
http://wanderinghorse.net/home/stephan/
http://gplus.to/sgbeal
"Freedom is sloppy. But since tyranny's the only guaranteed byproduct of
those who insist on a perfect world, freedom will have to do." -- Bigby Wolf


[sqlite] BUG?

2016-04-22 Thread Clemens Ladisch
Stephan Beal wrote:
> On Thu, Apr 21, 2016 at 4:12 PM, jrhgame  wrote:
>> SELECT julianday('2016-04-15 12:10:10')  ==>2457494.00706
>> SELECT datetime(2457494.00706)   ==>2016-04-15 12:10:09
>
> fwiw, i've done lots and lots of testing with round-trip conversions
> between those two formats, and it cannot be done 100% reliably (at least on
> consumer-grade hardware). There is always a minority percentage of cases
> which round/truncate one second here or there.

With enough precision, seconds can be handled just fine:

 sqlite> select julianday('2000-01-01 00:00:00');
...> select julianday('2000-01-01 00:00:01');
...> select julianday('2000-01-01 00:00:02');
 2451544.5
 2451544.50001157
 2451544.50002315

The sqlite3 shell uses eight fractional digits; this happens to be more
than enough to ensure that the numbers cannot be rounded the wrong way.
(In any case, floats are stored as 64-bit binary.)

You can get problems only if
- you are not using enough precision, or
- the number does not represent a full second, but some random point
  somewhere in the middle between two whole seconds.


Regards,
Clemens


[sqlite] BUG?

2016-04-22 Thread Stephan Beal
On Thu, Apr 21, 2016 at 4:12 PM, jrhgame  wrote:

> SELECT julianday('2016-04-15 12:10:10')  ==>2457494.00706
> SELECT datetime(2457494.00706)   ==>2016-04-15 12:10:09
>

fwiw, i've done lots and lots of testing with round-trip conversions
between those two formats, and it cannot be done 100% reliably (at least on
consumer-grade hardware). There is always a minority percentage of cases
which round/truncate one second here or there. It's particularly ugly when
it happens on a day boundary. You may see that the percentage of
occurrences is higher on ARM platforms, compared to i64.

-- 
- stephan beal
http://wanderinghorse.net/home/stephan/
http://gplus.to/sgbeal
"Freedom is sloppy. But since tyranny's the only guaranteed byproduct of
those who insist on a perfect world, freedom will have to do." -- Bigby Wolf


[sqlite] SQLITE_CANTOPEN on Android

2016-04-22 Thread Richard Hipp
On 4/22/16, Martin Trnovec  wrote:
> Hello,
>
> we are using sqlite 3.12.1 on an Android device and we are trying to copy
> content of one table into another table using
>
> INSERT OR REPLACE INTO <target-table> SELECT * FROM <source-table>
>
> This command will fail on Android with error code SQLITE_CANTOPEN when
>- <target-table> has any triggers configured (even an empty one like
> "select 1")
>- and the number of rows to be inserted is larger (currently it fails
> if it's 9226 rows but not with 9225 rows)

My guess: unable to open a transient file to store the statement journal.

Have you tried activating the error and warning log for more details:
https://www.sqlite.org/errlog.html
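If that guess is right, one workaround (at the cost of RAM) is to keep temp storage out of the filesystem entirely; a sketch with Python's stdlib sqlite3 module reproducing the thread's shape:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Statement journals for INSERT OR REPLACE ... SELECT spill to a
# transient file once they outgrow the cache; forcing temp storage to
# memory sidesteps an unwritable temp directory.
conn.execute("PRAGMA temp_store = MEMORY")

conn.execute("CREATE TABLE src (id INTEGER PRIMARY KEY, v)")
conn.execute("CREATE TABLE dst (id INTEGER PRIMARY KEY, v)")
# An "empty" trigger like the reporter's, which forces a statement journal:
conn.execute("CREATE TRIGGER trg AFTER INSERT ON dst BEGIN SELECT 1; END")
conn.executemany("INSERT INTO src VALUES (?, ?)",
                 [(i, i) for i in range(10_000)])
conn.execute("INSERT OR REPLACE INTO dst SELECT * FROM src")
copied = conn.execute("SELECT count(*) FROM dst").fetchone()[0]
```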

>
> the same code works perfectly on Windows or Mac OS or iOS; when we
> dropped the trigger it works also perfectly on Android?
>
> Any hints ?
>
> Martin
>
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


-- 
D. Richard Hipp
drh at sqlite.org


[sqlite] BUG?

2016-04-22 Thread Keith Medcalf
On Friday, 22 April, 2016 02:24. Stephan Beal  said:

> On Thu, Apr 21, 2016 at 4:12 PM, jrhgame  wrote:

> > SELECT julianday('2016-04-15 12:10:10')  ==>2457494.00706
> > SELECT datetime(2457494.00706)   ==>2016-04-15 12:10:09

> fwiw, i've done lots and lots of testing with round-trip conversions
> between those two formats, and it cannot be done 100% reliably (at least
> on
> consumer-grade hardware). There is always a minority percentage of cases
> which round/truncate one second here or there. It's particularly ugly when
> it happens on a day boundary. You may see that the percentage of
> occurrences is higher on ARM platforms, compared to i64.

Actually on anything that uses IEEE-754 Double Precision floating point, 1 ULP 
for the current date is about 41 microseconds.  So round trips to and from 
julianday are entirely reliable, even on consumer grade hardware.

Around 400 AD 1 ULP was 23 microseconds.  In the year 9000 1 ULP will be about 
92 microseconds.

Seems pretty reliable (and accurate) to me.  It is just hoomanz being imprecise 
that are causing you difficulties (or perhaps premature optimization -- in this 
case rounding) -- it is not the computer systems.

Of course, single precision floating point will be much less accurate.  But who 
uses that for anything?






[sqlite] Multiple in-memory database table query

2016-04-22 Thread Dan Kennedy
>   Generally speaking, no matter how they are created, a given in-memory 
> database has one and only one connection.  You cannot, for example, use a URI 
> "filename" with mode=memory to open the same in-memory database more than 
> once (I assume that's what you mean by "by name"?).  For example:
>
>
> $ ./sqlite3
> SQLite version 3.8.4.2 2014-03-26 18:51:19
> Enter ".help" for usage hints.
> Connected to a transient in-memory database.
> Use ".open FILENAME" to reopen on a persistent database.
> sqlite> ATTACH DATABASE "file:data.db?mode=memory" AS db1;
> sqlite> ATTACH DATABASE "file:data.db?mode=memory" AS db2;
> sqlite> ATTACH DATABASE "file:data.db?mode=memory" AS db3;
> sqlite> .databases
> seq  name file
> ---  ---------------  ----------------------------------------
> 0    main
> 2    db1
> 3    db2
> 4    db3
> sqlite> CREATE TABLE main.t_main ( c );
> sqlite> CREATE TABLE db1.t_db1( c );
> sqlite> CREATE TABLE db2.t_db2( c );
> sqlite> select * from main.sqlite_master;
> table|t_main|t_main|2|CREATE TABLE t_main ( c )
> sqlite> select * from db1.sqlite_master;
> table|t_db1|t_db1|2|CREATE TABLE t_db1( c )
> sqlite> select * from db2.sqlite_master;
> table|t_db2|t_db2|2|CREATE TABLE t_db2( c )
> sqlite> select * from db3.sqlite_master;
> sqlite>
>
> You can see that even though I've opened the same "file:data.db?mode=memory" 
> database more than once, it is actually three distinct databases.  I'm pretty 
> sure that when mode=memory, the path/filename are ignored.

If you open the databases using "file:data.db?mode=memory&cache=shared" 
then db1, db2 and db3 will all refer to the same in-memory database. 
Which will fail, as you cannot attach the same db to a single handle 
more than once. But, if your process has two separate database handles 
and they both attach "file:data.db?mode=memory&cache=shared" then both 
handles will be connected to the same in-memory database.
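A minimal sketch of that second case (two separate handles opening the same shared-cache in-memory database), using Python's stdlib sqlite3 module:

```python
import sqlite3

uri = "file:data.db?mode=memory&cache=shared"

# Two distinct handles; cache=shared makes them open the *same*
# in-memory database, which lives as long as at least one handle
# stays open.
db1 = sqlite3.connect(uri, uri=True)
db2 = sqlite3.connect(uri, uri=True)

db1.execute("CREATE TABLE t (c)")
db1.execute("INSERT INTO t VALUES (42)")
db1.commit()

# The second handle sees data written through the first:
row = db2.execute("SELECT c FROM t").fetchone()
```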

Dan.