[sqlite] SQLITE_CANTOPEN on Android

2016-04-24 Thread Martin Trnovec
On 24.04.2016 at 8:39, Clemens Ladisch wrote:
> Richard Hipp wrote:
>> On 4/22/16, Christian Werner  wrote:
>>> On 04/22/2016 03:46 PM, Richard Hipp wrote:
 Why isn't /var/tmp or /tmp usable on Android?
>>> There ain't no "/var/tmp" nor "/tmp" on droids. Best of all worst 
>>> alternatives
>>> is to use the application's own directory or better the subdir "cache" 
>>> therein.
>> Is there a well-defined way to find the name of the application's own 
>> directory?
> In Java, there is Context.getCacheDir().
>
> It is available from C only if you have a reference to some Android
> object (Activity or Context), and do the dance of accessing the Java
> stuff:
> http://stackoverflow.com/questions/7595324/creating-temporary-files-in-android-with-ndk
>
> It would be possible for the Java code to set sqlite3_temp_directory,
> but the Android framework does not do it, and it would probably not be
> a good idea to require every app to do it.
>
>
> Android (and Chromium on Android) just use SQLITE_TEMP_STORE=3 for this
> reason:
> https://android.googlesource.com/platform/external/sqlite/+/master/dist/Android.mk
> https://bugs.chromium.org/p/chromium/issues/detail?id=138128
> https://codereview.chromium.org/10809015
>
> Apparently, Martin's SQLite library was compiled differently.
That's right, we have our own build of SQLite included in a common C++ 
library for all platforms, which is why we are facing this issue. We 
solved it by changing the temp dir to the application directory, which 
contains other files as well, but now I am considering using memory 
(SQLITE_TEMP_STORE=3).  But there is a big "but": we are using long and 
heavy transactions with long statements (INSERT OR REPLACE ... SELECT ... 
FROM ...) and we don't want the device to run out of memory, so does 
the length of a transaction somehow influence the size of the temp file?
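For reference, the compile-time SQLITE_TEMP_STORE=3 setting has a runtime counterpart, PRAGMA temp_store, which can force temporary tables and indices into memory per connection without a custom build. A minimal sketch using Python's bundled sqlite3 (not Android-specific):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# PRAGMA temp_store: 0 = DEFAULT, 1 = FILE, 2 = MEMORY.
# MEMORY keeps temporary tables/indices out of the filesystem,
# similar in effect to building with SQLITE_TEMP_STORE=3, but it
# can be chosen at runtime and per connection.
con.execute("PRAGMA temp_store = MEMORY")
print(con.execute("PRAGMA temp_store").fetchone()[0])  # → 2
```

The trade-off is the one raised above: whatever temporary structures a statement needs are then held in RAM instead of on disk.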

kr
Martin
>
>
> Regards,
> Clemens
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


-- 
---
Mgr Martin Trnovec
Head of development

Pipelinersales Inc.
Röntgenova 26, 851 01 Bratislava, Slovakia

@: martin.trnovec at pipelinersales.com | www.pipelinersales.com
---



[sqlite] Is this a regression?

2016-04-24 Thread Richard Hipp
On 4/24/16, Richard Hipp  wrote:
>
> The problem is apparently caused by the optimization added by check-in
> https://www.sqlite.org/src/info/6df18e949d367629 on 2015-06-02 and
> first released in version 3.8.11 on 2015-07-27.
>

Ticket: https://www.sqlite.org/src/info/f7f8c97e97597
Fix: https://www.sqlite.org/src/info/ec215f94ac9748c0
-- 
D. Richard Hipp
drh at sqlite.org


[sqlite] Is this a regression?

2016-04-24 Thread Richard Hipp
On 4/24/16, Jean-Luc Hainaut  wrote:
> select PID,TOTALQ
> from (select PID, sum(QTY) as TOTALQ
>   from   D
>   group by PID
> union
>   select PID, 0 as TOTALQ
>   from   P
>   where  PID not in (select PID from D)
>   )
> where TOTALQ < 10
> order by PID;
>
> With SQLite 3.10 and 3.12.2 the query fails with the message:
>
>"Error: misuse of aggregate: sum()"
>

Your work-around until the problem is fixed (probably in SQLite
version 3.13.0) is to put the aggregate part of the compound subquery
last instead of first; like this:

select PID,TOTALQ
from (select PID, 0 as TOTALQ
  from   P
  where  PID not in (select PID from D)
union
  select PID, sum(QTY) as TOTALQ
  from   D
  group by PID
  )
where TOTALQ < 10
order by PID;

The new optimization that leads to this problem is supposed to be
disabled if the subquery is an aggregate.
(https://www.sqlite.org/src/artifact/30217121bd?ln=3759-3763).
However, it appears that the test for whether or not the subquery is
an aggregate (https://www.sqlite.org/src/artifact/30217121bd?ln=3789)
is only looking at the last SELECT in the compound query.
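On a release that includes the fix, or on an affected release using the rewritten query above, the result can be checked end to end. A sketch using Python's bundled sqlite3 and the schema from Jean-Luc's report:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table P(PID integer not null primary key);
create table D(PID integer not null references P,
               QTY integer not null);
insert into P values (1),(2),(3),(4);
insert into D values (1,5),(1,10),(3,6),(3,2),(4,12);
""")

# Aggregate arm placed last in the compound subquery (the workaround).
rows = con.execute("""
select PID, TOTALQ
from (select PID, 0 as TOTALQ
      from   P
      where  PID not in (select PID from D)
      union
      select PID, sum(QTY) as TOTALQ
      from   D
      group by PID)
where TOTALQ < 10
order by PID
""").fetchall()
print(rows)  # → [(2, 0), (3, 8)]
```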

-- 
D. Richard Hipp
drh at sqlite.org


[sqlite] Is this a regression?

2016-04-24 Thread Richard Hipp
On 4/24/16, Jean-Luc Hainaut  wrote:
>
> When executing a set of queries written some years ago (let's call it a
> "regression test"!), I found that one of them now fails with a strange
> message.
> It executes correctly until version 3.8.5 (perhaps later) but fails from
> version 3.10 (perhaps earlier).
>
>
> Has anybody observed this problem?
>

As far as we know, you are the first to observe this problem.

The problem is apparently caused by the optimization added by check-in
https://www.sqlite.org/src/info/6df18e949d367629 on 2015-06-02 and
first released in version 3.8.11 on 2015-07-27.

-- 
D. Richard Hipp
drh at sqlite.org


[sqlite] No datasize field - why?

2016-04-24 Thread James K. Lowden
On Sun, 24 Apr 2016 08:51:09 -0400
Carlos  wrote:

> But, with very fast CPUs and RAM memory buffers for the directory 
> entries in the disks, the variable length records would probably
> result in gain for much less I/O for the data.




[sqlite] No datasize field - why?

2016-04-24 Thread James K. Lowden
On Sun, 24 Apr 2016 14:09:50 +0100
Simon Slavin  wrote:

> 
> On 24 Apr 2016, at 1:51pm, Carlos  wrote:
> 
> > But, with very fast CPUs and RAM memory buffers for the directory
> > entries in the disks, the variable length records would probably
> > result in gain for much less I/O for the data.
> 
> Agreed.  Which is one reason why fixed-length string columns are less
> important and less used now.  When the bottleneck is the speed of the
> backing store, storing fewer characters can mean the difference
> between having to write one sector or two.

You still have only two choices: compute or seek.  The physical
structure is either like an array, and you can compute the record's
location, or it's like a list, and you have to iterate.  

> Another reason is that we use Unicode not ASCII/SIXBIT/EBCDIC, and in
> Unicode different characters take different numbers of bytes.  So
> even if you're storing a fixed number of bytes the convenience of
> always knowing exactly how many characters to display no longer
> exists.

These are different concerns, and they don't really pose any
difficulty.  Given an encoding, a column of N characters can take up to 
x * N bytes.  Back in the day, "x" was 1.  Now it's something else.  No
big deal.  
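The x * N relationship is easy to see with a concrete encoding (a quick sketch; "é" is one character but two bytes in UTF-8):

```python
s = "héllo"
assert len(s) == 5                       # N characters
assert len(s.encode("utf-8")) == 6       # up to x * N bytes; here é costs 2
assert len(s.encode("utf-16-le")) == 10  # x = 2 for BMP characters
```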

Note that SQL still defines lengths in terms of characters.  It's up to
the DBMS how to store them (regardless of the agreed-on encoding).  

--jkl


[sqlite] No datasize field - why?

2016-04-24 Thread James K. Lowden
On Sat, 23 Apr 2016 19:22:04 -0600
Scott Robison  wrote:

> So if you could make your table up of integers, floats, and text
> with character limits on them you could get fixed-length rows, which
> might reduce your access time by 60% or more.  Such a decrease in
> access time could mean the difference between being able to update a
> database live or being able to update only during an overnight run.

As I tried to make clear in my reply to Keith, efficiency concerns are
beside the point.  The theory underpinning SQL rests on predicate logic
and set theory.  Freeing the user from concerns of physical storage and
addressing were objectives Codd cited in his first paper.  

In point of fact, unbounded-length records have been supported for
decades.  Sybase called them TEXT and IMAGE types.  You could not
search them, though, only retrieve them once the row had been located
by other means.  Why?  Just as you posit: for efficiency.  The
physical row kept a "pointer" to the TEXT data and, yes, every
physical row had the same length, for efficiency reasons.  

--jkl



[sqlite] No datasize field - why?

2016-04-24 Thread Scott Robison
On Apr 24, 2016 6:42 PM, "James K. Lowden"  wrote:
>
> On Sat, 23 Apr 2016 19:22:04 -0600
> Scott Robison  wrote:
>
> > So if you could make your table up of integers, floats, and text
> > ...
>
> As I tried to make clear in my reply to Keith, efficiency concerns ar
> ...

1. I think the quote attributed to me was from Simon.

2. Regardless, there are many reasons why things are done. SQL may not
define implementation details that promote efficient access, but everyone
wants it and considers it important.


[sqlite] No datasize field - why?

2016-04-24 Thread James K. Lowden
On Sat, 23 Apr 2016 14:50:45 -0400
"Keith Medcalf"  wrote:
> > On Sat, 23 Apr 2016 08:56:14 -0400
> > "Keith Medcalf"  wrote:
> > 
> > > Those things that those other DBMSes do are holdovers to maintain
> > > backwards compatibility with the good old days when dinosaurs
> > > ruled the earth
>  
> > As amusing as your rant is, it's not accurate.  Treating columns as
> > types is a "holdover" from mathematics and logic.  It has nothing
> > to do with maintaining backwards compatibility, or the cost of
> > RAM.  
> 
> The specification of "Length/precision" as well as the use of
> "varchar", "varbinary" and the like are holdovers from the days when
> files had to have fixed record lengths so BDAM would work.  

They might have originated in that way on System R.  Ingres was
developed contemporaneously on Unix, which then and now had no
record-based file types.  As I pointed out, 

> > Many new DBMS engines have been written in recent years
> > (notably columnar stores) and, of those that support SQL, none
> > abandoned strict column types.

> (contrary to common mis-belief, varchar(100) does not mean a
> "variable length character field with a length up to 100 characters",

SQL does *not* define implementation.  It defines semantics: user
provides X, system replies Y.  It makes no statement about how a column
is stored.  You know that, surely.  Why paint yourself into a corner
with an argument you know is lost before it begins?  

I guess I should remind you that length-limited character strings have
lots of utility irrespective of storage concerns.  Some strings *are*
fixed length, for example CUSIP, SSN, driver's license, employee id,
phone number.  Length-checking is a simple aspect of validation.  
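SQLite itself does not enforce the N in varchar(N), but the validation use case described here can still be expressed with a CHECK constraint. A sketch (the emp table and the 9-character SSN rule are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# length() counts characters, so this is a character-length check.
con.execute("CREATE TABLE emp(ssn TEXT NOT NULL CHECK(length(ssn) = 9))")
con.execute("INSERT INTO emp VALUES ('123456789')")  # accepted

try:
    con.execute("INSERT INTO emp VALUES ('123')")    # too short
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # → True
```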

There are also more banal concerns about external representation.   An
address might be limited to 60 characters so that it fits in the
billing envelope window.  Maybe 60 characters is arbitrary, but we both
know that 6000 characters will be too many.  Length limits help keep
the ruby on the rails.  

> In order for Codd and Date to conceive of a world wherein duck-typing
> existed, a duck-typed language would have to exist first.  Since such
> was inconceivable before its invention, it is no wonder that it never
> occurred to anyone that a database column could be ducky too.

You are seriously underestimating them.  You're misinterpreting the
meaning and purpose of column types.  Your rant about VSAM is, as 
physicists sometimes say, not even wrong.  

If the theory seems abstruse, you also utterly ignore observed
detrimental effects of the lack of type enforcement, namely the
complexity that arises at SELECT time, when the application has to cope
with whatever detritus got parked in the database.  On a SQLite scale,
many times that's not a problem because writers are highly
constrained.  But in general it's a central concern, and was one of the
motivations for the invention of the relational model.  

--jkl


[sqlite] Use System.Data.Sqlite in Mono on a ARM based Panel

2016-04-24 Thread Joe Mistachkin

Jochen Kuehner wrote:
>
> Which is the right one? (Also
> sqlite-netFx451-binary-Mono-2013-1.0.101.0.zip
> from the homepage is looking for interop DLLs)
>

That is the right one.  The interop DLL is being used because it has special
options to support managed virtual table implementations, etc.  To compile an
interop DLL for Mono, please grab the source code, extract it, and then use
the following script (or something like it):

http://urn.to/r/kJ

--
Joe Mistachkin



[sqlite] Is this a regression?

2016-04-24 Thread Jean-Luc Hainaut
Hello,

When executing a set of queries written some years ago (let's call it a 
"regression test"!), I found that one of them now fails with a strange message.
It executes correctly until version 3.8.5 (perhaps later) but fails from 
version 3.10 (perhaps earlier).

I have simplified the problem as follows:

- table P(PID) represents products,
- table D(PID,QTY) represents order details (PID identifies a product
  and QTY specifies the quantity ordered of this product). 

In SQL:

create table P(PID integer not null primary key);
create table D(PID integer not null references P,
   QTY integer not null);
insert into P values (1),(2),(3),(4);
insert into D values (1,5),(1,10),(3,6),(3,2),(4,12);

The following query computes, for each product, the sum of quantities ordered. 
It also includes quantity 0 for products not referenced by D:

select PID,TOTALQ
from (select PID, sum(QTY) as TOTALQ
  from   D
  group by PID
union
  select PID, 0 as TOTALQ
  from   P
  where  PID not in (select PID from D)
  )
order by PID;

As expected, it provides, through the SQLite3 shell:

1|15
2|0
3|8
4|12

The problem arises when we add a "where" clause involving computed column 
TOTALQ:

select PID,TOTALQ
from (select PID, sum(QTY) as TOTALQ
  from   D
  group by PID
union
  select PID, 0 as TOTALQ
  from   P
  where  PID not in (select PID from D)
  )
where TOTALQ < 10
order by PID;

With SQLite 3.10 and 3.12.2 the query fails with the message:

   "Error: misuse of aggregate: sum()"

while with SQLite 3.8.5, it provides the correct answer:

2|0
3|8

Rewriting the "from" clause as a "with" query, or creating a view (with and 
without the problematic "where" clause) and then querying it, shows the same 
behaviour. It also appears that removing the second SELECT of the union 
"solves" the problem.
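For anyone who wants to reproduce this, the whole setup fits in a short script. A sketch using Python's bundled sqlite3; on affected releases the filtered query raises the error, while builds that postdate them return the correct rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table P(PID integer not null primary key);
create table D(PID integer not null references P,
               QTY integer not null);
insert into P values (1),(2),(3),(4);
insert into D values (1,5),(1,10),(3,6),(3,2),(4,12);
""")

subquery = """select PID, sum(QTY) as TOTALQ from D group by PID
              union
              select PID, 0 as TOTALQ from P
              where PID not in (select PID from D)"""

# The unfiltered form has always worked:
rows = con.execute(
    f"select PID, TOTALQ from ({subquery}) order by PID").fetchall()
print(rows)  # → [(1, 15), (2, 0), (3, 8), (4, 12)]

# The filtered form triggers the bug on affected versions:
try:
    filtered = con.execute(
        f"select PID, TOTALQ from ({subquery}) "
        "where TOTALQ < 10 order by PID").fetchall()
    print(filtered)   # on unaffected builds: the two rows below 10
except sqlite3.OperationalError as e:
    print(e)          # on affected builds: misuse of aggregate: sum()
```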

Has anybody observed this problem?

Thanks in advance for any help

Jean-Luc Hainaut


Prof. Jean-Luc Hainaut
Faculté d'Informatique
University of Namur
Rue Grandgagnage, 21
B-5000 - Namur (Belgium)   
E-mail : jlhainaut at info.fundp.ac.be, jean-luc.hainaut at unamur.be
http://www.info.fundp.ac.be/libd


[sqlite] No datasize field - why?

2016-04-24 Thread Simon Slavin

On 24 Apr 2016, at 1:51pm, Carlos  wrote:

> But, with very fast CPUs and RAM memory buffers for the directory entries in 
> the disks,
> the variable length records would probably result in gain for much less I/O 
> for the data.

Agreed.  Which is one reason why fixed-length string columns are less important 
and less used now.  When the bottleneck is the speed of the backing store, 
storing fewer characters can mean the difference between having to write one 
sector or two.

Another reason is that we use Unicode not ASCII/SIXBIT/EBCDIC, and in Unicode 
different characters take different numbers of bytes.  So even if you're 
storing a fixed number of bytes the convenience of always knowing exactly how 
many characters to display no longer exists.

Simon.


[sqlite] Build failed with message "No rule to make target `lemon'"

2016-04-24 Thread Daisuke Makiuchi
Hello,

When I built with mingw on sources in sqlite-src-3120200.zip,
I got this error:

make: *** No rule to make target `lemon', needed by `fts5parse.c'.  Stop.


The correct target name is `lemon.exe' in this case.
This is a patch for this issue:

--- Makefile.in~2016-04-24 12:02:26.044853355 +0900
+++ Makefile.in 2016-04-24 12:03:09.546779485 +0900
@@ -1019,10 +1019,10 @@
$(TOP)/ext/fts5/fts5_varint.c \
$(TOP)/ext/fts5/fts5_vocab.c  \

-fts5parse.c:   $(TOP)/ext/fts5/fts5parse.y lemon
+fts5parse.c:   $(TOP)/ext/fts5/fts5parse.y lemon$(BEXE)
cp $(TOP)/ext/fts5/fts5parse.y .
rm -f fts5parse.h
-   ./lemon $(OPTS) fts5parse.y
+   ./lemon$(BEXE) $(OPTS) fts5parse.y

 fts5parse.h: fts5parse.c


Thanks,

Daisuke Makiuchi


[sqlite] Use System.Data.Sqlite in Mono on a ARM based Panel

2016-04-24 Thread Jochen Kuehner
I've a project which should run on Windows and on a Mono ARM-based panel.

On Windows I use the System.Data.SQLite (Core) NuGet package! Now on Mono I want to also 
use System.Data.SQLite, because Mono's SQLite part has some bugs! But for Mono I 
need it to look for sqlite3.so and not the interop DLL. Which NuGet package 
can I use for this? If I include System.Data.SQLite (MSIL) it also searches for 
the interop DLL and not sqlite3.so! Which is the right one? (Also 
sqlite-netFx451-binary-Mono-2013-1.0.101.0.zip from the homepage is looking for 
interop DLLs)



MLOG Logistics GmbH, Sitz: D-74196 Neuenstadt
Registergericht: Amtsgericht Stuttgart HRB 100594
Geschäftsführer: Hans-Jürgen Heitzer, Jochen Strauß


[sqlite] In-Memory database PRAGMA read_uncommitted

2016-04-24 Thread Domingo Alvarez Duarte

Hello !
I posted this some time ago; it's a program to test and tune SQLite with 
concurrency:
https://gist.github.com/mingodad/79225c88f8dce0f174f5

Maybe it can be useful to you!

Cheers !


[sqlite] In-Memory database PRAGMA read_uncommitted

2016-04-24 Thread Michele Pradella
On 2016-04-23 at 11:05, R Smith wrote:

> On 2016/04/23 10:20 AM, Michele Pradella wrote: 
> 
>> I have an In-Memory DB that is written and read from connections of the
>> same process. All good with shared cache, but I found that TableLock
>> occur more often on In-Memory than on disk DB, probably because in
>> memory we can't use WAL.
>> 
>> Anyway I found the PRAGMA read_uncommitted, which from the documentation seems
>> to be a way to read without the problem of table locks. The question is about
>> this sentence "This can lead to inconsistent query results if another
>> database connection modifies a table while it is being read".
>> "inconsistent" means just "out of date"? or there can be some other type
>> of inconsistent data?
> 
> It means that you can read a record set, using such a shared cache 
> connection, while a sibling connection (with whom you are sharing) is 
> altering the data, resulting in the possibility that the record set will be 
> inconsistent with both the pre-change and the post-change DB states. To draw 
> a picture, imagine the following scenario:
> 
> Create connections C1 and C2 which share the cache, and at least C2 uses 
> PRAGMA read_uncommitted.
> 
> The following table "t" exists so that:
> ID | Val
> ---|
> 1 | 10
> 2 | 10
> 3 | 10
> 
> Connection C1 starts updating the DB with:
> UPDATE t SET Val = 20;
> 
> Close to that same moment, C2 starts reading (uncommitted, i.e. 
> non-serialized) the DB with:
> SELECT * FROM t;
> 
> But reading is faster than writing, so the result set might look like this 
> perhaps:
> ID | Val
> ---|
> 1 | 20
> 2 | 20
> 3 | 10
> 
> which is not consistent with either the DB state before C1 writes, nor after 
> C1 committed.
> 
> So no, "inconsistent" doesn't "just" mean outdated, it truly means 
> non-consistent. This may or may not be a problem to your scenario.
> 
> Perhaps the timeout setting is of more value to you? I do not have experience 
> of in-memory DBs that get used to the point where table locks become 
> intrusive - but perhaps someone else here has solved the problem and can 
> shed some light.
> 
> Cheers,
> Ryan

Ok, understood, thank you. I'll have a look at the timeout setting just
to check it, but in my environment this kind of "inconsistency" is not
a problem. 

Anyway, I think that shared cache with an in-memory DB gives you the ability
to make SQLite really fast for SELECT statements; a very good feature.
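Ryan's scenario above can be sketched with two shared-cache connections to the same in-memory database (using Python's bundled sqlite3; the table and values follow his example, and the interleaving shown is only one possible outcome of a real race):

```python
import sqlite3

uri = "file:shared_mem?mode=memory&cache=shared"
c1 = sqlite3.connect(uri, uri=True)
c2 = sqlite3.connect(uri, uri=True)
c2.execute("PRAGMA read_uncommitted = 1")  # per-connection dirty reads

c1.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, val INTEGER)")
c1.executemany("INSERT INTO t(val) VALUES (?)", [(10,), (10,), (10,)])
c1.commit()

# C1 opens a write transaction and has not committed yet.
c1.execute("UPDATE t SET val = 20")

# Without read_uncommitted this SELECT would hit the table lock;
# with it, C2 sees the uncommitted values in the shared cache.
rows = c2.execute("SELECT val FROM t ORDER BY id").fetchall()
print(rows)  # the uncommitted 20s are visible here

c1.rollback()  # ...and then they never officially existed
```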




[sqlite] No datasize field - why?

2016-04-24 Thread Carlos
But, with very fast CPUs and RAM memory buffers for the directory 
entries in the disks, the variable length records would probably 
result in gain for much less I/O for the data.

On 23/04/2016 21:22, Scott Robison wrote:
> On Apr 23, 2016 6:21 PM, "Simon Slavin"  wrote:
>>
>> On 24 Apr 2016, at 12:58am, Scott Robison  wrote:
>>
>>> For any SQL datastore, the way the data is stored is completely an
>>> implementation detail. The SQL engine would be free to serialize all
> values
>>> to text and store them in 4K pages if it wanted to, then deserialize
> them
>>> on the way back. I certainly don't know of any that do that, but the
>>> impetus for the creation of VARCHAR fields (I imagine) was specifically
> to
>>> avoid storing padding for data that did not require it.
>> Speed plays a part as well as storage space.  Back in the days of
> mainframes and minicomputers, it was far more efficient to store
> fixed-length records than variable-length records.  To look up row 7463 in
> a file you would just multiply 7463 by the number of bytes in a row, then
> start reading from that byte.  Think about how much more processing and
> access SQLite has to do just to read a row from a database file.
>> So if you could make your table up of integers, floats, and text with
> character limits on them you could get fixed-length rows, which might
> reduce your access time by 60% or more.  Such a decrease in access time
> could mean the difference between being able to update a database live or
> being able to update only during an overnight run.
>
> This was particularly true in the case of media like 9 track mag tape
> (effective 1 dimensional access) vs modern hard drive (effective 3
> dimensional access).



[sqlite] SQLITE_CANTOPEN on Android

2016-04-24 Thread Clemens Ladisch
Richard Hipp wrote:
> On 4/22/16, Christian Werner  wrote:
>> On 04/22/2016 03:46 PM, Richard Hipp wrote:
>>> Why isn't /var/tmp or /tmp usable on Android?
>>
>> There ain't no "/var/tmp" nor "/tmp" on droids. Best of all worst 
>> alternatives
>> is to use the application's own directory or better the subdir "cache" 
>> therein.
>
> Is there a well-defined way to find the name of the application's own 
> directory?

In Java, there is Context.getCacheDir().

It is available from C only if you have a reference to some Android
object (Activity or Context), and do the dance of accessing the Java
stuff:
http://stackoverflow.com/questions/7595324/creating-temporary-files-in-android-with-ndk

It would be possible for the Java code to set sqlite3_temp_directory,
but the Android framework does not do it, and it would probably not be
a good idea to require every app to do it.


Android (and Chromium on Android) just use SQLITE_TEMP_STORE=3 for this
reason:
https://android.googlesource.com/platform/external/sqlite/+/master/dist/Android.mk
https://bugs.chromium.org/p/chromium/issues/detail?id=138128
https://codereview.chromium.org/10809015

Apparently, Martin's SQLite library was compiled differently.


Regards,
Clemens


[sqlite] No datasize field - why?

2016-04-24 Thread Simon Slavin

On 24 Apr 2016, at 12:58am, Scott Robison  wrote:

> For any SQL datastore, the way the data is stored is completely an
> implementation detail. The SQL engine would be free to serialize all values
> to text and store them in 4K pages if it wanted to, then deserialize them
> on the way back. I certainly don't know of any that do that, but the
> impetus for the creation of VARCHAR fields (I imagine) was specifically to
> avoid storing padding for data that did not require it.

Speed plays a part as well as storage space.  Back in the days of mainframes 
and minicomputers, it was far more efficient to store fixed-length records than 
variable-length records.  To look up row 7463 in a file you would just multiply 
7463 by the number of bytes in a row, then start reading from that byte.  Think 
about how much more processing and access SQLite has to do just to read a row 
from a database file.
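The offset arithmetic described above is the whole trick; a sketch of that style of fixed-length record lookup (the 12-byte record layout is invented for illustration):

```python
import io
import struct

REC = struct.Struct("<i8s")  # fixed 12-byte record: int id + 8-byte name

f = io.BytesIO()
for i, name in enumerate([b"alpha", b"bravo", b"charlie"]):
    f.write(REC.pack(i, name))  # struct zero-pads the name to 8 bytes

def read_row(f, n):
    f.seek(n * REC.size)        # location is computed, not searched for
    rid, name = REC.unpack(f.read(REC.size))
    return rid, name.rstrip(b"\x00")

print(read_row(f, 2))  # → (2, b'charlie')
```

With variable-length rows that single multiply-and-seek becomes a walk through page and cell structures, which is the extra work being referred to.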

So if you could make your table up of integers, floats, and text with character 
limits on them you could get fixed-length rows, which might reduce your access 
time by 60% or more.  Such a decrease in access time could mean the difference 
between being able to update a database live or being able to update only 
during an overnight run.

Simon.