[sqlite] sqlite .import bug

2013-09-06 Thread dmitry babitsky
This works fine:

echo 1 | sqlite dbfile ".import '/dev/stdin' foo"

But if you have any character (like a space or newline), or an SQL statement,
in front of the '.import', sqlite gives:
Error: near ".": syntax error

Why it matters:
Because what I'm really trying to do is to import data while ignoring
duplicates. Since .import does not have this functionality, I want to create a
temp table, import into it, and then copy into the target with 'on conflict
ignore'. Because I want that table to be temporary, I cannot use two separate
sqlite invocations.

This is the goal:
echo 1 | sqlite dbfile "
create temp table foo(c);
.import '/dev/stdin' temp.foo
insert into Foo(c) select c from temp.foo on conflict ignore;
"

This does not work because there is something before the .import (a newline
alone would have the same effect).
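
One possible workaround (an untested sketch, not from the original post): stage
the data in a temporary file so that the script itself can arrive on standard
input, where each dot-command starts its own line. The 'staging' table name and
the use of 'insert or ignore' (the valid SQLite spelling of a conflict-ignoring
insert) are assumptions here:

# data is piped into this snippet just as in the one-liner above
tmp=$(mktemp)
cat > "$tmp"                      # capture the incoming rows
sqlite dbfile <<EOF
create temp table staging(c);
.import $tmp staging
insert or ignore into Foo(c) select c from staging;
EOF
rm -f "$tmp"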

This does work, if executed interactively:

$ sqlite foo
SQLite version 3.7.15.2 2013-01-09 11:53:05
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> create temp table foo (c);
sqlite> .import '/dev/stdin' temp.foo
1
sqlite> select * from temp.foo;
1
sqlite> .quit

Thank you.


Re: [sqlite] UPDATE question

2013-09-06 Thread James K. Lowden
On Fri, 6 Sep 2013 17:45:59 -0400 (EDT)
j.merr...@enlyton.com wrote:

> I propose that you remove the unique index because SQLite does not
> handle the update case the way you want. 

The correct general approach, in light of the observed behavior, is the
following (sketched in SQL just after the list):

1.  begin IMMEDIATE transaction
2.  select rows into a temporary table
3.  update temporary table
4.  delete from main table
5.  insert into main table from temporary table
6.  commit
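
A minimal sketch of those steps, using the simplified table from the original
question (PKey, Name, Sequence) and the 'Blue'/3 insertion discussed in this
thread:

BEGIN IMMEDIATE;                                            -- step 1
CREATE TEMP TABLE pending AS
    SELECT * FROM t WHERE Name = 'Blue' AND Sequence >= 3;  -- step 2
UPDATE pending SET Sequence = Sequence + 1;                 -- step 3
DELETE FROM t WHERE Name = 'Blue' AND Sequence >= 3;        -- step 4
INSERT INTO t SELECT * FROM pending;                        -- step 5
DROP TABLE pending;
COMMIT;                                                     -- step 6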

An alternative in this case would be to expand the unique constraint
with another column to support updates.  Something like this:

create table t 
( PKey INTEGER PRIMARY KEY
, Name TEXT
, Sequence INTEGER
, _pending INTEGER default 0 check (_pending in (PKey, 0))
, unique (Name, Sequence, _pending)
);

insert into t (Name, Sequence) values ('Blue', 1);
insert into t (Name, Sequence) values ('Blue', 2);
insert into t (Name, Sequence) values ('Blue', 3);
insert into t (Name, Sequence) values ('Blue', 4);
insert into t (Name, Sequence) values ('Blue', 5);

-- insert a new 3

BEGIN TRANSACTION ;
UPDATE t
set _pending = PKey where Sequence >=3;

UPDATE t
SET Sequence = Sequence + 1 
WHERE Sequence >= 3 
AND Name = 'Blue';

UPDATE t set _pending = 0;

insert into t (Name, Sequence) values ('Blue', 3);
COMMIT;

select * from t;
PKey  Name  Sequence  _pending
----  ----  --------  --------
1     Blue  1         0
2     Blue  2         0
3     Blue  4         0
4     Blue  5         0
5     Blue  6         0
6     Blue  3         0

That has the effect of voiding the unique constraint while _pending is
nonzero, but leaving it in force after the commit.  

--jkl


Re: [sqlite] Query preperation time does not scale linearly with growth of no. of tables

2013-09-06 Thread David de Regt
Mayhaps the CROSS JOIN trick is your friend in this case, if you can be pretty 
sure of the correct direction of the join order. :)
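
For reference, a minimal sketch of the trick (table names made up): spelling
the join as CROSS JOIN tells SQLite's planner not to reorder those two tables,
so it does not have to search the orderings:

-- force "big" to be the outer loop of the join
SELECT *
  FROM big CROSS JOIN small
 WHERE small.id = big.small_id;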

-David

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of James K. Lowden
Sent: Friday, September 6, 2013 7:40 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Query preperation time does not scale linearly with 
growth of no. of tables

On Fri, 6 Sep 2013 17:29:25 +
Harmen de Jong - CoachR Group B.V.  wrote:

> > If I recall correctly, query planner's behavior is worst-case 
> > quadratic in the number of tables participating in the query. This 
> > includes tables mentioned directly, and also those pulled in 
> > indirectly via views, triggers or foreign keys.

Factorial, actually.  After three tables, each additional table increases
potential join sequences by roughly an order of magnitude.

Given tables A, B, and C, 1 * 2 * 3 = 6: 

sqlite> select a.T, b.T, c.T from F a join F b on a.T <> b.T join F c
   ...> on b.T <> c.T where a.T <> c.T order by a.T, b.T, c.T;
A   B   C 
A   C   B 
B   A   C 
B   C   A 
C   A   B 
C   B   A  

That's six plans for the order in which the system could choose to
access the tables to execute the query. 

Factorial grows quickly, as is demonstrated by adding table D:

sqlite> select a.T, b.T, c.T, d.T  from F a join F b on a.T <> b.T
> cross join F c on b.T <> c.T join F as d on c.T <> d.T where a.T <> 
> c.T and a.T <> d.T and b.T <> d.T order by a.T, b.T, c.T, d.T;
A   B   C   D 
A   B   D   C 
A   C   B   D 
A   C   D   B 
A   D   B   C 
A   D   C   B 
B   A   C   D 
B   A   D   C 
B   C   A   D 
B   C   D   A 
B   D   A   C 
B   D   C   A 
C   A   B   D 
C   A   D   B 
C   B   A   D 
C   B   D   A 
C   D   A   B 
C   D   B   A 
D   A   B   C 
D   A   C   B 
D   B   A   C 
D   B   C   A 
D   C   A   B 
D   C   B   A 

Pity the query optimizer facing  an 8-way join.  Or, say, a 20-table
join:

$ FACT=1; seq 20 | while read F; do FACT=$(( ${FACT} * $F )); printf '% 3d! = %d\n' $F ${FACT}; done
  1! = 1
  2! = 2
  3! = 6
  4! = 24
  5! = 120
  6! = 720
  7! = 5040
  8! = 40320
  9! = 362880
 10! = 3628800
 11! = 39916800
 12! = 479001600
 13! = 6227020800
 14! = 87178291200
 15! = 1307674368000
 16! = 20922789888000
 17! = 355687428096000
 18! = 6402373705728000
 19! = 121645100408832000
 20! = 2432902008176640000

There is such a thing as too many choices!  

--jkl


Re: [sqlite] Query preperation time does not scale linearly with growth of no. of tables

2013-09-06 Thread James K. Lowden
On Fri, 6 Sep 2013 17:29:25 +
Harmen de Jong - CoachR Group B.V.  wrote:

> > If I recall correctly, query planner's behavior is worst-case
> > quadratic in the number of tables participating in the query. This
> > includes tables mentioned directly, and also those pulled in
> > indirectly via views, triggers or foreign keys.

Factorial, actually.  After three tables, each additional table
increases potential join sequences by roughly an order of
magnitude.

Given tables A, B, and C, 1 * 2 * 3 = 6: 

sqlite> select a.T, b.T, c.T from F a join F b on a.T <> b.T
   ...> join F c on b.T <> c.T where a.T <> c.T order by a.T, b.T, c.T;
A   B   C 
A   C   B 
B   A   C 
B   C   A 
C   A   B 
C   B   A  

That's six plans for the order in which the system could choose to
access the tables to execute the query. 

Factorial grows quickly, as is demonstrated by adding table D:

sqlite> select a.T, b.T, c.T, d.T  from F a join F b on a.T <> b.T
> cross join F c on b.T <> c.T join F as d on c.T <> d.T where
> a.T <> c.T and a.T <> d.T and b.T <> d.T order by a.T, b.T,
> c.T, d.T;
A   B   C   D 
A   B   D   C 
A   C   B   D 
A   C   D   B 
A   D   B   C 
A   D   C   B 
B   A   C   D 
B   A   D   C 
B   C   A   D 
B   C   D   A 
B   D   A   C 
B   D   C   A 
C   A   B   D 
C   A   D   B 
C   B   A   D 
C   B   D   A 
C   D   A   B 
C   D   B   A 
D   A   B   C 
D   A   C   B 
D   B   A   C 
D   B   C   A 
D   C   A   B 
D   C   B   A 

Pity the query optimizer facing  an 8-way join.  Or, say, a 20-table
join:

$ FACT=1; seq 20 | while read F; do FACT=$(( ${FACT} * $F )); printf '% 3d! = %d\n' $F ${FACT}; done
  1! = 1
  2! = 2
  3! = 6
  4! = 24
  5! = 120
  6! = 720
  7! = 5040
  8! = 40320
  9! = 362880
 10! = 3628800
 11! = 39916800
 12! = 479001600
 13! = 6227020800
 14! = 87178291200
 15! = 1307674368000
 16! = 20922789888000
 17! = 355687428096000
 18! = 6402373705728000
 19! = 121645100408832000
 20! = 2432902008176640000

There is such a thing as too many choices!  

--jkl


Re: [sqlite] UPDATE question

2013-09-06 Thread James K. Lowden
On Fri, 6 Sep 2013 11:07:27 -0400
Richard Hipp  wrote:

> The effect of early row updates might be visible in later row updates
> if you contrive a sufficiently complex example.  But you really have
> to go out of your way to do that.  

sqlite> create table i ( i int primary key );
sqlite> insert into i values (1);
sqlite> insert into i values (2);
sqlite> update i set i = i + 1;
SQL error: column i is not unique

--jkl


Re: [sqlite] UPDATE question

2013-09-06 Thread James K. Lowden
On Fri, 6 Sep 2013 07:56:53 -0500
"Marc L. Allen"  wrote:

> I don't think it's a bug.  

It is a bug as long as the behavior is an exception to the
documentation.

> I don't believe there's any defined rule for how SQL should behave,
> is there?  

Of course there is.  Hundreds of pages describe SQL.  

> The updates are done serially not atomically.  

They're not *supposed* to be.  Updates are absolutely atomic.  Every
SQL statement is atomic.  

--jkl



Re: [sqlite] How to use aux_data effectively?

2013-09-06 Thread Peter Aronson
Ah, I see.  Yeah, that would be trickier.  You could save off the geometry blob
and the GEOSPreparedGeometry object in a structure passed in to
sqlite3_create_function and accessed via sqlite3_user_data, and memcmp each
geometry blob with the previous one to see if you need to regenerate the
prepared geometry.  But then you'd have to clean it up yourself somehow
afterwards, and the cost of the alloc, memcpy and memcmp calls would have to
be cheaper than generating the prepared geometry.
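
A rough sketch of that idea (prepare_geom_blob() and the cache layout are
hypothetical stand-ins, not libgpkg or GEOS API; error handling trimmed):

#include <sqlite3.h>
#include <geos_c.h>
#include <stdlib.h>
#include <string.h>

/* hypothetical helper: decode a geometry blob and run GEOSPrepare_r on it */
const GEOSPreparedGeometry *prepare_geom_blob(const void *blob, int len);

typedef struct {
  void *blob;                            /* copy of the last first argument  */
  int   blob_len;
  const GEOSPreparedGeometry *prepared;  /* prepared geometry built from it  */
} geom_cache;

/* registered with:
   sqlite3_create_function(db, "ST_Contains", 2, SQLITE_UTF8,
                           cache, st_contains_cached, NULL, NULL); */
static void st_contains_cached(sqlite3_context *ctx, int n, sqlite3_value **args){
  geom_cache *cache = (geom_cache *)sqlite3_user_data(ctx);
  const void *blob = sqlite3_value_blob(args[0]);
  int len = sqlite3_value_bytes(args[0]);
  if( cache->prepared==NULL || len!=cache->blob_len
      || memcmp(blob, cache->blob, len)!=0 ){
    /* first argument changed since the previous row: rebuild the cache */
    free(cache->blob);
    cache->blob = malloc(len);
    memcpy(cache->blob, blob, len);
    cache->blob_len = len;
    cache->prepared = prepare_geom_blob(cache->blob, len);
  }
  /* a real version would now evaluate GEOSPreparedContains_r() against args[1];
     note that nothing ever frees cache->prepared, which is exactly the
     cleanup problem mentioned above */
  sqlite3_result_int(ctx, -1);  /* placeholder result for this sketch */
}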

It does seem like SQLite would benefit from some way to declare a function as
invariant, so that if it had constant inputs, its output would be treated as a
constant input itself when fed into another function.

Peter
- Original Message -
> From: Pepijn Van Eeckhoudt 
> To: Peter Aronson ; General Discussion of SQLite Database 
> 
> Cc: 
> Sent: Friday, September 6, 2013 4:47 PM
> Subject: Re: [sqlite] How to use aux_data effectively?
> 
> Peter,
> 
> Thanks for the suggestion but that's not the part I'm trying to optimise 
> at the moment. It could be useful to use auxdata there as well to avoid 
> reparsing the text of course. What I would really like to achieve is that the 
> GEOSPreparedGeometry can be cached to speed up the geometry calculations.
> 
> Pepijn
> 
> On 07 Sep 2013, at 00:58, Peter Aronson  wrote:
> 
>> Actually, as it turns out, you can get the result you want by having the 
> GeomFromText function use auxdata to store the geometry blob generated from 
> the 
> WKT string, since it's a constant.  Then all the GeomFromText has to do is 
> to return the Geometry blob when sqlite3_get_auxdata returns non-NULL.
>> 
>> Peter
>> 
>> - Original Message -
>>> From: Pepijn Van Eeckhoudt 
>>> To: sqlite-users@sqlite.org
>>> Cc: 
>>> Sent: Friday, September 6, 2013 8:38 AM
>>> Subject: [sqlite] How to use aux_data effectively?
>>> 
>>> Hi,
>>> 
>>> In the extension I'm developing 
> (https://bitbucket.org/luciad/libgpkg)
>>> I'm currently adding support for queries like:
>>> select Distance(
>>>   GeomFromText('Point(13.457 3)'),
>>>   geometry
>>> ) from table;
>>> 
>>> GeomFromText takes a string and outputs a geometry blob
>>> Distance takes two geometry blobs and returns a double
>>> 
>>> In order to speed up the distance function I was wondering if I could
>>> use aux_data to cache the parsed version of the first parameter since
>>> this remains the same for every call. So far I haven't been able to 
> get
>>> this to work though, aux_data is always NULL, no matter what I try.
>>> 
>>> My hunch is that this is because the first parameter is the result of a
>>> function call which could in theory return different values for each 
> row
>>> even if the input parameters are constant. Is that correct?
>>> 
>>> Are there any other ways to kind of memoize the GeomFromText function
>>> (or the parameters to distance) besides aux_data?
>>> 
>>> Thanks,
>>> 
>>> Pepijn
>>> 
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>> 
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 


Re: [sqlite] How to use aux_data effectively?

2013-09-06 Thread Pepijn Van Eeckhoudt
Peter,

Thanks for the suggestion but that's not the part I'm trying to optimise at the 
moment. It could be useful to use auxdata there as well to avoid reparsing the 
text of course. What I would really like to achieve is that the 
GEOSPreparedGeometry can be cached to speed up the geometry calculations.

Pepijn

On 07 Sep 2013, at 00:58, Peter Aronson  wrote:

> Actually, as it turns out, you can get the result you want by having the 
> GeomFromText function use auxdata to store the geometry blob generated from 
> the WKT string, since it's a constant.  Then all the GeomFromText has to do 
> is to return the Geometry blob when sqlite3_get_auxdata returns non-NULL.
> 
> Peter
> 
> - Original Message -
>> From: Pepijn Van Eeckhoudt 
>> To: sqlite-users@sqlite.org
>> Cc: 
>> Sent: Friday, September 6, 2013 8:38 AM
>> Subject: [sqlite] How to use aux_data effectively?
>> 
>> Hi,
>> 
>> In the extension I'm developing (https://bitbucket.org/luciad/libgpkg)
>> I'm currently adding support for queries like:
>> select Distance(
>>   GeomFromText('Point(13.457 3)'),
>>   geometry
>> ) from table;
>> 
>> GeomFromText takes a string and outputs a geometry blob
>> Distance takes two geometry blobs and returns a double
>> 
>> In order to speed up the distance function I was wondering if I could
>> use aux_data to cache the parsed version of the first parameter since
>> this remains the same for every call. So far I haven't been able to get
>> this to work though, aux_data is always NULL, no matter what I try.
>> 
>> My hunch is that this is because the first parameter is the result of a
>> function call which could in theory return different values for each row
>> even if the input parameters are constant. Is that correct?
>> 
>> Are there any other ways to kind of memoize the GeomFromText function
>> (or the parameters to distance) besides aux_data?
>> 
>> Thanks,
>> 
>> Pepijn
>> 
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] Query preperation time does not scale linearly with growth of no. of tables

2013-09-06 Thread Scott Robison
Two things:

1. The longer the table names, the longer it will take to compute the hash
of each table name.

2. Because the entire schema must be reprocessed after each change, all the
table names will be rehashed after each table has been created. Creating
10,000 tables will result in re-reading all that data and re-hashing all the
table names: adding table k means hashing k names again, so after adding the
10,000th table SQLite will have computed at least 1 + 2 + ... + 10,000 =
50,005,000 hash operations. Many more if column names are hashed too.

SDR
On Sep 6, 2013 2:00 PM, "Harmen de Jong - CoachR Group B.V." <
har...@coachr.com> wrote:

> On 6 sep. 2013, at 20:09, "Kevin Benson"  wrote:
> > Dr. Hipp does a little bit of explaining on this topic, generally, in his
> > replies on this thread:
> >
> > http://www.mail-archive.com/sqlite-users@sqlite.org/msg78602.html
>
> Thanks for pointing me to that thread, but as dr. Hipp states in this
> thread, the tables are stored in a hash. Therefore I would not expect a
> large performance decrease on large number of tables at all, or am I
> missing something?
>
> Best regards,
> Harmen
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] How to use aux_data effectively?

2013-09-06 Thread Pepijn Van Eeckhoudt
On 06 Sep 2013, at 17:51, Simon Davies  wrote:

>> Are there any other ways to kind of memoize the GeomFromText function
>> (or the parameters to distance) besides aux_data?
> 
> select Distance( constGeom, geometry ) from table, (select
> GeomFromText('Point(13.457 3)') as constGeom );

I just tried this out, but unfortunately it doesn't make a difference. I guess
this boils down to the same thing as what I was doing before: table and the
sub-select get joined and the function is invoked for every row again.
Or I could be doing something wrong in my code, of course. Snippet copied
below. With a simple query this prints 'Recreate' for each row.

Pepijn

disclaimer: this is just quick and dirty proof of concept code to see if this 
would work or not.

static void ST_Contains(sqlite3_context *context, int nbArgs, sqlite3_value 
**args) {
  GEOS_START(context);
  const GEOSPreparedGeometry *pg1 = sqlite3_get_auxdata(context, 0);
  if (pg1 == NULL) {
printf("Recreate\n");
GEOSGeometry *g1 = GEOS_GET_GEOM( args, 0 );
if (g1 == NULL) {
  sqlite3_result_error(context, error_message(&error), -1);
  return;
}
pg1 = GEOSPrepare_r(GEOS_HANDLE, g1);
if (pg1 == NULL) {
  sqlite3_result_error(context, error_message(&error), -1);
  return;
}
  } else {
printf("Reuse\n");
  }

  GEOSGeometry *g2 = GEOS_GET_GEOM( args, 1 );
  if (g2 == NULL) {
sqlite3_result_error(context, error_message(&error), -1);
return;
  }
  char result = GEOSPreparedContains_r(GEOS_HANDLE, pg1, g2);
  if (result == 2) {
geom_get_geos_error(&error);
sqlite3_result_error(context, error_message(&error), -1);
  } else {
sqlite3_result_int(context, result);
  }
  GEOS_FREE_GEOM( g2 );
  /* note: the destructor is NULL, so SQLite has no way to free pg1 later;
     fine for this throwaway test, but a real version should pass one */
  sqlite3_set_auxdata(context, 0, pg1, NULL);
}


Re: [sqlite] How to use aux_data effectively?

2013-09-06 Thread Peter Aronson
Actually, as it turns out, you can get the result you want by having the 
GeomFromText function use auxdata to store the geometry blob generated from the 
WKT string, since it's a constant.  Then all the GeomFromText has to do is to 
return the Geometry blob when sqlite3_get_auxdata returns non-NULL.
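
A rough sketch of that idea (geom_blob_from_wkt() and the geom_blob struct are
hypothetical stand-ins, not libgpkg code). The result is produced before
sqlite3_set_auxdata() is called, because SQLite is allowed to discard the aux
data, and free the pointer, as soon as it is handed over:

#include <sqlite3.h>
#include <stdlib.h>

typedef struct { int len; unsigned char data[1]; } geom_blob;

/* hypothetical helper: parse WKT into a single malloc'd geom_blob */
geom_blob *geom_blob_from_wkt(const char *wkt);

static void geom_from_text_cached(sqlite3_context *ctx, int n, sqlite3_value **args){
  geom_blob *g = sqlite3_get_auxdata(ctx, 0);
  if( g!=NULL ){
    /* constant argument seen before: reuse the cached blob */
    sqlite3_result_blob(ctx, g->data, g->len, SQLITE_TRANSIENT);
    return;
  }
  g = geom_blob_from_wkt((const char*)sqlite3_value_text(args[0]));
  if( g==NULL ){
    sqlite3_result_error(ctx, "could not parse WKT", -1);
    return;
  }
  sqlite3_result_blob(ctx, g->data, g->len, SQLITE_TRANSIENT);  /* copies bytes */
  sqlite3_set_auxdata(ctx, 0, g, free);  /* cached for later rows, freed by SQLite */
}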

Peter

- Original Message -
> From: Pepijn Van Eeckhoudt 
> To: sqlite-users@sqlite.org
> Cc: 
> Sent: Friday, September 6, 2013 8:38 AM
> Subject: [sqlite] How to use aux_data effectively?
> 
> Hi,
> 
> In the extension I'm developing (https://bitbucket.org/luciad/libgpkg)
> I'm currently adding support for queries like:
> select Distance(
>   GeomFromText('Point(13.457 3)'),
>   geometry
> ) from table;
> 
> GeomFromText takes a string and outputs a geometry blob
> Distance takes two geometry blobs and returns a double
> 
> In order to speed up the distance function I was wondering if I could
> use aux_data to cache the parsed version of the first parameter since
> this remains the same for every call. So far I haven't been able to get
> this to work though, aux_data is always NULL, no matter what I try.
> 
> My hunch is that this is because the first parameter is the result of a
> function call which could in theory return different values for each row
> even if the input parameters are constant. Is that correct?
> 
> Are there any other ways to kind of memoize the GeomFromText function
> (or the parameters to distance) besides aux_data?
> 
> Thanks,
> 
> Pepijn
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 


Re: [sqlite] Table name length in sqlite affects performance. Why?

2013-09-06 Thread Dominique Devienne
On Fri, Sep 6, 2013 at 11:44 PM, Dominique Devienne wrote:

> FWIW, I tried to take disk I/O out of the question by using a :memory DB,
> and got some weird results of almost instantaneous runs after two initial
> runs with similar run time as in the disk db case, and I saw again a 60sec
> runtime for the long-name case when other times I'm get 27sec. All in all,
> I'm puzzled, but since I have far fewer tables, I'm not too worried despite
> my inconclusive experiments. --DD
>

Silly me. I forgot the final : in :memory:, so I was actually creating a
disk DB, and sometimes adding 10K long-named tables to 10K short-named
ones, explaining the 27s to 60s jump.

With a true memory db, there's a 2x perf difference. Pure conjecture, but
perhaps some data structures no longer fit in cache with the 10K long
names. Modern CPUs spend most of their time waiting for cache misses, and
this could be one example of it. If that's really the case, the
performance decrease should exhibit a "nugget effect" as you increase the
table name length, instead of being gradual. But again, that's a wild
guess, which I didn't try to verify. --DD

[ddevienne@arachne misc]$ gcc sqlite3.c sqlite-many-tables.c -O3 -DNDEBUG
-pthread -ldl
[ddevienne@arachne misc]$ time ./a.out
10.767u 0.019s 0:10.79 99.8%0+0k 0+0io 0pf+0w
[ddevienne@arachne misc]$ time ./a.out
10.760u 0.018s 0:10.79 99.8%0+0k 0+0io 0pf+0w
[ddevienne@arachne misc]$ time ./a.out
10.767u 0.026s 0:10.79 99.9%0+0k 0+0io 0pf+0w
[ddevienne@arachne misc]$ gcc sqlite3.c sqlite-many-tables.c -O3 -DNDEBUG
-pthread -ldl -DLONG_NAMES
[ddevienne@arachne misc]$ time ./a.out
20.044s 0:20.90 99.9%0+0k 0+0io 0pf+0w
[ddevienne@arachne misc]$ time ./a.out
20.910u 0.037s 0:20.96 99.9%0+0k 0+0io 0pf+0w
[ddevienne@arachne misc]$ time ./a.out
20.657u 0.049s 0:20.72 99.8%0+0k 0+0io 0pf+0w


Re: [sqlite] How to use aux_data effectively?

2013-09-06 Thread Simon Davies
On 6 September 2013 16:38, Pepijn Van Eeckhoudt  wrote:
> Hi,
>
> In the extension I'm developing (https://bitbucket.org/luciad/libgpkg)
> I'm currently adding support for queries like:
> select Distance(
>   GeomFromText('Point(13.457 3)'),
>   geometry
> ) from table;
>
> GeomFromText takes a string and outputs a geometry blob
> Distance takes two geometry blobs and returns a double
>
> In order to speed up the distance function I was wondering if I could
> use aux_data to cache the parsed version of the first parameter since
> this remains the same for every call. So far I haven't been able to get
> this to work though, aux_data is always NULL, no matter what I try.
>
> My hunch is that this is because the first parameter is the result of a
> function call which could in theory return different values for each row
> even if the input parameters are constant. Is that correct?
>
> Are there any other ways to kind of memoize the GeomFromText function
> (or the parameters to distance) besides aux_data?

select Distance( constGeom, geometry ) from table, (select
GeomFromText('Point(13.457 3)') as constGeom );

>
> Thanks,
>
> Pepijn
>

Regards,
Simon


Re: [sqlite] Query preperation time does not scale linearly with growth of no. of tables

2013-09-06 Thread Simon Slavin

On 6 Sep 2013, at 9:00pm, Harmen de Jong - CoachR Group B.V. 
 wrote:

> as dr. Hipp states in this thread, the tables are stored in a hash. Therefore 
> I would not expect a large performance decrease on large number of tables at 
> all, or am I missing something?

The /names/ of the tables are hashed.  Not the data in the tables, or the list 
of page numbers that the data is stored in.

Simon.


Re: [sqlite] UPDATE question

2013-09-06 Thread j . merrill
I propose that you remove the unique index because SQLite does not handle the 
update case the way you want. (I'd call that a bug, but sometimes "liteness" 
means that Dr Hipp can say "I'm not going to fix it" and we have to respect his 
decision.)

Is there a reason other than "if you have a particular kind of bug in your 
code, you could end up violating the [Name/Sequence is unique] rule" to keep 
the index, when its presence causes you trouble? (Perhaps the answer is "users 
edit this table manually using other software" so you need the index to keep 
them from screwing up. But I doubt it, or they'd have complained what a pain it 
is to add a new row in the middle!)

There is little to prevent you from having other bugs that might be equally bad 
--

- putting both "Blue" and "blue" in the Name column (with separate sets of 
Sequence values) when both values shouldn't be there because the business 
context says they're the same
- have Sequence values not starting at 1 (e.g. 2 3 4) for a particular Name -- 
perhaps that wouldn't cause any trouble in other logic, but it probably would

If you wanted to, you could have your initial "open the database" code check 
for duplicates across those columns (and that 1 is the lowest Sequence for each 
Name) -- then at least you'd know that you'd had one of those bugs.

J. Merrill

-Original Message-
From: Peter Haworth
Sent: Thursday, September 05, 2013 2:21 PM
To: sqlite-users
Subject: [sqlite] UPDATE question

I have a table with the following (simplified) structure

PKey     INTEGER PRIMARY KEY
Name     TEXT
Sequence INTEGER

The Name/Sequence pair of columns is defined as UNIQUE




Re: [sqlite] Table name length in sqlite affects performance. Why?

2013-09-06 Thread Dominique Devienne
On Fri, Sep 6, 2013 at 11:15 PM, Jared Albers wrote:

> I already grasp the reasoning for why the creation of tables can be
> expensive and inefficient. What I don't understand is why the length
> of a table name affects performance so greatly, while the length of
> data in a row has almost no affect on performance. Does a longer table
> name somehow require more pages? This is what I mean to point out.
>
> On Stack Overflow, Richard Hipp made the argument that "it should come
> as no surprise that SQLite takes 4x longer to write 4x as much
> content." This argument doesn't make sense to me when 4x the data in a
> row has a negligible effect on performance (when inserting the row).
>

Right. It is surprising indeed. In my own tests, the DB file itself grows
from 11MB to 15MB with long names, so there's some additional I/O overhead
for sure, but 4MB of I/O clearly doesn't justify a 2.5x increase in the
time it takes to create all these tables. But given that 10K tables is
clearly considered an outlier use case by many, there doesn't seem to be
much will to investigate further, and actually profile this use case. FWIW,
I tried to take disk I/O out of the question by using a :memory DB, and got
some weird results of almost instantaneous runs after two initial runs with
similar run time as in the disk db case, and I saw again a 60sec runtime
for the long-name case when other times I'm get 27sec. All in all, I'm
puzzled, but since I have far fewer tables, I'm not too worried despite my
inconclusive experiments. --DD


Re: [sqlite] Table name length in sqlite affects performance. Why?

2013-09-06 Thread Jared Albers
I already grasp the reasoning for why the creation of tables can be
expensive and inefficient. What I don't understand is why the length
of a table name affects performance so greatly, while the length of
data in a row has almost no effect on performance. Does a longer table
name somehow require more pages? This is what I mean to point out.

On Stack Overflow, Richard Hipp made the argument that "it should come
as no surprise that SQLite takes 4x longer to write 4x as much
content." This argument doesn't make sense to me when 4x the data in a
row has a negligible effect on performance (when inserting the row).

-Jared

On Fri, Sep 6, 2013 at 1:07 PM, Simon Slavin  wrote:
>
> On 6 Sep 2013, at 7:39pm, Jared Albers  wrote:
>
>> I really just wanted to understand the "why" in the table creation 
>> performance.
>>
>> I can create a database where the table names are instead described as
>> rows, and the creation of the database is *much* faster. An order of
>> magnitude faster. The amount of data inserted into a row doesn't
>> affect performance nearly as much as the amount of data used to
>> describe a table name. This is the part I'm trying to understand.
>
> The internal organisation is completely different.
>
> SQLite splits a database file up into pages.  Each page 'belongs' to one 
> table or to one index.  So if you create 1000 tables each with 1 index, you 
> are writing to 2000 different pages.  But if you put all your data in one 
> table with 2 indexes, it might all fit into 3 pages.
>
> So using many small tables is inefficient: you have to keep swapping between 
> different pages, and eventually you do so much swapping you fill up your 
> cache.  But writing lots of rows to one table you might fit all the pages you 
> need into cache.  And you write fewer pages back to proper storage when 
> you're done.
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Query preperation time does not scale linearly with growth of no. of tables

2013-09-06 Thread Harmen de Jong - CoachR Group B . V .
On 6 sep. 2013, at 20:09, "Kevin Benson"  wrote:
> Dr. Hipp does a little bit of explaining on this topic, generally, in his
> replies on this thread:
> 
> http://www.mail-archive.com/sqlite-users@sqlite.org/msg78602.html

Thanks for pointing me to that thread, but as Dr. Hipp states in this thread,
the tables are stored in a hash. Therefore I would not expect a large
performance decrease with a large number of tables at all, or am I missing
something?

Best regards,
Harmen


Re: [sqlite] Table name length in sqlite affects performance. Why?

2013-09-06 Thread Stephan Beal
On Fri, Sep 6, 2013 at 8:39 PM, Jared Albers wrote:

> magnitude faster. The amount of data inserted into a row doesn't
> affect performance nearly as much as the amount of data used to
> describe a table name. This is the part I'm trying to understand.
>

Speculation: maybe you have passed some memory allocation length which
flips your system's memory allocator into another (less efficient) mode, or
some other such side-effect. By running your app through a profiler (i use
valgrind --tool=callgrind) you can find out where the app is actually
spending its time - maybe it's in other infrastructure.
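
For example (assuming the test program built earlier in this thread is ./a.out):

valgrind --tool=callgrind ./a.out
callgrind_annotate callgrind.out.*   # or open the output file in kcachegrind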

-- 
- stephan beal
http://wanderinghorse.net/home/stephan/
http://gplus.to/sgbeal


Re: [sqlite] UPDATE question

2013-09-06 Thread Simon Slavin

On 6 Sep 2013, at 4:08pm, Simon Slavin  wrote:

> Right.  As I posted in my message that I had tested, this isn't being done 
> correctly.  A conflict isn't a conflict until the write, and the write 
> doesn't happen until the COMMIT. Therefore conflict testing needs to happen 
> at the commit, for /all/ the changes in the transaction, whether they're all 
> the consequence of one UPDATE or even of separate commands within the same 
> transaction.

On further thought, I may be wrong about that.  If it was really true then if 
you explicitly declared a transaction with BEGIN ... END a SQLITE_CONSTRAINT 
result due to violations of UNIQUE could only be returned at the END.  You 
wouldn't be able to tell which command within the transaction caused the 
conflict.

I don't know whether that's the way things really should work.  Or whether this 
is something the people who thought up SQL specified.

Simon.


Re: [sqlite] Table name length in sqlite affects performance. Why?

2013-09-06 Thread Simon Slavin

On 6 Sep 2013, at 7:39pm, Jared Albers  wrote:

> I really just wanted to understand the "why" in the table creation 
> performance.
> 
> I can create a database where the table names are instead described as
> rows, and the creation of the database is *much* faster. An order of
> magnitude faster. The amount of data inserted into a row doesn't
> affect performance nearly as much as the amount of data used to
> describe a table name. This is the part I'm trying to understand.

The internal organisation is completely different.

SQLite splits a database file up into pages.  Each page 'belongs' to one table 
or to one index.  So if you create 1000 tables each with 1 index, you are 
writing to 2000 different pages.  But if you put all your data in one table 
with 2 indexes, it might all fit into 3 pages.

So using many small tables is inefficient: you have to keep swapping between 
different pages, and eventually you do so much swapping you fill up your cache. 
 But writing lots of rows to one table you might fit all the pages you need 
into cache.  And you write fewer pages back to proper storage when you're done.

Simon.


Re: [sqlite] UPDATE question

2013-09-06 Thread Igor Tandetnik

On 9/6/2013 1:05 PM, ibrahim wrote:

Perhaps you should really try it out cause my suggestion is totally
different from yours it uses a temporary table it works.

create table t1 (pk integer primary key, name text, seq integer) ;


You missed the whole "Name/Sequence pair of columns is defined as 
UNIQUE" part. Make it


create table t1 (pk integer primary key, name text, seq integer, 
UNIQUE(name, seq) ) ;


See how well your technique is working for you now.
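
A quick demonstration (a sketch; the exact error wording varies by SQLite
version):

sqlite> create table t1 (pk integer primary key, name text, seq integer,
   ...>                  unique(name, seq));
sqlite> insert into t1 (pk, name, seq) values (1,'blue',1);
sqlite> insert into t1 (pk, name, seq) values (2,'blue',2);
sqlite> insert into t1 (pk, name, seq) values (3,'blue',3);
sqlite> insert into t1 (pk, name, seq) values (4,'blue',4);
sqlite> insert into t1 (pk, name, seq) values (5,'blue',5);
sqlite> update t1 set seq=seq+1
   ...>   where pk in (select pk from t1 where seq >= 3 order by seq desc);
Error: columns name, seq are not unique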
--
Igor Tandetnik



Re: [sqlite] UPDATE question

2013-09-06 Thread ibrahim

On 06.09.2013 20:52, Igor Tandetnik wrote:

On 9/6/2013 1:05 PM, ibrahim wrote:

Perhaps you should really try it out cause my suggestion is totally
different from yours it uses a temporary table it works.

create table t1 (pk integer primary key, name text, seq integer) ;


You missed the whole "Name/Sequence pair of columns is defined as 
UNIQUE" part. Make it


create table t1 (pk integer primary key, name text, seq integer, 
UNIQUE(name, seq) ) ;


See how well your technique is working for you now.

I missed that part.



Re: [sqlite] UPDATE question

2013-09-06 Thread ibrahim

On 05.09.2013 20:20, Peter Haworth wrote:

I have a table with the following (simplified) structure

PKey     INTEGER PRIMARY KEY
Name     TEXT
Sequence INTEGER

The Name/Sequence pair of columns is defined as UNIQUE

I need to insert a new row into a point between two sequence numbers.  For
example, if the existing rows are:

Name   Sequence
---
Blue   1
Blue   2
Blue   3
Blue   4
Blue   5

... I might need to insert a new Blue/3.

Currently I'm doing this by selecting the Pkey values of the entries with
sequence >=3, sorting them by descending sequence, then a loop with an
UPDATE to set the sequence to sequence+1 for each PKey value so 5 becomes
6, 4 becomes 5, 3 becomes 4.  Then I INSERT a new Blue/3 row.

That works fine but wondering if there might be a single UPDATE statement
that could do this for me.  I can use the WHERE clause to select sequence
3,4, and 5 but  the UPDATE has to process the rows in descending sequence
order to avoid UNIQUE conflicts.


Pete
lcSQL Software 


Assumed table :

create table t1 (pk integer primary key, name text, seq integer) ;

You can let sqlite handle the creation of a temporary table by :

update t1 set seq=seq+1 where pk in (select pk from t1 where seq >= 
3 order by seq desc) ;


afterwards you can insert :

insert into t1 (pk, name, seq) values (6, 'blue', 3) ;

You need the temporary table created by the 'select pk ...' because of
possible side effects.





Re: [sqlite] Table name length in sqlite affects performance. Why?

2013-09-06 Thread Jared Albers
I really just wanted to understand the "why" in the table creation performance.

I can create a database where the table names are instead described as
rows, and the creation of the database is *much* faster. An order of
magnitude faster. The amount of data inserted into a row doesn't
affect performance nearly as much as the amount of data used to
describe a table name. This is the part I'm trying to understand.

-Jared

On Wed, Sep 4, 2013 at 10:21 PM, James K. Lowden
 wrote:
> On Tue, 3 Sep 2013 18:43:52 -0600
> Jared Albers  wrote:
>
>> When using relatively long table names like `TABLE_{table #}_{some
>> unique identifying name that adds 120 or so characters}`, creation of
>> a database with 10,000 tables takes approximately 60 seconds.
>
> I find this a very strange course of interrogation.  Tables are created
> once.  Databases with 10,000 tables should be created never (to a
> reasonable approximation).
>
> Is this just an exercise, or is there some horrible real application
> out there being slowed down because it's creating thousands of tables a
> minute?
>
> --jkl
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] UPDATE Question

2013-09-06 Thread Peter Haworth
Thanks Marc and all who responded.  Marc's suggestion seems to come the
closest to what I was looking for.

I did try one other thing which I thought might solve the problem.  Instead
of defining the Name/Sequence as UNIQUE, I set up a UNIQUE index for those
columns and defined both of them to be sorted DESC.  I then tried:

UPDATE Test INDEXED BY TestIndex SET Sequence=Sequence+1 WHERE Name='Blue'
AND Sequence>=3

I was surprised that I still got the error that Name/Sequence were not
unique since I thought the index would force the updates to be done
starting at the highest sequence number.  EXPLAIN QUERY PLAN indicates that
the TestIndex index was used.

Pete

On Fri, Sep 6, 2013 at 9:00 AM,  wrote:

> Message: 27
> Date: Fri, 6 Sep 2013 07:50:39 -0500
> From: "Marc L. Allen" 
> To: General Discussion of SQLite Database 
> Subject: Re: [sqlite] UPDATE question
> Message-ID:
>  >
> Content-Type: text/plain; charset="us-ascii"
>
> No one commented on my second thread (written after I actually understood
> the problem!).
>
> But, I proposed a two update sequence to do it.
>
> UPDATE table SET Sequence = -(Sequence + 1) WHERE Sequence >=
> seq_to_insert AND Name = name_to_insert
>
> UPDATE table SET Sequence = -Sequence WHERE Sequence < 0  AND Name =
> name_to_insert
>
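
For completeness, the full sequence for the Blue example would then look
something like this (a sketch, using the table t from the earlier example in
this thread):

UPDATE t SET Sequence = -(Sequence + 1) WHERE Sequence >= 3 AND Name = 'Blue';
UPDATE t SET Sequence = -Sequence       WHERE Sequence < 0  AND Name = 'Blue';
INSERT INTO t (Name, Sequence) VALUES ('Blue', 3);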



Pete
lcSQL Software 


Re: [sqlite] UPDATE question

2013-09-06 Thread ibrahim

On 06.09.2013 18:30, Dominique Devienne wrote:

On Fri, Sep 6, 2013 at 5:41 PM, ibrahim  wrote:



You can let sqlite handle the creation of a temporary table by :

update t1 set seq=seq+1 where pk in (select pk from t1 where seq >= 3
order by seq desc) ;

afterwards you can insert.


Unless I'm misunderstanding you, I already tried that in an earlier post of
this thread, and that didn't work. Give it a try. --DD
Perhaps you should really try it out, because my suggestion is totally
different from yours: it uses a temporary table, and it works.


create table t1 (pk integer primary key, name text, seq integer) ;
insert into t1 (pk, name, seq) values (1, "blue", 1) ;
insert into t1 (pk, name, seq) values (2, "blue", 2) ;
insert into t1 (pk, name, seq) values (3, "blue", 3) ;
insert into t1 (pk, name, seq) values (4, "blue", 4) ;
insert into t1 (pk, name, seq) values (5, "blue", 5) ;

sqlite> select * from t1 ;
1|blue|1
2|blue|2
3|blue|3
4|blue|4
5|blue|5


sqlite> update t1 set seq=seq+1 where pk in (select pk from t1 where seq 
>= 3 order by seq desc) ;

sqlite> select * from t1 ;
1|blue|1
2|blue|2
3|blue|4
4|blue|5
5|blue|6


sqlite> insert into t1 (pk, name, seq) values (6, 'blue', 3) ;
sqlite> select * from t1 ;
1|blue|1
2|blue|2
3|blue|4
4|blue|5
5|blue|6
6|blue|3






Re: [sqlite] Query preperation time does not scale linearly with growth of no. of tables

2013-09-06 Thread Kevin Benson
On Fri, Sep 6, 2013 at 1:29 PM, Harmen de Jong - CoachR Group B.V. <
har...@coachr.com> wrote:

> On 6 sep. 2013, at 18:42, "Igor Tandetnik"  wrote:
> > If I recall correctly, query planner's behavior is worst-case quadratic
> in the number of tables participating in the query. This includes tables
> mentioned directly, and also those pulled in indirectly via views, triggers
> or foreign keys.
>
> Ok, maybe that explains it. Do you remember some topics or explanations
> that cover this more in depth? At least it will be worth for us to do some
> debugging on the SQLite code to see if we can pin point this behaviour and
> see if we can work around.



Dr. Hipp does a little bit of explaining on this topic, generally, in his
replies on this thread:

http://www.mail-archive.com/sqlite-users@sqlite.org/msg78602.html

--
   --
  --
 --Ô¿Ô--
K e V i N


Re: [sqlite] Query preperation time does not scale linearly with growth of no. of tables

2013-09-06 Thread Harmen de Jong - CoachR Group B . V .
On 6 sep. 2013, at 18:42, "Igor Tandetnik"  wrote:
> If I recall correctly, query planner's behavior is worst-case quadratic in 
> the number of tables participating in the query. This includes tables 
> mentioned directly, and also those pulled in indirectly via views, triggers 
> or foreign keys.

Ok, maybe that explains it. Do you remember some topics or explanations that
cover this more in depth? At least it will be worth it for us to do some
debugging on the SQLite code to see if we can pinpoint this behaviour and see
if we can work around it.


> If I may be so bold, I would say that a design that calls for a database with 
> 10,000 tables doesn't feel right 

The main reason for this is that the application is used to do research on and
examine the quality of processes that take place within organizations. For this
research the users can create highly customized research forms with a varying
number of columns, varying field types and a lot of specific restrictions,
which we store in tables that are generated on the fly with the proper indexes
(later on, the research results are stored in these tables and used for
real-time dashboard reporting and for downloading reports that are generated
with the latest data).

Furthermore, the data that is examined and imported into the application
contains a lot of privacy-sensitive data. Since we already create these
separated tables, we use this as some sort of 'automated' security wall to rule
out the risk of users from different organizations reaching or hacking into
each other's privacy-sensitive data in case programming weaknesses or mistakes
are made.

For this project we went on board with SQLite (which we already used in several
other projects) and have built our own 'SQL server' app that takes care of
things like:
- Asynchronous real-time backups to a secondary location (a sort of binary
logging as found in other DBs, though our implementation is not binary).
- Automatically splitting time-intensive writes and updates to prevent these
from locking the app (since SQLite has DB-level locking) for users that perform
simple tasks that can be completed fast.
- This 'stable' server app also rules out startup times of the database in the
case of application crashes (since the DB is several gigabytes in size, it
takes SQLite a few seconds to 'initialize'), thereby hiding application crashes
from the users (who access the app through a web interface).
- Automatically backing up the DB in compressed and encrypted format.

Best regards,
Harmen


Re: [sqlite] Query preperation time does not scale linearly with growth of no. of tables

2013-09-06 Thread Igor Tandetnik

On 9/6/2013 10:35 AM, Harmen de Jong - CoachR Group B.V. wrote:

We ran into an issue where specific queries are geting non linearly slower when 
the total number of tables grows.


If I recall correctly, query planner's behavior is worst-case quadratic 
in the number of tables participating in the query. This includes tables 
mentioned directly, and also those pulled in indirectly via views, 
triggers or foreign keys.


If I may be so bold, I would say that a design that calls for a database 
with 10,000 tables doesn't feel right to me.

--
Igor Tandetnik



Re: [sqlite] UPDATE question

2013-09-06 Thread ibrahim

On 05.09.2013 20:20, Peter Haworth wrote:

I have a table with the following (simplified) structure

PKey     INTEGER PRIMARY KEY
Name     TEXT
Sequence INTEGER

The Name/Sequence pair of columns is defined as UNIQUE

I need to insert a new row into a point between two sequence numbers.  For
example, if the existing rows are:

Name   Sequence
---
Blue   1
Blue   2
Blue   3
Blue   4
Blue   5

... I might need to insert a new Blue/3.

Currently I'm doing this by selecting the Pkey values of the entries with
sequence >=3, sorting them by descending sequence, then a loop with an
UPDATE to set the sequence to sequence+1 for each PKey value so 5 becomes
6, 4 becomes 5, 3 becomes 4.  Then I INSERT a new Blue/3 row.

That works fine but wondering if there might be a single UPDATE statement
that could do this for me.  I can use the WHERE clause to select sequence
3,4, and 5 but  the UPDATE has to process the rows in descending sequence
order to avoid UNIQUE conflicts.


Pete
lcSQL Software 


You can let sqlite handle the creation of a temporary table by :

update t1 set seq=seq+1 where pk in (select pk from t1 where seq >= 3 
order by seq desc) ;


afterwards you can insert.

If you also want to change the pk order, just reset that too in the update.



Re: [sqlite] How to use aux_data effectively?

2013-09-06 Thread Dominique Devienne
On Fri, Sep 6, 2013 at 5:51 PM, Simon Davies
wrote:

> On 6 September 2013 16:38, Pepijn Van Eeckhoudt 
> wrote:
> > Hi,
> >
> > In the extension I'm developing (https://bitbucket.org/luciad/libgpkg)
> > I'm currently adding support for queries like:
> > select Distance(
> >   GeomFromText('Point(13.457 3)'),
> >   geometry
> > ) from table;
> >
> > GeomFromText takes a string and outputs a geometry blob
> > Distance takes two geometry blobs and returns a double
> >
> > In order to speed up the distance function I was wondering if I could
> > use aux_data to cache the parsed version of the first parameter since
> > this remains the same for every call. So far I haven't been able to get
> > this to work though, aux_data is always NULL, no matter what I try.
> >
> > My hunch is that this is because the first parameter is the result of a
> > function call which could in theory return different values for each row
> > even if the input parameters are constant. Is that correct?
> >
> > Are there any other ways to kind of memoize the GeomFromText function
> > (or the parameters to distance) besides aux_data?
>
> select Distance( constGeom, geometry ) from table, (select
> GeomFromText('Point(13.457 3)') as constGeom );
>

Clever. Thanks for that.

Or make your Distance function implementation do the GeomFromText
implicitly, with a bit of duck-typing.

You can test the argument's type: if it is a blob, use it as-is (hoping it's a
"geometry" blob in all cases), and if it is text, do the GeomFromText()
implicitly.

Since it's back to being a (text) constant, you can use aux_data again. --DD
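
A rough sketch of that duck-typing idea (hypothetical helper names, not actual
libgpkg code; error handling trimmed):

typedef struct geom geom_t;                /* opaque parsed geometry */
geom_t *geom_from_wkt(const char *wkt);    /* hypothetical helpers   */
geom_t *geom_from_blob(const void *b, int len);
double  geom_distance(const geom_t *a, const geom_t *b);
void    geom_free(void *g);

static void distance_ducktyped(sqlite3_context *ctx, int n, sqlite3_value **args){
  geom_t *g1 = sqlite3_get_auxdata(ctx, 0);
  int cached = (g1 != NULL);
  if( !cached ){
    if( sqlite3_value_type(args[0])==SQLITE_TEXT ){
      /* duck-typing: a text argument is treated as WKT */
      g1 = geom_from_wkt((const char*)sqlite3_value_text(args[0]));
    }else{
      g1 = geom_from_blob(sqlite3_value_blob(args[0]), sqlite3_value_bytes(args[0]));
    }
    if( g1==NULL ){ sqlite3_result_error(ctx, "bad geometry", -1); return; }
  }
  geom_t *g2 = geom_from_blob(sqlite3_value_blob(args[1]), sqlite3_value_bytes(args[1]));
  sqlite3_result_double(ctx, geom_distance(g1, g2));
  geom_free(g2);
  if( !cached ){
    /* only pays off when the first argument is a constant literal; g1 must not
       be touched after this call, since SQLite may free it immediately */
    sqlite3_set_auxdata(ctx, 0, g1, geom_free);
  }
}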


Re: [sqlite] How to use aux_data effectively?

2013-09-06 Thread Pepijn Van Eeckhoudt

On 06 Sep 2013, at 17:58, Dominique Devienne  wrote:

>> select Distance( constGeom, geometry ) from table, (select
>> GeomFromText('Point(13.457 3)') as constGeom );
> 
> Clever. Thanks for that.

Simple and elegant. Thanks for the idea, I'll give it a try.

> Or make your Distance function implementation do the GeomFromText
> implicitly, with a bit of duck-typing.

The semantics of Distance (or better ST_Distance) are defined in the ISO SQL/MM 
part 3 and OGC Simple Feature/SQL specs. I would prefer to avoid non-specified 
behaviour. 

Pepijn



Re: [sqlite] UPDATE question

2013-09-06 Thread Dominique Devienne
On Fri, Sep 6, 2013 at 5:41 PM, ibrahim  wrote:


> You can let sqlite handle the creation of a temporary table by :
>
> update t1 set seq=seq+1 where pk in (select pk from t1 where seq >= 3
> order by seq desc) ;
>
> afterwards you can insert.


Unless I'm misunderstanding you, I already tried that in an earlier post of
this thread, and that didn't work. Give it a try. --DD


Re: [sqlite] How to use aux_data effectively?

2013-09-06 Thread Dominique Devienne
On Fri, Sep 6, 2013 at 6:14 PM, Pepijn Van Eeckhoudt <
pep...@vaneeckhoudt.net> wrote:

> On 06 Sep 2013, at 17:58, Dominique Devienne  wrote:
>
> >> select Distance( constGeom, geometry ) from table, (select
> >> GeomFromText('Point(13.457 3)') as constGeom );
> >
> > Clever. Thanks for that.
>
> Simple and elegant. Thanks for the idea, I'll give it a try.
>
> > Or make your Distance function implementation do the GeomFromText
> > implicitly, with a bit of duck-typing.
>
> The semantics of Distance (or better ST_Distance) are defined in the ISO
> SQL/MM part 3 and OGC Simple Feature/SQL specs. I would prefer to avoid
> non-specified behaviour
>

Well, given the fact that SQLite does not have user-defined types, the spec
is already abused by using a blob for the "geometry" type. And the duck-typed
query reads more naturally IMHO than joining against a scalar subquery. And
aux-data use in Distance would make it just as efficient. But I get your
point too. My $0.02 :) --DD


Re: [sqlite] How to use aux_data effectively?

2013-09-06 Thread Dominique Devienne
On Fri, Sep 6, 2013 at 5:40 PM, Richard Hipp  wrote:

> On Fri, Sep 6, 2013 at 11:38 AM, Pepijn Van Eeckhoudt <
> pep...@vaneeckhoudt.net> wrote:
>
> > In order to speed up the distance function I was wondering if I could
> > use aux_data to cache the parsed version of the first parameter since
> > this remains the same for every call. So far I haven't been able to get
> > this to work though, aux_data is always NULL, no matter what I try.
> >
> > My hunch is that this is because the first parameter is the result of a
> > function call which could in theory return different values for each row
> > even if the input parameters are constant. Is that correct?
> >
>
> Correct.


Would be nice to be able to mark a function as "deterministic" and allow
aux-data for such function results when called with constant arguments.

In fact, such deterministic function calls with constant arguments could
even be evaluated at "parse time", with their results used directly
(inlined) in the VDBE, in theory.

Just speculations from someone who does not understand SQLite's
implementation. :) --DD


[sqlite] Query preperation time does not scale linearly with growth of no. of tables

2013-09-06 Thread Harmen de Jong - CoachR Group B . V .
We ran into an issue where specific queries are getting non-linearly slower as
the total number of tables grows.

Example 1 (not slow):

Database has table A.
Database has 1,000 other tables with a foreign key to table A.
A row is deleted from table A (no deletion of actual data in other tables is
involved).

Example 2 (more than 50 times slower than example 1):

Database has table A.
Database has 10,000 other tables with a foreign key to table A.
A row is deleted from table A (no deletion of actual data in other tables is
involved).

Does anybody have an idea why the preparation time of the query is scaling up 
so fast in the above examples?
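
To make the scenario concrete, a sketch of the kind of schema involved (names
made up; with foreign keys enabled, every child table referencing A has to be
taken into account when the DELETE is prepared):

PRAGMA foreign_keys = ON;

CREATE TABLE A (id INTEGER PRIMARY KEY);

-- repeated for child_1 .. child_10000
CREATE TABLE child_1 (
    id   INTEGER PRIMARY KEY,
    a_id INTEGER REFERENCES A(id)
);

-- preparing this statement is the step that slows down
DELETE FROM A WHERE id = 1;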





Best regards,

Harmen de Jong
CoachR Group B.V.


Re: [sqlite] How to use aux_data effectively?

2013-09-06 Thread Richard Hipp
On Fri, Sep 6, 2013 at 11:38 AM, Pepijn Van Eeckhoudt <
pep...@vaneeckhoudt.net> wrote:

> Hi,
>
> In the extension I'm developing (https://bitbucket.org/luciad/libgpkg)
> I'm currently adding support for queries like:
> select Distance(
>   GeomFromText('Point(13.457 3)'),
>   geometry
> ) from table;
>
> GeomFromText takes a string and outputs a geometry blob
> Distance takes two geometry blobs and returns a double
>
> In order to speed up the distance function I was wondering if I could
> use aux_data to cache the parsed version of the first parameter since
> this remains the same for every call. So far I haven't been able to get
> this to work though, aux_data is always NULL, no matter what I try.
>
> My hunch is that this is because the first parameter is the result of a
> function call which could in theory return different values for each row
> even if the input parameters are constant. Is that correct?
>

Correct.


>
> Are there any other ways to kind of memoize the GeomFromText function
> (or the parameters to distance) besides aux_data?
>
> Thanks,
>
> Pepijn
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] How to use aux_data effectively?

2013-09-06 Thread Pepijn Van Eeckhoudt
Hi,

In the extension I'm developing (https://bitbucket.org/luciad/libgpkg)
I'm currently adding support for queries like:
select Distance(
  GeomFromText('Point(13.457 3)'),
  geometry
) from table;

GeomFromText takes a string and outputs a geometry blob
Distance takes two geometry blobs and returns a double

In order to speed up the distance function I was wondering if I could
use aux_data to cache the parsed version of the first parameter since
this remains the same for every call. So far I haven't been able to get
this to work though, aux_data is always NULL, no matter what I try.

My hunch is that this is because the first parameter is the result of a
function call which could in theory return different values for each row
even if the input parameters are constant. Is that correct?

Are there any other ways to kind of memoize the GeomFromText function
(or the parameters to distance) besides aux_data?
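
For reference, a minimal sketch of the aux_data pattern being attempted, with
hypothetical geom_t/geom_parse/geom_free helpers standing in for the
extension's real ones:

#include <sqlite3.h>

typedef struct geom geom_t;                  /* assumed opaque geometry type */
geom_t *geom_parse(const void *blob, int n); /* assumed blob parser          */
void    geom_free(void *g);                  /* assumed destructor           */

static void distance_func(sqlite3_context *ctx, int argc, sqlite3_value **argv)
{
  /* Reuse a previously parsed geometry for argument 0, if SQLite kept it. */
  geom_t *g = (geom_t *)sqlite3_get_auxdata(ctx, 0);
  if (g == 0) {
    g = geom_parse(sqlite3_value_blob(argv[0]), sqlite3_value_bytes(argv[0]));
    /* Hand it to SQLite; geom_free is called when the aux-data is dropped. */
    sqlite3_set_auxdata(ctx, 0, g, geom_free);
  }
  /* ... parse argv[1], compute the distance, sqlite3_result_double(...) ... */
}

Note that sqlite3_set_auxdata() only pays off when SQLite judges the argument
stable across rows (e.g. a literal or bound parameter), which is exactly the
limitation hit above when the argument is itself a function call.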

Thanks,

Pepijn

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] UPDATE question

2013-09-06 Thread Marc L. Allen
MySQL also uses this implementation.  They acknowledge that it is not SQL 
compliant and that (I never thought of this) you cannot delete a record that 
has a foreign key link to itself.

Postgres apparently has the ability to have deferred checking as of V9, but not 
before then.

Please see:

http://stackoverflow.com/questions/5014700/in-mysql-can-i-defer-referential-integrity-checks-until-commit

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Richard Hipp
Sent: Friday, September 06, 2013 11:07 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] UPDATE question

On Fri, Sep 6, 2013 at 11:00 AM, Dominique Devienne wrote:

> Myself, if I'm "thinking in sets", all implementation details aside, 
> the UPDATE statement looks fine and correct, and I'd have expected 
> SQLite to support it.
>
> But I'm just waiting to read Dr. Hipp's own read on this now. --DD



I'm busy with a different problem and don't have time to study your thread, so 
I'm guessing at the answer:

The UPDATE statement in SQLite operates row-by-row.  The effect of early row 
updates might be visible in later row updates if you contrive a sufficiently 
complex example.  But you really have to go out of your way to do that.  If a 
constraint error happens, the entire UPDATE statement is rolled back (except if 
OR FAIL is specified - see the docs).

Yes, I know this is not "relational".  No, I do not intend to fix it.


> -
>
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


This email and any attachments are only for use by the intended recipient(s) 
and may contain legally privileged, confidential, proprietary or otherwise 
private information. Any unauthorized use, reproduction, dissemination, 
distribution or other disclosure of the contents of this e-mail or its 
attachments is strictly prohibited. If you have received this email in error, 
please notify the sender immediately and delete the original.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] UPDATE question

2013-09-06 Thread Dan Kennedy

On 09/06/2013 10:19 PM, Marc L. Allen wrote:

MySQL also uses this implementation.  They acknowledge that it is not SQL 
compliant and that (I never thought of this), you cannot delete a record that 
has a foreign key link to itself.

Postgres apparently has the ability to have deferred checking as of V9, but not 
before then.


SQLite defers checking FK constraints until the end of
the statement (or the end of the transaction, for
DEFERRABLE INITIALLY DEFERRED constraints). They are the
exception though - all others (CHECK, UNIQUE etc.) are
checked as part of updating/deleting/inserting each row.

Dan.
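
A small illustration of the deferred case (illustrative parent/child tables):

PRAGMA foreign_keys = ON;

CREATE TABLE parent(id INTEGER PRIMARY KEY);
CREATE TABLE child(
  pid INTEGER REFERENCES parent(id) DEFERRABLE INITIALLY DEFERRED
);

BEGIN;
INSERT INTO child(pid) VALUES (1);  -- no matching parent yet, but no error here
INSERT INTO parent(id) VALUES (1);  -- violation resolved before the commit
COMMIT;                             -- deferred FK check passes at commit time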





Please see:

http://stackoverflow.com/questions/5014700/in-mysql-can-i-defer-referential-integrity-checks-until-commit

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Richard Hipp
Sent: Friday, September 06, 2013 11:07 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] UPDATE question

On Fri, Sep 6, 2013 at 11:00 AM, Dominique Devienne wrote:


Myself, if I'm "thinking in sets", all implementation details aside,
the UPDATE statement looks fine and correct, and I'd have expected
SQLite to support it.

But I'm just waiting to read Dr. Hipp's own read on this now. --DD



I'm busy with a different problem and don't have time to study your thread, so 
I'm guessing at the answer:

The UPDATE statement in SQLite operates row-by-row.  The effect of early row 
updates might be visible in later row updates if you contrive a sufficiently 
complex example.  But you really have to go out of your way to do that.  If a 
constraint error happens, the entire UPDATE statement is rolled back (except if 
OR FAIL is specified - see the docs).

Yes, I know this is not "relational".  No, I do not intend to fix it.



-


D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


This email and any attachments are only for use by the intended recipient(s) 
and may contain legally privileged, confidential, proprietary or otherwise 
private information. Any unauthorized use, reproduction, dissemination, 
distribution or other disclosure of the contents of this e-mail or its 
attachments is strictly prohibited. If you have received this email in error, 
please notify the sender immediately and delete the original.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] UPDATE question

2013-09-06 Thread Simon Slavin

On 6 Sep 2013, at 1:50pm, Marc L. Allen  wrote:

> No one commented on my second thread (written after I actually understood the 
> problem!).
> 
> But, I proposed a two update sequence to do it.  
> 
> UPDATE table SET Sequence = -(Sequence + 1) WHERE Sequence >= seq_to_insert 
> AND Name = name_to_insert
> 
> UPDATE table SET Sequence = -Sequence WHERE Sequence < 0  AND Name = 
> name_to_insert
> 
> I've used this system many times to avoid conflicts, but it may not work 
> where the table needs to be accessed concurrently, as rows will sort of 
> disappear temporarily (or at least change to an unusable state).

Yep, this is a fine, acceptable, and reasonably fast work-around.  Nice one.



On 6 Sep 2013, at 5:21am, James K. Lowden  wrote:

> Simon Slavin  wrote:
> 
>> If all the updates are done inside a COMMIT,
>> then the conflict may not be recognised because by the time the first
>> change is written back to the table, the conflicting entry has
>> already been renumbered.
> 
> I was puzzled by this thread, so ran my own little test.  To my dismay,
> UPDATE is not atomic in SQLite.  

Right.  As I posted in my message after testing this, it isn't being done 
correctly.  A conflict isn't a conflict until the write, and the write doesn't 
happen until the COMMIT.  Therefore conflict testing needs to happen at the 
commit, for /all/ the changes in the transaction, whether they're all the 
consequence of one UPDATE or even of separate commands within the same 
transaction.

Possibly another problem that could be fixed in SQLite4.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] UPDATE question

2013-09-06 Thread Marc L. Allen
Nice, but that still requires extra work.

1) Determine if row is already in table.
2) Determine next lower value.
3) Split difference and insert.

There's also the possibility that the higher level APP expects the new row to 
have a sequence number of 3.

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of kyan
Sent: Friday, September 06, 2013 10:41 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] UPDATE question

On Thu, Sep 5, 2013 at 9:20 PM, Peter Haworth  wrote:

> I have a table with the following (simplified) structure
>
> PKeyINTEGER PRIMARY KEY
> NameTEXT
> Sequence INTEGER
>
> The Name/Sequence pair of columns is defined as UNIQUE
>
> I need to insert a new row into a point between two sequence numbers.  
> For example, if the existing rows are:
>
> Name   Sequence
> ---
> Blue   1
> Blue   2
> Blue3
> Blue   4
> Blue   5
>
> ... I might need to insert a new Blue/3.
>
> If the Sequence column need not be consecutive integers but just
specifies order, consider turning it into a float. Then you can insert a row 
between two existing rows with Sequence a and b by using their median (a +
b) / 2 as the new row's Sequence:

Blue   1
Blue   2
--> Blue(2 + 3) / 2 = 2.5
Blue3
Blue   4
Blue   5

and then:

Blue   1
Blue   2
--> Blue2.25
Blue2.5
 Blue3
Blue   4
Blue   5

and so on. This way you avoid having to modify following rows on each insertion.

--
Constantine Yannakopoulos
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


This email and any attachments are only for use by the intended recipient(s) 
and may contain legally privileged, confidential, proprietary or otherwise 
private information. Any unauthorized use, reproduction, dissemination, 
distribution or other disclosure of the contents of this e-mail or its 
attachments is strictly prohibited. If you have received this email in error, 
please notify the sender immediately and delete the original.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] UPDATE question

2013-09-06 Thread Dominique Devienne
On Fri, Sep 6, 2013 at 2:50 PM, Marc L. Allen
wrote:

> No one commented on my second thread (written after I actually understood
> the problem!).
>
> But, I proposed a two update sequence to do it.
>
> UPDATE table SET Sequence = -(Sequence + 1) WHERE Sequence >=
> seq_to_insert AND Name = name_to_insert
> UPDATE table SET Sequence = -Sequence WHERE Sequence < 0  AND Name =
> name_to_insert
>

Because, no offense, that's just a work-around.

The question that matters is whether SQL requires the UPDATE statement to
succeed or not, despite a given implementation possibly having to deal with
transient index violations while processing the rows.

If so, SQLite would be non-compliant in that regard, as of now.
If not, and implementations are free to support or not such UPDATE
statements, then SQLite would ideally document this limitation, since it
departs from major DBMS's like Oracle.

Myself, if I'm "thinking in sets", all implementation details aside, the
UPDATE statement looks fine and correct, and I'd have expected SQLite to
support it.

But I'm just waiting to read Dr. Hipp's own read on this now. --DD
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] UPDATE question

2013-09-06 Thread Richard Hipp
On Fri, Sep 6, 2013 at 11:00 AM, Dominique Devienne wrote:

> Myself, if I'm "thinking in sets", all implementation details aside, the
> UPDATE statement looks fine and correct, and I'd have expected SQLite to
> support it.
>
> But I'm just waiting to read Dr. Hipp's own read on this now. --DD



I'm busy with a different problem and don't have time to study your thread,
so I'm guessing at the answer:

The UPDATE statement in SQLite operates row-by-row.  The effect of early
row updates might be visible in later row updates if you contrive a
sufficiently complex example.  But you really have to go out of your way to
do that.  If a constraint error happens, the entire UPDATE statement is
rolled back (except if OR FAIL is specified - see the docs).

Yes, I know this is not "relational".  No, I do not intend to fix it.
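
For example (a sketch, reusing the jkl table from the repro elsewhere in this
thread):

-- default (ABORT): the constraint error undoes the whole UPDATE
UPDATE jkl SET seq = seq + 1 WHERE seq >= 3 AND name = 'blue';

-- OR FAIL: stops at the failing row but keeps the rows already updated
UPDATE OR FAIL jkl SET seq = seq + 1 WHERE seq >= 3 AND name = 'blue';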


> -
>
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] UPDATE question

2013-09-06 Thread kyan
On Thu, Sep 5, 2013 at 9:20 PM, Peter Haworth  wrote:

> I have a table with the following (simplified) structure
>
> PKeyINTEGER PRIMARY KEY
> NameTEXT
> Sequence INTEGER
>
> The Name/Sequence pair of columns is defined as UNIQUE
>
> I need to insert a new row into a point between two sequence numbers.  For
> example, if the existing rows are:
>
> Name   Sequence
> ---
> Blue   1
> Blue   2
> Blue3
> Blue   4
> Blue   5
>
> ... I might need to insert a new Blue/3.
>
If the Sequence column need not be consecutive integers but just
specifies order, consider turning it into a float. Then you can insert a row
between two existing rows with Sequence a and b by using their median (a +
b) / 2 as the new row's Sequence:

Blue   1
Blue   2
--> Blue(2 + 3) / 2 = 2.5
Blue3
Blue   4
Blue   5

and then:

Blue   1
Blue   2
--> Blue2.25
Blue2.5
 Blue3
Blue   4
Blue   5

and so on. This way you avoid having to modify following rows on each
insertion.
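
A sketch of what the insert looks like, assuming a table t(Name TEXT, Sequence)
where Sequence is declared with REAL or NUMERIC affinity:

-- order a new row between the existing Sequence values 2 and 3
INSERT INTO t (Name, Sequence) VALUES ('Blue', (2 + 3) / 2.0);   -- 2.5

-- readers only care about relative order, not the numbers themselves
SELECT Name, Sequence FROM t WHERE Name = 'Blue' ORDER BY Sequence;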

--
Constantine Yannakopoulos
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] UNSIGNED BIG INT in SQLite 3.7.11.

2013-09-06 Thread Hick Gunter
Of course it does. But if the OP inserted his as yet unspecified large value 
using the sqlite3_bind_int() call, then the shell would render it as a negative 
value. He needs to use the sqlite3_bind_int64() interface instead...
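
A minimal sketch of the difference, assuming stmt is an already-prepared
statement whose first parameter takes the value:

#include <sqlite3.h>
#include <stdint.h>

/* Bind a value that may exceed 0x7FFFFFFF. sqlite3_bind_int() takes a plain
** 32-bit int and would truncate it; sqlite3_bind_int64() keeps all 64 bits.
** SQLite stores it as a signed integer, so cast back to uint64_t on read. */
static int bind_big(sqlite3_stmt *stmt, uint64_t big)
{
  return sqlite3_bind_int64(stmt, 1, (sqlite3_int64)big);
}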

-Ursprüngliche Nachricht-
Von: Richard Hipp [mailto:d...@sqlite.org]
Gesendet: Freitag, 06. September 2013 16:04
An: General Discussion of SQLite Database
Betreff: Re: [sqlite] UNSIGNED BIG INT in SQLite 3.7.11.

On Fri, Sep 6, 2013 at 10:00 AM, Hick Gunter  wrote:

>
> My guess is that you have exceeded the range of a 32 bit integer
> (0x7FFFFFFF) which will be rendered as a negative number by the SQLite
> shell.
>

The shell works just fine with 64-bit signed integers:

SQLite version 3.8.0.1 2013-08-29 17:35:01
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> select 1024*1024*1024*1024*1024;
1125899906842624



--
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


--
 Gunter Hick
Software Engineer
Scientific Games International GmbH
Klitschgasse 2 – 4, A - 1130 Vienna, Austria
FN 157284 a, HG Wien
Tel: +43 1 80100 0
E-Mail: h...@scigames.at

This e-mail is confidential and may well also be legally privileged. If you 
have received it in error, you are on notice as to its status and accordingly 
please notify us immediately by reply e-mail and then delete this message from 
your system. Please do not copy it or use it for any purposes, or disclose its 
contents to any person as to do so could be a breach of confidence. Thank you 
for your cooperation.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] UNSIGNED BIG INT in SQLite 3.7.11.

2013-09-06 Thread Richard Hipp
On Fri, Sep 6, 2013 at 10:00 AM, Hick Gunter  wrote:

>
> My guess is that you have exceeded the range of a 32 bit integer
> (0x7FFFFFFF) which will be rendered as a negative number by the SQLite
> shell.
>

The shell works just fine with 64-bit signed integers:

SQLite version 3.8.0.1 2013-08-29 17:35:01
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> select 1024*1024*1024*1024*1024;
1125899906842624



-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] UNSIGNED BIG INT in SQLite 3.7.11.

2013-09-06 Thread Hick Gunter
What is the size in bits of your UNSIGNED BIG INT? SQLite supports only SIGNED 
integers in 32 and 64 bits, so to properly store/retrieve UNSIGNED values you 
need to typecast.

My guess is that you have exceeded the range of a 32 bit integer (0x7FFFFFFF) 
which will be rendered as a negative number by the SQLite shell.
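
The retrieve side of that typecast might look like this (a sketch; stmt is an
already-stepped statement):

#include <sqlite3.h>
#include <stdint.h>

/* Read back a 64-bit unsigned value that was stored through the signed
** 64-bit interface; the bit pattern is preserved, the cast only changes
** the interpretation back to unsigned. */
static uint64_t column_u64(sqlite3_stmt *stmt, int iCol)
{
  return (uint64_t)sqlite3_column_int64(stmt, iCol);
}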

-Ursprüngliche Nachricht-
Von: Filip Curcic [mailto:curcic.fi...@gmail.com]
Gesendet: Freitag, 06. September 2013 13:21
An: sqlite-users@sqlite.org
Betreff: [sqlite] UNSIGNED BIG INT in SQLite 3.7.11.

Hello,

I am using Android 4.2 with SQLite 3.7.11 database.
I am not able to insert UNSIGNED BIG INT data properly. It looks to me 
like an overflow, because when I read it out I get a negative number.

That only happens on this device; I have another Android 2.3 device which runs 
an older version of SQLite, and it does not happen.

Is this maybe a known problem?

Thank you

--
Filip Curcic
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


--
 Gunter Hick
Software Engineer
Scientific Games International GmbH
Klitschgasse 2 – 4, A - 1130 Vienna, Austria
FN 157284 a, HG Wien
Tel: +43 1 80100 0
E-Mail: h...@scigames.at

This e-mail is confidential and may well also be legally privileged. If you 
have received it in error, you are on notice as to its status and accordingly 
please notify us immediately by reply e-mail and then delete this message from 
your system. Please do not copy it or use it for any purposes, or disclose its 
contents to any person as to do so could be a breach of confidence. Thank you 
for your cooperation.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite support not recognized on debian lighttpd

2013-09-06 Thread Clemens Ladisch
GoogleWell wrote:
> I installed sqlite on my debian lighttpd (sqeeze) server. It is needed for
> an application (business directory) used on a Joomla installation. This
> particular component (SOBIPro) keeps telling me that sqlite is not
> installed on my server.

SQLite is an embedded database and is usually compiled directly into
the program/library that is using it.  If your component accesses
SQLite through PHP, you need to install a PHP version that includes
SQLite, or some SQLite plugin for PHP, instead of the generic SQLite
library.
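
For example, on a stock Debian squeeze setup that would be something along
these lines (package and service names assumed; check what your PHP actually
loads afterwards):

apt-get install php5-sqlite
/etc/init.d/lighttpd restart
php -m | grep -i sqlite     # should list pdo_sqlite and/or sqlite3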


Regards,
Clemens
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] UPDATE question

2013-09-06 Thread Igor Tandetnik

On 9/6/2013 8:50 AM, Marc L. Allen wrote:

But, I proposed a two update sequence to do it.

UPDATE table SET Sequence = -(Sequence + 1) WHERE Sequence >= seq_to_insert AND 
Name = name_to_insert
UPDATE table SET Sequence = -Sequence WHERE Sequence < 0  AND Name = 
name_to_insert

I've used this system many times to avoid conflicts, but it may not work where 
the table needs to be accessed concurrently, as rows will sort of disappear 
temporarily (or at least change to an unusable state).


Well, that's exactly what transactions are there for.
--
Igor Tandetnik

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] UPDATE question

2013-09-06 Thread Marc L. Allen
I don't think it's a bug.  I don't believe there's any defined rule for how SQL 
should behave, is there?  The updates are done serially not atomically.  If the 
rows happen to be processed in reverse order, then no constraint is violated.  
In fact, if there was a way to define the order the update was performed, it 
might be useful as a tool to solve this kind of problem.  

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Dominique Devienne
Sent: Friday, September 06, 2013 3:28 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] UPDATE question


By forcing the "physical order" of the rows to be reversed, the UPDATE succeeds.
I suspect this is just a bug, and Dr Hipp will fix it (and if not document it 
somehow).




This email and any attachments are only for use by the intended recipient(s) 
and may contain legally privileged, confidential, proprietary or otherwise 
private information. Any unauthorized use, reproduction, dissemination, 
distribution or other disclosure of the contents of this e-mail or its 
attachments is strictly prohibited. If you have received this email in error, 
please notify the sender immediately and delete the original.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] sqlite support not recognized on debian lighttpd

2013-09-06 Thread GoogleWell
Hi everyone,
Very grateful this list exists and hope that in a few months or years I
will be able to contribute! But for now I am someone with questions :(
because I am very, very new at all this.

I installed sqlite on my debian lighttpd (sqeeze) server. It is needed for
an application (business directory) used on a Joomla installation. This
particular component (SOBIPro) keeps telling me that sqlite is not
installed on my server.
Any idea what I might have done wrong?

-- 
zappy @
Googlewell 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] UPDATE question

2013-09-06 Thread Marc L. Allen
No one commented on my second thread (written after I actually understood the 
problem!).

But, I proposed a two update sequence to do it.  

UPDATE table SET Sequence = -(Sequence + 1) WHERE Sequence >= seq_to_insert AND 
Name = name_to_insert

UPDATE table SET Sequence = -Sequence WHERE Sequence < 0  AND Name = 
name_to_insert

I've used this system many times to avoid conflicts, but it may not work where 
the table needs to be accessed concurrently, as rows will sort of disappear 
temporarily (or at least change to an unusable state).
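
Applied to the jkl(name, seq) example from elsewhere in this thread, the whole
sequence might look like this (a sketch, wrapped in a transaction so the
intermediate negative values are not visible to other readers):

BEGIN;
UPDATE jkl SET seq = -(seq + 1) WHERE seq >= 3 AND name = 'blue';
UPDATE jkl SET seq = -seq       WHERE seq < 0  AND name = 'blue';
INSERT INTO jkl VALUES ('blue', 3);
COMMIT;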

-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Dominique Devienne
Sent: Friday, September 06, 2013 3:28 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] UPDATE question

On Fri, Sep 6, 2013 at 6:21 AM, James K. Lowden wrote:

> Here's how Marc L. Allen's query should work (different DBMS):
>
> $ bsqldb  < /tmp/sql
>PKey  Name   Sequence
> ---  --  ---
>   1  Blue  1
>   2  Blue  2
>   3  Blue  4
>   4  Blue  5
>   5  Blue  6
>   6  Blue  3
> 6 rows affected
>
> Note that the final insert is assigned the next auto-generated PKey 
> (6), and the old 3 is now 4, etc.
>

FWIW, a repro, taking the unrelated PKey out of the picture:

C:\Users\DDevienne>sqlite3
SQLite version 3.7.15.2 2013-01-09 11:53:05
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> create table jkl (name text, seq number, unique (name, seq)); 
sqlite> insert into jkl values ('blue', 1), ('blue', 2), ('blue', 3),
('blue', 4), ('blue', 5);
sqlite> select * from jkl;
blue|1
blue|2
blue|3
blue|4
blue|5
sqlite> update jkl set seq = seq + 1 where seq >= 3 and name = 'blue';
Error: columns name, seq are not unique
sqlite>

I tried to "ruse" and use

sqlite> update jkl set seq = seq + 1 where name = 'blue' and seq in (select
seq from jkl where seq >= 3 order by seq desc);
Error: columns name, seq are not unique

but of course you cannot influence the processing order SQLite uses. OK, you 
can in a way, see below:

sqlite> create table jk2 as select * from jkl order by seq desc;
sqlite> select * from jk2;
blue|5
blue|4
blue|3
blue|2
blue|1
sqlite> update jk2 set seq = seq + 1 where seq >= 3 and name = 'blue'; 
sqlite> select * from jk2;
blue|6
blue|5
blue|4
blue|2
blue|1
sqlite> insert into jk2 values ('blue', 3);
sqlite> select * from jk2;
blue|6
blue|5
blue|4
blue|2
blue|1
blue|3
sqlite>

By forcing the "physical order" of the rows to be reversed, the UPDATE succeeds.
I suspect this is just a bug, and Dr Hipp will fix it (and if not document it 
somehow).

For reference, the same in Oracle:

Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production 
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create table jkl
  2  ( name varchar2(64)
  3  , seq  number
  4  , unique (name, seq)
  5  );

Table created.

SQL> insert into jkl values ('blue', 1);

1 row created.

SQL> insert into jkl values ('blue', 2);

1 row created.

SQL> insert into jkl values ('blue', 3);

1 row created.

SQL> insert into jkl values ('blue', 4);

1 row created.

SQL> insert into jkl values ('blue', 5);

1 row created.

SQL> commit;

Commit complete.

SQL> column name format a16;
SQL> select * from jkl;

NAMESEQ
 --
blue  1
blue  2
blue  3
blue  4
blue  5

SQL> update jkl set seq = seq + 1 where seq >= 3 and name = 'blue';

3 rows updated.

SQL> select * from jkl;

NAMESEQ
 --
blue  1
blue  2
blue  4
blue  5
blue  6

SQL> insert into jkl values ('blue', 3);

1 row created.

SQL> select * from jkl;

NAMESEQ
 --
blue  1
blue  2
blue  4
blue  5
blue  6
blue  3

6 rows selected.

SQL>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


This email and any attachments are only for use by the intended recipient(s) 
and may contain legally privileged, confidential, proprietary or otherwise 
private information. Any unauthorized use, reproduction, dissemination, 
distribution or other disclosure of the contents of this e-mail or its 
attachments is strictly prohibited. If you have received this email in error, 
please notify the sender immediately and delete the original.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

[sqlite] UNSIGNED BIG INT in SQLite 3.7.11.

2013-09-06 Thread Filip Curcic
Hello,

I am using Android 4.2 with SQLite 3.7.11 database.
I am not able to insert UNSIGNED BIG INT data properly. It looks to
me like an overflow, because when I read it out I get a negative number.

That only happens on this device; I have another Android 2.3 device which
runs an older version of SQLite, and it does not happen.

Is this maybe a known problem?

Thank you

-- 
Filip Ćurčić
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] UPDATE question

2013-09-06 Thread Dominique Devienne
On Fri, Sep 6, 2013 at 6:21 AM, James K. Lowden wrote:

> Here's how Marc L. Allen's query should work (different DBMS):
>
> $ bsqldb  < /tmp/sql
>PKey  Name   Sequence
> ---  --  ---
>   1  Blue  1
>   2  Blue  2
>   3  Blue  4
>   4  Blue  5
>   5  Blue  6
>   6  Blue  3
> 6 rows affected
>
> Note that the final insert is assigned the next auto-generated
> PKey (6), and the old 3 is now 4, etc.
>

FWIW, a repro, taking the unrelated PKey out of the picture:

C:\Users\DDevienne>sqlite3
SQLite version 3.7.15.2 2013-01-09 11:53:05
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> create table jkl (name text, seq number, unique (name, seq));
sqlite> insert into jkl values ('blue', 1), ('blue', 2), ('blue', 3),
('blue', 4), ('blue', 5);
sqlite> select * from jkl;
blue|1
blue|2
blue|3
blue|4
blue|5
sqlite> update jkl set seq = seq + 1 where seq >= 3 and name = 'blue';
Error: columns name, seq are not unique
sqlite>

I tried to "ruse" and use

sqlite> update jkl set seq = seq + 1 where name = 'blue' and seq in (select
seq from jkl where seq >= 3 order by seq desc);
Error: columns name, seq are not unique

but of course you cannot influence the processing order SQLite uses. OK,
you can in a way, see below:

sqlite> create table jk2 as select * from jkl order by seq desc;
sqlite> select * from jk2;
blue|5
blue|4
blue|3
blue|2
blue|1
sqlite> update jk2 set seq = seq + 1 where seq >= 3 and name = 'blue';
sqlite> select * from jk2;
blue|6
blue|5
blue|4
blue|2
blue|1
sqlite> insert into jk2 values ('blue', 3);
sqlite> select * from jk2;
blue|6
blue|5
blue|4
blue|2
blue|1
blue|3
sqlite>

By forcing the "physical order" of the rows to be reversed, the UPDATE
succeeds.
I suspect this is just a bug, and Dr Hipp will fix it (and if not document
it somehow).

For reference, the same in Oracle:

Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing
options

SQL> create table jkl
  2  ( name varchar2(64)
  3  , seq  number
  4  , unique (name, seq)
  5  );

Table created.

SQL> insert into jkl values ('blue', 1);

1 row created.

SQL> insert into jkl values ('blue', 2);

1 row created.

SQL> insert into jkl values ('blue', 3);

1 row created.

SQL> insert into jkl values ('blue', 4);

1 row created.

SQL> insert into jkl values ('blue', 5);

1 row created.

SQL> commit;

Commit complete.

SQL> column name format a16;
SQL> select * from jkl;

NAMESEQ
 --
blue  1
blue  2
blue  3
blue  4
blue  5

SQL> update jkl set seq = seq + 1 where seq >= 3 and name = 'blue';

3 rows updated.

SQL> select * from jkl;

NAMESEQ
 --
blue  1
blue  2
blue  4
blue  5
blue  6

SQL> insert into jkl values ('blue', 3);

1 row created.

SQL> select * from jkl;

NAMESEQ
 --
blue  1
blue  2
blue  4
blue  5
blue  6
blue  3

6 rows selected.

SQL>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] UPDATE question

2013-09-06 Thread Dominique Devienne
On Fri, Sep 6, 2013 at 6:21 AM, James K. Lowden wrote:

> That's perfectly good SQL.  SQLite is simply not executing the
> update atomically.
>
> Anyone tempted to protest may be forgetting "atomic" means more than
> "all or nothing". It also means the DBMS may execute the transaction
> however it sees fit, at the cost of ensuring that constraints remain in
> effect upon commit but *only* upon commit. ...


> Constraints hold for the whole database at all times whenever the user
> can see the data.  Any update is valid if it can logically be applied
> and leave the database in a state consistent with its declared
> constraints.
>

FWIW, in the circumstances of the thread below, SQLite does behave as you
described it should in this new UPDATE use-case, i.e. SQLite copes with
temporarily invalid states provided the final state is valid. So it's not
necessarily a philosophical design decision, but maybe an implementation
issue, which is perhaps fixable. My $0.02, before a more authoritative
answer.

http://sqlite.1065341.n5.nabble.com/Order-of-ON-DELETE-CASCADE-specified-in-SQLite-td67681.html
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users