Re: [sqlite] Importing from single-insert-statement SQL dump is 61 times slower than importing from SQL dump with one statement per row

2017-05-30 Thread Keith Medcalf
On Tuesday, 30 May, 2017 10:33, R Smith  said:

> Keith, I think the OP meant he inserted the values using one single
> statement, not one single transaction, as in he did one ginormous INSERT
> INTO t(v1, v2...) VALUES (1, 2),(2,3),(..),,(1,
> 297829872); - 180MB or so worth... Probably lots of data in few rows,
> because he is not hitting any SQLite statement limits - unless those
> were disabled.
> 
> Meaning that it is probably just one VDBE program. That's unless I am
> reading wrong or assuming wrong from the original post - which is always
> possible.

You are correct.  I modified the dump so that it would execute one insert per 
table with all the data in the single insert statement.  It is far slower than 
even inserting each row in its own transaction (autocommit).  Of course, that 
means that some of the insert statements are approaching several hundred 
megabytes long.  It works, but it is very slow.  Most of the time is spent in 
the parser as one would expect.
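
For readers comparing the two shapes, a minimal sketch (using a hypothetical table t, not the actual xmltv schema) looks like this:

-- one giant statement: the entire VALUES list has to be parsed and compiled
-- into a single VDBE program before the first row can be inserted
INSERT INTO t(id, name) VALUES (1,'a'),(2,'b'),(3,'c') /* ... millions more rows ... */;

-- many small statements, one transaction: each INSERT is a tiny program,
-- and the single COMMIT keeps the flush-to-disk count at one
BEGIN;
INSERT INTO t(id, name) VALUES (1,'a');
INSERT INTO t(id, name) VALUES (2,'b');
INSERT INTO t(id, name) VALUES (3,'c');
-- ... millions more ...
COMMIT;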

> On 2017/05/30 6:19 PM, Keith Medcalf wrote:
> > I find quite the opposite.  Using a DUMP file to create a database where
> the first test uses the standard dump (which does the load in a single
> transaction) is enormously faster than the second one, where the BEGIN
> TRANSACTION and COMMIT have been commented out, and thus each insert is
> performed in its own transaction.  The structure does have the indexes
> created while loading ...
> >
> >> wc -l xmltv.sql
> >   5425040 xmltv.sql
> >
> > 2017-05-30  09:43   446,700,424 xmltv.sql
> >
> > With the commands all being loaded in a single transaction:
> >
> >> timethis sqlite test1.db < xmltv.sql
> > TimeThis :  Command Line :  sqlite test1.db
> > TimeThis :  Start Time :  Tue May 30 09:41:12 2017
> > TimeThis :  End Time :  Tue May 30 09:42:14 2017
> > TimeThis :  Elapsed Time :  00:01:02.005
> >
> >
> > With the commands being in their own individual autocommit transactions:
> >   --- still running after 10 minutes
> >   --- still running after 20 minutes
> >   --- gave up after 30 minutes (and it was only 1/100th through
> inserting all the data)
> >
> >
> > The fact of the matter is, that you are much better running larger
> transactions than smaller ones.  The hardware limits the transaction rate
> (max 60 transactions/second on spinning rust -- unless you have lying
> hardware that does not flush properly).  Apparently somewhat limited on a
> machine with a 5GB/s SSD as well...since the write rate was hugely less
> than the channel limit, but the CPU was fully consumed.
> >
> > And just to be clear, putting multiple statements in a transaction does
> not mean they are executed as a single VDBE program.  They are still
> executed one at a time.  The only difference is the number of transactions
> (and hence the commit/flush to disk count).
> >
> 



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Does prepare do arithmetic?

2017-05-30 Thread Richard Hipp
On 5/30/17, James K. Lowden  wrote:
>
> I didn't know sqlite3_column_text converted the result.  I thought the
> result was constant, and the function returned a string
> representation of it.
>

It might be a good optimization to change that, so that
sqlite3_column_text does in fact keep the original value and just
return a temporary string that is freed at the next sqlite3_step() (or
sqlite3_reset() or sqlite3_finalize()).  That would mean that columns
that are constant do not need to be recomputed at each step.  It is
something to look into.  But I don't think it would make much
difference in most applications, so it is not a huge priority.

-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Does prepare do arithmetic?

2017-05-30 Thread James K. Lowden
On Mon, 29 May 2017 16:18:17 +0200
R Smith  wrote:

> Actually what Richard is saying is that you SHOULD do the arithmetic 
> yourself when you can, because you can boldly make assumptions about
> the code that you write, but he, or the SQLite engine to be precise,
> cannot be so bold. It may not be accurate in accepting that nothing
> about the database will change from one evaluation to the next, so it
> has to recompute - but you may optimise that computation out because
> you KNOW your system and you KNOW you won't be changing column
> collations or type affinities etc. in between steps.

I wouldn't characterize his answer that way.  He said that complex
partial constants are computed once, and that recomputing simple
constants wasn't measurably cheaper than storing them.  From what
they've measured, you're safe letting SQLite compute your constants.
Even if it's doing a little extra work, it won't be noticeable.  

What was a little more surprising was why the little extra work is
necessary.  

> calling sqlite3_column_text and thus causing the integer result to be
> converted into a string

I didn't know sqlite3_column_text converted the result.  I thought the
result was constant, and the function returned a string
representation of it.  

Richard is saying that the column-retrieval functions change the type
of the retrieved column for the current row.  Because of that, and
because that change-of-type action is under programmer control, SQLite
must recompute each row anew.  Even for "constant" columns.  

> 2*15-5+11/5<<3*1

In this case, two partials are stored, for efficiency, and the
final operation, shift, is recomputed each time.  Because the
programmer can't affect the types of the partials, they can safely be
memoized. Because the result of the final computation is a column --
whose type *can* be changed by the programmer -- it's recomputed for
each row.  
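
As a concrete illustration (hypothetical table t with a column v, not from the original thread): with SQLite's operator precedence, << binds more loosely than + and -, so the expression works out to 27 << 3 = 216.

SELECT 2*15-5+11/5<<3*1 AS c FROM t;   -- the partials 2*15-5+11/5 (= 27) and 3*1 (= 3)
                                       -- can be memoized; c itself is re-evaluated per row,
                                       -- because a column accessor may have changed its type
SELECT 216 AS c FROM t;                -- doing the arithmetic yourself removes even that work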

--jkl






___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SELECT WHERE with RTREE and second condition slow

2017-05-30 Thread Thomas Flemming

> Then what is "FROM Pois_bb, Pois WHERE...Pois_bb.Id = Pois.Id"?
> That's joining two tables together.

This is just because of the rtree, which is in Pois_bb 
(http://www.sqlite.org/rtree.html); it has nothing to do with the second 
condition "styleid IN .."




On 30.05.2017 at 18:29, David Raymond wrote:

If you scroll down in my previous reply I put the explain query plan outputs in 
with the queries. Guess I should have mentioned that. (Re-copied them below)

It was using the index on StyleId, thinking that was going to be faster. What 
Dr Hipp suggested, adding the unary + operator, turns that into an expression 
rather than the raw field in the eyes of the planner, so it doesn't use that 
index on StyleId.

"but the query is not in two tables.
its two select-where in the same table."
Then what is "FROM Pois_bb, Pois WHERE...Pois_bb.Id = Pois.Id"?
That's joining two tables together. The fields you wanted returned may only be 
from one of them, but it's still a join. The quick version(s) have Pois_bb as 
the outer loop and Pois as the inner loop. The slowed down version had Pois as 
the outer loop and Pois_bb as the inner loop.



SELECT Pois.* FROM Pois_bb, Pois WHERE y0 < -14.8600 AND  y1 > -15.12862
 AND x0 < 30.46203 AND  x1 > 30.00074766
 AND 18 BETWEEN z0 AND z1
 AND Pois_bb.Id = Pois.Id;

Run Time: real 0.109 user 0.00 sys 0.00
selectid|order|from|detail
0|0|0|SCAN TABLE Pois_bb VIRTUAL TABLE INDEX 2:C2E3C0E1B4D5
0|1|1|SEARCH TABLE Pois USING INDEX sqlite_autoindex_Pois_1 (Id=?)


SELECT Pois.* FROM Pois_bb, Pois WHERE y0 < -14.8600 AND  y1 > -15.12862
 AND x0 < 30.46203 AND  x1 > 30.00074766
 AND 18 BETWEEN z0 AND z1
 AND Pois_bb.Id = Pois.Id
 AND styleid IN (9,48,73,200,142,31,219);

Run Time: real 9.422 user 5.132433 sys 4.212027
selectid|order|from|detail
0|0|1|SEARCH TABLE Pois USING INDEX Pois_StyleId (StyleId=?)
0|0|0|EXECUTE LIST SUBQUERY 1
0|1|0|SCAN TABLE Pois_bb VIRTUAL TABLE INDEX 1:


Forcing the order with CROSS JOIN
SELECT Pois.* FROM Pois_bb CROSS JOIN Pois WHERE y0 < -14.8600 AND  y1 > 
-15.12862
 AND x0 < 30.46203 AND  x1 > 30.00074766
 AND 18 BETWEEN z0 AND z1
 AND Pois_bb.Id = Pois.Id
 AND styleid IN (9,48,73,200,142,31,219);

Run Time: real 0.078 user 0.00 sys 0.00
selectid|order|from|detail
0|0|0|SCAN TABLE Pois_bb VIRTUAL TABLE INDEX 2:C2E3C0E1B4D5
0|1|1|SEARCH TABLE Pois USING INDEX sqlite_autoindex_Pois_1 (Id=?)
0|0|0|EXECUTE LIST SUBQUERY 1


With the unary + operator
SELECT Pois.* FROM Pois_bb CROSS JOIN Pois WHERE y0 < -14.8600 AND  y1 > 
-15.12862
 AND x0 < 30.46203 AND  x1 > 30.00074766
 AND 18 BETWEEN z0 AND z1
 AND Pois_bb.Id = Pois.Id
 AND +styleid IN (9,48,73,200,142,31,219);
selectid|order|from|detail
0|0|0|SCAN TABLE Pois_bb VIRTUAL TABLE INDEX 2:C2E3C0E1B4D5
0|1|1|SEARCH TABLE Pois USING INDEX sqlite_autoindex_Pois_1 (Id=?)
0|0|0|EXECUTE LIST SUBQUERY 1



--
/
**   Flemming Software Development CC
**   Thomas Flemming
**   PO Box 81244
**   Windhoek, Namibia
**   http://www.quovadis-gps.com
**   mail  t...@qvgps.com
**   +264 (0)81 3329923
**   +49  (0)6182 8492599
***/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SELECT WHERE with RTREE and second condition slow

2017-05-30 Thread E.Pasma
Thomas Flemming Tue, 30 May 2017 09:43:15 -0700


>> Try putting a "+" symbol before "styleid".  Like this:
>>
>> AND +styleid IN (1351,1362,1371,1374,1376,1542,1595,1597,1643,1762)


> THATS IT !! :-)))
> 
> 50ms with +, and 15000ms without the +
>
> How is that possible?

Hello, best explained here:
http://sqlite.org/optoverview.html#uplus
(a most interesting document anyway)
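
A generic sketch of the effect described there, assuming a hypothetical table t with an indexed column c:

SELECT * FROM t WHERE c IN (1,2,3);    -- bare column: the planner may use the index on c
SELECT * FROM t WHERE +c IN (1,2,3);   -- +c is an expression, not a bare column, so the
                                       -- index on c is disqualified for this term and the
                                       -- planner must pick another plan
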
Regards, E.  Pasma

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SELECT WHERE with RTREE and second condition slow

2017-05-30 Thread David Raymond
If you scroll down in my previous reply I put the explain query plan outputs in 
with the queries. Guess I should have mentioned that. (Re-copied them below)

It was using the index on StyleId, thinking that was going to be faster. What 
Dr Hipp suggested, adding the unary + operator, turns that into an expression 
rather than the raw field in the eyes of the planner, so it doesn't use that 
index on StyleId.

"but the query is not in two tables.
its two select-where in the same table."
Then what is "FROM Pois_bb, Pois WHERE...Pois_bb.Id = Pois.Id"?
That's joining two tables together. The fields you wanted returned may only be 
from one of them, but it's still a join. The quick version(s) have Pois_bb as 
the outer loop and Pois as the inner loop. The slowed down version had Pois as 
the outer loop and Pois_bb as the inner loop.



SELECT Pois.* FROM Pois_bb, Pois WHERE y0 < -14.8600 AND  y1 > -15.12862
AND x0 < 30.46203 AND  x1 > 30.00074766
AND 18 BETWEEN z0 AND z1
AND Pois_bb.Id = Pois.Id;

Run Time: real 0.109 user 0.00 sys 0.00
selectid|order|from|detail
0|0|0|SCAN TABLE Pois_bb VIRTUAL TABLE INDEX 2:C2E3C0E1B4D5
0|1|1|SEARCH TABLE Pois USING INDEX sqlite_autoindex_Pois_1 (Id=?)


SELECT Pois.* FROM Pois_bb, Pois WHERE y0 < -14.8600 AND  y1 > -15.12862
AND x0 < 30.46203 AND  x1 > 30.00074766
AND 18 BETWEEN z0 AND z1
AND Pois_bb.Id = Pois.Id
AND styleid IN (9,48,73,200,142,31,219);

Run Time: real 9.422 user 5.132433 sys 4.212027
selectid|order|from|detail
0|0|1|SEARCH TABLE Pois USING INDEX Pois_StyleId (StyleId=?)
0|0|0|EXECUTE LIST SUBQUERY 1
0|1|0|SCAN TABLE Pois_bb VIRTUAL TABLE INDEX 1:


Forcing the order with CROSS JOIN
SELECT Pois.* FROM Pois_bb CROSS JOIN Pois WHERE y0 < -14.8600 AND  y1 > 
-15.12862
AND x0 < 30.46203 AND  x1 > 30.00074766
AND 18 BETWEEN z0 AND z1
AND Pois_bb.Id = Pois.Id
AND styleid IN (9,48,73,200,142,31,219);

Run Time: real 0.078 user 0.00 sys 0.00
selectid|order|from|detail
0|0|0|SCAN TABLE Pois_bb VIRTUAL TABLE INDEX 2:C2E3C0E1B4D5
0|1|1|SEARCH TABLE Pois USING INDEX sqlite_autoindex_Pois_1 (Id=?)
0|0|0|EXECUTE LIST SUBQUERY 1


With the unary + operator
SELECT Pois.* FROM Pois_bb CROSS JOIN Pois WHERE y0 < -14.8600 AND  y1 > 
-15.12862
AND x0 < 30.46203 AND  x1 > 30.00074766
AND 18 BETWEEN z0 AND z1
AND Pois_bb.Id = Pois.Id
AND +styleid IN (9,48,73,200,142,31,219);
selectid|order|from|detail
0|0|0|SCAN TABLE Pois_bb VIRTUAL TABLE INDEX 2:C2E3C0E1B4D5
0|1|1|SEARCH TABLE Pois USING INDEX sqlite_autoindex_Pois_1 (Id=?)
0|0|0|EXECUTE LIST SUBQUERY 1
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SELECT WHERE with RTREE and second condition slow

2017-05-30 Thread Thomas Flemming

> Try putting a "+" symbol before "styleid".  Like this:
>
> AND +styleid IN (1351,1362,1371,1374,1376,1542,1595,1597,1643,1762)

THATS IT !! :-)))

50ms with +, and 15000ms without the +

How is that possible?



On 30.05.2017 at 17:36, Richard Hipp wrote:

On 5/27/17, Thomas Flemming  wrote:

Hi,

I have a table Pois with points of interest (geogr. coordinate, label,
styleid) where I do regional queries using a rtree-index:

SELECT Pois.* FROM Pois_bb, Pois WHERE  y0 < -14.8600 AND  y1 > -15.12862
AND   x0 < 30.46203 AND  x1 > 30.00074766
AND   18 BETWEEN z0 AND z1
AND   Pois_bb.Id = Pois.Id
That's very fast, 50ms.

The problem is, when I add a second condition to get certain poi-types only
in the area:

AND styleid IN 1351,1362,1371,1374,1376,1542,1595,1597,1643,1762


Try putting a "+" symbol before "styleid".  Like this:

AND +styleid IN (1351,1362,1371,1374,1376,1542,1595,1597,1643,1762)




The query becomes really slow, 800ms.
There is of course also an index on styleid.

I also realized, just this query:

SELECT * FROM Pois WHERE styleid IN
1351,1362,1371,1374,1376,1542,1595,1597,1643,1762

is also slow for the first call. The second call is fast.

(Using SQLite Expert Professional 3.5 for testing).

Any ideas, how to speed this up?

Thanks
Tom


--
/
**   Flemming Software Development CC
**   Thomas Flemming
**   PO Box 81244
**   Windhoek, Namibia
**   http://www.quovadis-gps.com
**   mail  t...@qvgps.com
**   +264 (0)81 3329923
**   +49  (0)6182 8492599
***/






--
/
**   Flemming Software Development CC
**   Thomas Flemming
**   PO Box 81244
**   Windhoek, Namibia
**   http://www.quovadis-gps.com
**   mail  t...@qvgps.com
**   +264 (0)81 3329923
**   +49  (0)6182 8492599
***/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SELECT WHERE with RTREE and second condition slow

2017-05-30 Thread Richard Hipp
On 5/27/17, Thomas Flemming  wrote:
> Hi,
>
> I have a table Pois with points of interest (geogr. coordinate, label,
> styleid) where I do regional queries using a rtree-index:
>
> SELECT Pois.* FROM Pois_bb, Pois WHERE  y0 < -14.8600 AND  y1 > -15.12862
>AND   x0 < 30.46203 AND  x1 > 30.00074766
>AND   18 BETWEEN z0 AND z1
>AND   Pois_bb.Id = Pois.Id
> That's very fast, 50ms.
>
> The problem is, when I add a second condition to get certain poi-types only
> in the area:
>
> AND styleid IN 1351,1362,1371,1374,1376,1542,1595,1597,1643,1762

Try putting a "+" symbol before "styleid".  Like this:

   AND +styleid IN (1351,1362,1371,1374,1376,1542,1595,1597,1643,1762)


>
> The query becomes really slow, 800ms.
> There is of course also an index on styleid.
>
> I also realized, just this query:
>
> SELECT * FROM Pois WHERE styleid IN
> 1351,1362,1371,1374,1376,1542,1595,1597,1643,1762
>
> is also slow for the first call. The second call is fast.
>
> (Using SQLite Expert Professional 3.5 for testing).
>
> Any ideas, how to speed this up?
>
> Thanks
> Tom
>
>
> --
> /
> **   Flemming Software Development CC
> **   Thomas Flemming
> **   PO Box 81244
> **   Windhoek, Namibia
> **   http://www.quovadis-gps.com
> **   mail  t...@qvgps.com
> **   +264 (0)81 3329923
> **   +49  (0)6182 8492599
> ***/
>


-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Importing from single-insert-statement SQL dump is 61 times slower than importing from SQL dump with one statement per row

2017-05-30 Thread R Smith
Keith, I think the OP meant he inserted the values using one single 
statement, not one single transaction, as in he did one ginormous INSERT 
INTO t(v1, v2...) VALUES (1, 2),(2,3),(..),,(1, 
297829872); - 180MB or so worth... Probably lots of data in few rows, 
because he is not hitting any SQLite statement limits - unless those 
were disabled.


Meaning that it is probably just one VDBE program. That's unless I am 
reading wrong or assuming wrong from the original post - which is always 
possible.



On 2017/05/30 6:19 PM, Keith Medcalf wrote:

I find quite the opposite.  Using a DUMP file to create a database where the 
first test uses the standard dump (which does the load in a single transaction) 
is enormously faster than the second one, where the BEGIN TRANSACTION and 
COMMIT have been commented out, and thus each insert is performed in its own 
transaction.  The structure does have the indexes created while loading ...


wc -l xmltv.sql

  5425040 xmltv.sql

2017-05-30  09:43   446,700,424 xmltv.sql

With the commands all being loaded in a single transaction:


timethis sqlite test1.db < xmltv.sql

TimeThis :  Command Line :  sqlite test1.db
TimeThis :  Start Time :  Tue May 30 09:41:12 2017
TimeThis :  End Time :  Tue May 30 09:42:14 2017
TimeThis :  Elapsed Time :  00:01:02.005


With the commands being in their own individual autocommit transactions:
  --- still running after 10 minutes
  --- still running after 20 minutes
  --- gave up after 30 minutes (and it was only 1/100th through inserting all 
the data)


The fact of the matter is, that you are much better running larger transactions 
than smaller ones.  The hardware limits the transaction rate (max 60 
transactions/second on spinning rust -- unless you have lying hardware that 
does not flush properly).  Apparently somewhat limited on a machine with a 
5GB/s SSD as well...since the write rate was hugely less than the channel 
limit, but the CPU was fully consumed.

And just to be clear, putting multiple statements in a transaction does not 
mean they are executed as a single VDBE program.  They are still executed one 
at a time.  The only difference is the number of transactions (and hence the 
commit/flush to disk count).



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Importing from single-insert-statement SQL dump is 61 times slower than importing from SQL dump with one statement per row

2017-05-30 Thread R Smith

On 2017/05/30 2:01 PM, Hick Gunter wrote:

If you stuff all 18MB of your data into a single INSERT, then SQLite will need 
to generate a single program that contains all 18MB of your data (plus code to 
build rows out of that). This will put a heavy strain on memory requirements 
and offset any speed you hope to gain.

The SOP is to put many (1000 magnitude) INSERT statements into one transaction 
to save disk IO on commit.


Correct, and let me just add, the /compressed/ size is 18MB of fairly 
compressible statements, so the real data may well be 180MB or more. 
This can take quite some time to build a query on.


Out of interest Sarge, did you try this on MySQL or Postgres too? What 
was the result?



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SELECT WHERE with RTREE and second condition slow

2017-05-30 Thread Thomas Flemming

but the query is not in two tables.
its two select-where in the same table.

On 30.05.2017 at 17:18, Hick Gunter wrote:

Just like any other join, but with tables in the desired order and the word 
CROSS added

SELECT ... FROM  CROSS JOIN  ...

-----Original Message-----
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Thomas Flemming
Sent: Tuesday, 30 May 2017 18:15
To: sqlite-users@mailinglists.sqlite.org
Subject: Re: [sqlite] SELECT WHERE with RTREE and second condition slow


force it to go the way you want by using "cross join" to force the
ordering of


What would such a "cross join" statement look like?


On 30.05.2017 at 16:38, David Raymond wrote:

It looks like it thinks that using the index on StyleId is going to be
the most beneficial, as opposed to the rtree index. How it compares a
normal index's stat1 to a virtual table's stat1 I don't know. In this
case you can force it to go the way you want by using "cross join" to
force the ordering of

the join.



-Original Message-
From: sqlite-users
[mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of
Thomas Flemming
Sent: Monday, May 29, 2017 9:28 AM
To: sqlite-users@mailinglists.sqlite.org
Subject: Re: [sqlite] SELECT WHERE with RTREE and second condition
slow

Ok, here is a sample to try these queries:

http://files.qvgps.com/0-tom-public/Geonames_World_2017.zip
(825mb, 12 mio records)

Before I change my app-logic to do the styleid-query on the app-side,
I would like to know, if there might be a chance to get this fast on the 
sqlite-side.


very fast, 77 records, 49ms:
SELECT Pois.* FROM Pois_bb, Pois WHERE y0 < -14.8600 AND  y1 > -15.12862
  AND x0 < 30.46203 AND  x1 > 30.00074766
  AND 18 BETWEEN z0 AND z1
  AND Pois_bb.Id = Pois.Id;

Run Time: real 0.109 user 0.00 sys 0.00
selectid|order|from|detail
0|0|0|SCAN TABLE Pois_bb VIRTUAL TABLE INDEX 2:C2E3C0E1B4D5
0|1|1|SEARCH TABLE Pois USING INDEX sqlite_autoindex_Pois_1 (Id=?)


fast, 1.5 mio records, 600ms, (sometimes very fast 60ms??):
SELECT styleid FROM Pois WHERE styleid IN (9,48,73,200,142,31,219);

Run Time: real 0.094 user 0.093601 sys 0.00
selectid|order|from|detail
0|0|0|SEARCH TABLE Pois USING COVERING INDEX Pois_StyleId (StyleId=?)
0|0|0|EXECUTE LIST SUBQUERY 1


very slow: 55 records, 3ms:
SELECT Pois.* FROM Pois_bb, Pois WHERE y0 < -14.8600 AND  y1 > -15.12862
  AND x0 < 30.46203 AND  x1 > 30.00074766
  AND 18 BETWEEN z0 AND z1
  AND Pois_bb.Id = Pois.Id
  AND styleid IN
(9,48,73,200,142,31,219);

Run Time: real 9.422 user 5.132433 sys 4.212027
selectid|order|from|detail
0|0|1|SEARCH TABLE Pois USING INDEX Pois_StyleId (StyleId=?)
0|0|0|EXECUTE LIST SUBQUERY 1
0|1|0|SCAN TABLE Pois_bb VIRTUAL TABLE INDEX 1:


Forcing the order with CROSS JOIN
SELECT Pois.* FROM Pois_bb CROSS JOIN Pois WHERE y0 < -14.8600 AND  y1 > 
-15.12862
  AND x0 < 30.46203 AND  x1 > 30.00074766
  AND 18 BETWEEN z0 AND z1
  AND Pois_bb.Id = Pois.Id
  AND styleid IN
(9,48,73,200,142,31,219);

Run Time: real 0.078 user 0.00 sys 0.00
selectid|order|from|detail
0|0|0|SCAN TABLE Pois_bb VIRTUAL TABLE INDEX 2:C2E3C0E1B4D5
0|1|1|SEARCH TABLE Pois USING INDEX sqlite_autoindex_Pois_1 (Id=?)
0|0|0|EXECUTE LIST SUBQUERY 1


CREATE TABLE Pois(
Id LONG PRIMARY KEY,
Label VARCHAR(50),
Info TEXT,
Lat FLOAT,
Lon FLOAT,
Z FLOAT,
Flags INT,
StyleId INT
);
CREATE INDEX Pois_StyleId ON Pois(StyleId); CREATE VIRTUAL TABLE
Pois_bb USING rtree(
Id LONG PRIMARY KEY,
X0 FLOAT,
X1 FLOAT,
Y0 FLOAT,
Y1 FLOAT,
Z0 FLOAT,
Z1 FLOAT
);
CREATE TABLE IF NOT EXISTS "Lines_bb_node"(nodeno INTEGER PRIMARY KEY,
data BLOB); CREATE TABLE IF NOT EXISTS "Lines_bb_rowid"(rowid INTEGER
PRIMARY KEY, nodeno INTEGER); CREATE TABLE IF NOT EXISTS
"Lines_bb_parent"(nodeno INTEGER PRIMARY KEY, parentnode INTEGER);

sqlite_stat1
tbl|idx|stat
Pois|Pois_StyleId|11421177 16996
Pois|sqlite_autoindex_Pois_1|11421177 1
Pois_bb_rowid||11421177
Pois_bb_node||611106
Pois_bb_parent||611105



--
/
**   Flemming Software Development CC
**   Thomas Flemming
**   PO Box 81244
**   Windhoek, Namibia
**   http://www.quovadis-gps.com
**   mail  t...@qvgps.com
**   +264 (0)81 3329923
**   +49  (0)6182 8492599
***/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users

Re: [sqlite] Importing from single-insert-statement SQL dump is 61 times slower than importing from SQL dump with one statement per row

2017-05-30 Thread Keith Medcalf

I find quite the opposite.  Using a DUMP file to create a database where the 
first test uses the standard dump (which does the load in a single transaction) 
is enormously faster than the second one, where the BEGIN TRANSACTION and 
COMMIT have been commented out, and thus each insert is performed in its own 
transaction.  The structure does have the indexes created while loading ...

>wc -l xmltv.sql
 5425040 xmltv.sql

2017-05-30  09:43   446,700,424 xmltv.sql

With the commands all being loaded in a single transaction:

>timethis sqlite test1.db < xmltv.sql

TimeThis :  Command Line :  sqlite test1.db
TimeThis :  Start Time :  Tue May 30 09:41:12 2017
TimeThis :  End Time :  Tue May 30 09:42:14 2017
TimeThis :  Elapsed Time :  00:01:02.005


With the commands being in their own individual autocommit transactions:
 --- still running after 10 minutes
 --- still running after 20 minutes
 --- gave up after 30 minutes (and it was only 1/100th through inserting all 
the data)


The fact of the matter is, that you are much better running larger transactions 
than smaller ones.  The hardware limits the transaction rate (max 60 
transactions/second on spinning rust -- unless you have lying hardware that 
does not flush properly).  Apparently somewhat limited on a machine with a 
5GB/s SSD as well...since the write rate was hugely less than the channel 
limit, but the CPU was fully consumed.

And just to be clear, putting multiple statements in a transaction does not 
mean they are executed as a single VDBE program.  They are still executed one 
at a time.  The only difference is the number of transactions (and hence the 
commit/flush to disk count).

-- 
˙uʍop-ǝpısdn sı ɹoʇıuoɯ ɹnoʎ 'sıɥʇ pɐǝɹ uɐɔ noʎ ɟı

> -Original Message-
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
> On Behalf Of Sarge Borsch
> Sent: Sunday, 28 May, 2017 04:58
> To: sqlite-users@mailinglists.sqlite.org
> Subject: [sqlite] Importing from single-insert-statement SQL dump is 61
> times slower than importing from SQL dump with one statement per row
> 
> I compared speed of importing (into an empty SQLite DB) from 2 kinds of
> SQL dumps. Data is exactly the same in both cases, and xz-compressed size
> of SQL dump is near 18MB in both cases.
> First SQL dump has single big INSERT statement in single transaction.
> Second SQL dump has one INSERT statement for each row.
> 
> Schema is 1 table with these columns: INTEGER PRIMARY KEY, TEXT x2 (2
> columns), INTEGER x4, TEXT x10
> There is nothing else besides the data, no indexes, etc. in both cases.
> 
> In both cases I am importing from compressed file, using command like
> this:
> time xzdec something.sql.xz | sqlite3 something.db
> 
> Time of import from single-insert SQL dump:
> real  2m13.884s
> user  2m13.791s
> sys   0m1.052s
> 
> Time of import from multiple-insert SQL dump:
> real  0m2.192s
> user  0m3.266s
> sys   0m0.347s
> 
> IMO sqlite needs some optimisation for this case when there’s a huge
> INSERT statement, because the speed difference is enormous.



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SELECT WHERE with RTREE and second condition slow

2017-05-30 Thread Hick Gunter
Just like any other join, but with tables in the desired order and the word 
CROSS added

SELECT ... FROM  CROSS JOIN  ...

-----Original Message-----
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Thomas Flemming
Sent: Tuesday, 30 May 2017 18:15
To: sqlite-users@mailinglists.sqlite.org
Subject: Re: [sqlite] SELECT WHERE with RTREE and second condition slow

> force it to go the way you want by using "cross join" to force the
> ordering of

What would such a "cross join" statement look like?


On 30.05.2017 at 16:38, David Raymond wrote:
> It looks like it thinks that using the index on StyleId is going to be
> the most beneficial, as opposed to the rtree index. How it compares a
> normal index's stat1 to a virtual table's stat1 I don't know. In this
> case you can force it to go the way you want by using "cross join" to
> force the ordering of
the join.
>
>
> -Original Message-
> From: sqlite-users
> [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On Behalf Of
> Thomas Flemming
> Sent: Monday, May 29, 2017 9:28 AM
> To: sqlite-users@mailinglists.sqlite.org
> Subject: Re: [sqlite] SELECT WHERE with RTREE and second condition
> slow
>
> Ok, here is a sample to try these queries:
>
> http://files.qvgps.com/0-tom-public/Geonames_World_2017.zip
> (825mb, 12 mio records)
>
> Before I change my app-logic to do the styleid-query on the app-side,
> I would like to know, if there might be a chance to get this fast on the 
> sqlite-side.
>
>
> very fast, 77 records, 49ms:
> SELECT Pois.* FROM Pois_bb, Pois WHERE y0 < -14.8600 AND  y1 > -15.12862
>  AND x0 < 30.46203 AND  x1 > 30.00074766
>  AND 18 BETWEEN z0 AND z1
>  AND Pois_bb.Id = Pois.Id;
>
> Run Time: real 0.109 user 0.00 sys 0.00
> selectid|order|from|detail
> 0|0|0|SCAN TABLE Pois_bb VIRTUAL TABLE INDEX 2:C2E3C0E1B4D5
> 0|1|1|SEARCH TABLE Pois USING INDEX sqlite_autoindex_Pois_1 (Id=?)
>
>
> fast, 1.5 mio records, 600ms, (sometimes very fast 60ms??):
> SELECT styleid FROM Pois WHERE styleid IN (9,48,73,200,142,31,219);
>
> Run Time: real 0.094 user 0.093601 sys 0.00
> selectid|order|from|detail
> 0|0|0|SEARCH TABLE Pois USING COVERING INDEX Pois_StyleId (StyleId=?)
> 0|0|0|EXECUTE LIST SUBQUERY 1
>
>
> very slow: 55 records, 3ms:
> SELECT Pois.* FROM Pois_bb, Pois WHERE y0 < -14.8600 AND  y1 > -15.12862
>  AND x0 < 30.46203 AND  x1 > 30.00074766
>  AND 18 BETWEEN z0 AND z1
>  AND Pois_bb.Id = Pois.Id
>  AND styleid IN
> (9,48,73,200,142,31,219);
>
> Run Time: real 9.422 user 5.132433 sys 4.212027
> selectid|order|from|detail
> 0|0|1|SEARCH TABLE Pois USING INDEX Pois_StyleId (StyleId=?)
> 0|0|0|EXECUTE LIST SUBQUERY 1
> 0|1|0|SCAN TABLE Pois_bb VIRTUAL TABLE INDEX 1:
>
>
> Forcing the order with CROSS JOIN
> SELECT Pois.* FROM Pois_bb CROSS JOIN Pois WHERE y0 < -14.8600 AND  y1 > 
> -15.12862
>  AND x0 < 30.46203 AND  x1 > 30.00074766
>  AND 18 BETWEEN z0 AND z1
>  AND Pois_bb.Id = Pois.Id
>  AND styleid IN
> (9,48,73,200,142,31,219);
>
> Run Time: real 0.078 user 0.00 sys 0.00
> selectid|order|from|detail
> 0|0|0|SCAN TABLE Pois_bb VIRTUAL TABLE INDEX 2:C2E3C0E1B4D5
> 0|1|1|SEARCH TABLE Pois USING INDEX sqlite_autoindex_Pois_1 (Id=?)
> 0|0|0|EXECUTE LIST SUBQUERY 1
>
>
> CREATE TABLE Pois(
>Id LONG PRIMARY KEY,
>Label VARCHAR(50),
>Info TEXT,
>Lat FLOAT,
>Lon FLOAT,
>Z FLOAT,
>Flags INT,
>StyleId INT
> );
> CREATE INDEX Pois_StyleId ON Pois(StyleId); CREATE VIRTUAL TABLE
> Pois_bb USING rtree(
>Id LONG PRIMARY KEY,
>X0 FLOAT,
>X1 FLOAT,
>Y0 FLOAT,
>Y1 FLOAT,
>Z0 FLOAT,
>Z1 FLOAT
> );
> CREATE TABLE IF NOT EXISTS "Lines_bb_node"(nodeno INTEGER PRIMARY KEY,
> data BLOB); CREATE TABLE IF NOT EXISTS "Lines_bb_rowid"(rowid INTEGER
> PRIMARY KEY, nodeno INTEGER); CREATE TABLE IF NOT EXISTS
> "Lines_bb_parent"(nodeno INTEGER PRIMARY KEY, parentnode INTEGER);
>
> sqlite_stat1
> tbl|idx|stat
> Pois|Pois_StyleId|11421177 16996
> Pois|sqlite_autoindex_Pois_1|11421177 1
> Pois_bb_rowid||11421177
> Pois_bb_node||611106
> Pois_bb_parent||611105
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>

--
/
**   Flemming Software Development CC
**   Thomas Flemming
**   PO Box 81244
**   Windhoek, Namibia
**   http://www.quovadis-gps.com
**   mail  t...@qvgps.com
**   +264 (0)81 3329923
**   +49  (0)6182 8492599
***/

Re: [sqlite] SELECT WHERE with RTREE and second condition slow

2017-05-30 Thread Thomas Flemming

force it to go the way you want by using "cross join" to force the ordering of


What would such a "cross join" statement look like?


On 30.05.2017 at 16:38, David Raymond wrote:

It looks like it thinks that using the index on StyleId is going to be the most 
beneficial, as opposed to the rtree index. How it compares a normal index's stat1 to a 
virtual table's stat1 I don't know. In this case you can force it to go the way you want 
by using "cross join" to force the ordering of

the join.



-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Thomas Flemming
Sent: Monday, May 29, 2017 9:28 AM
To: sqlite-users@mailinglists.sqlite.org
Subject: Re: [sqlite] SELECT WHERE with RTREE and second condition slow

Ok, here is a sample to try these queries:

http://files.qvgps.com/0-tom-public/Geonames_World_2017.zip
(825mb, 12 mio records)

Before I change my app-logic to do the styleid-query on the app-side, I would
like to know, if there might be a chance to get this fast on the sqlite-side.


very fast, 77 records, 49ms:
SELECT Pois.* FROM Pois_bb, Pois WHERE y0 < -14.8600 AND  y1 > -15.12862
 AND x0 < 30.46203 AND  x1 > 30.00074766
 AND 18 BETWEEN z0 AND z1
 AND Pois_bb.Id = Pois.Id;

Run Time: real 0.109 user 0.00 sys 0.00
selectid|order|from|detail
0|0|0|SCAN TABLE Pois_bb VIRTUAL TABLE INDEX 2:C2E3C0E1B4D5
0|1|1|SEARCH TABLE Pois USING INDEX sqlite_autoindex_Pois_1 (Id=?)


fast, 1.5 mio records, 600ms, (sometimes very fast 60ms??):
SELECT styleid FROM Pois WHERE styleid IN (9,48,73,200,142,31,219);

Run Time: real 0.094 user 0.093601 sys 0.00
selectid|order|from|detail
0|0|0|SEARCH TABLE Pois USING COVERING INDEX Pois_StyleId (StyleId=?)
0|0|0|EXECUTE LIST SUBQUERY 1


very slow: 55 records, 3ms:
SELECT Pois.* FROM Pois_bb, Pois WHERE y0 < -14.8600 AND  y1 > -15.12862
 AND x0 < 30.46203 AND  x1 > 30.00074766
 AND 18 BETWEEN z0 AND z1
 AND Pois_bb.Id = Pois.Id
 AND styleid IN (9,48,73,200,142,31,219);

Run Time: real 9.422 user 5.132433 sys 4.212027
selectid|order|from|detail
0|0|1|SEARCH TABLE Pois USING INDEX Pois_StyleId (StyleId=?)
0|0|0|EXECUTE LIST SUBQUERY 1
0|1|0|SCAN TABLE Pois_bb VIRTUAL TABLE INDEX 1:


Forcing the order with CROSS JOIN
SELECT Pois.* FROM Pois_bb CROSS JOIN Pois WHERE y0 < -14.8600 AND  y1 > 
-15.12862
 AND x0 < 30.46203 AND  x1 > 30.00074766
 AND 18 BETWEEN z0 AND z1
 AND Pois_bb.Id = Pois.Id
 AND styleid IN (9,48,73,200,142,31,219);

Run Time: real 0.078 user 0.00 sys 0.00
selectid|order|from|detail
0|0|0|SCAN TABLE Pois_bb VIRTUAL TABLE INDEX 2:C2E3C0E1B4D5
0|1|1|SEARCH TABLE Pois USING INDEX sqlite_autoindex_Pois_1 (Id=?)
0|0|0|EXECUTE LIST SUBQUERY 1


CREATE TABLE Pois(
   Id LONG PRIMARY KEY,
   Label VARCHAR(50),
   Info TEXT,
   Lat FLOAT,
   Lon FLOAT,
   Z FLOAT,
   Flags INT,
   StyleId INT
);
CREATE INDEX Pois_StyleId ON Pois(StyleId);
CREATE VIRTUAL TABLE Pois_bb USING rtree(
   Id LONG PRIMARY KEY,
   X0 FLOAT,
   X1 FLOAT,
   Y0 FLOAT,
   Y1 FLOAT,
   Z0 FLOAT,
   Z1 FLOAT
);
CREATE TABLE IF NOT EXISTS "Lines_bb_node"(nodeno INTEGER PRIMARY KEY, data 
BLOB);
CREATE TABLE IF NOT EXISTS "Lines_bb_rowid"(rowid INTEGER PRIMARY KEY, nodeno 
INTEGER);
CREATE TABLE IF NOT EXISTS "Lines_bb_parent"(nodeno INTEGER PRIMARY KEY, 
parentnode INTEGER);

sqlite_stat1
tbl|idx|stat
Pois|Pois_StyleId|11421177 16996
Pois|sqlite_autoindex_Pois_1|11421177 1
Pois_bb_rowid||11421177
Pois_bb_node||611106
Pois_bb_parent||611105



--
/
**   Flemming Software Development CC
**   Thomas Flemming
**   PO Box 81244
**   Windhoek, Namibia
**   http://www.quovadis-gps.com
**   mail  t...@qvgps.com
**   +264 (0)81 3329923
**   +49  (0)6182 8492599
***/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SELECT WHERE with RTREE and second condition slow

2017-05-30 Thread David Raymond
It looks like it thinks that using the index on StyleId is going to be the most 
beneficial, as opposed to the rtree index. How it compares a normal index's 
stat1 to a virtual table's stat1 I don't know. In this case you can force it to 
go the way you want by using "cross join" to force the ordering of the join.
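
In generic form (hypothetical tables a and b, with a meant to be the outer loop), the hint looks like this:

SELECT b.* FROM a CROSS JOIN b WHERE a.id = b.id;
-- CROSS JOIN tells the planner not to reorder the join: a stays the outer
-- loop and b the inner loop, exactly as written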


-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Thomas Flemming
Sent: Monday, May 29, 2017 9:28 AM
To: sqlite-users@mailinglists.sqlite.org
Subject: Re: [sqlite] SELECT WHERE with RTREE and second condition slow

Ok, here is a sample to try these queries:

http://files.qvgps.com/0-tom-public/Geonames_World_2017.zip
(825mb, 12 mio records)

Before I change my app-logic to do the styleid-query on the app-side, I would 
like to know, if there might be a chance to get this fast on the sqlite-side.


very fast, 77 records, 49ms:
SELECT Pois.* FROM Pois_bb, Pois WHERE y0 < -14.8600 AND  y1 > -15.12862
AND x0 < 30.46203 AND  x1 > 30.00074766
AND 18 BETWEEN z0 AND z1
AND Pois_bb.Id = Pois.Id;

Run Time: real 0.109 user 0.00 sys 0.00
selectid|order|from|detail
0|0|0|SCAN TABLE Pois_bb VIRTUAL TABLE INDEX 2:C2E3C0E1B4D5
0|1|1|SEARCH TABLE Pois USING INDEX sqlite_autoindex_Pois_1 (Id=?)


fast, 1.5 mio records, 600ms, (sometimes very fast 60ms??):
SELECT styleid FROM Pois WHERE styleid IN (9,48,73,200,142,31,219);

Run Time: real 0.094 user 0.093601 sys 0.00
selectid|order|from|detail
0|0|0|SEARCH TABLE Pois USING COVERING INDEX Pois_StyleId (StyleId=?)
0|0|0|EXECUTE LIST SUBQUERY 1


very slow: 55 records, 3ms:
SELECT Pois.* FROM Pois_bb, Pois WHERE y0 < -14.8600 AND  y1 > -15.12862
AND x0 < 30.46203 AND  x1 > 30.00074766
AND 18 BETWEEN z0 AND z1
AND Pois_bb.Id = Pois.Id
AND styleid IN (9,48,73,200,142,31,219);

Run Time: real 9.422 user 5.132433 sys 4.212027
selectid|order|from|detail
0|0|1|SEARCH TABLE Pois USING INDEX Pois_StyleId (StyleId=?)
0|0|0|EXECUTE LIST SUBQUERY 1
0|1|0|SCAN TABLE Pois_bb VIRTUAL TABLE INDEX 1:


Forcing the order with CROSS JOIN
SELECT Pois.* FROM Pois_bb CROSS JOIN Pois WHERE y0 < -14.8600 AND  y1 > 
-15.12862
AND x0 < 30.46203 AND  x1 > 30.00074766
AND 18 BETWEEN z0 AND z1
AND Pois_bb.Id = Pois.Id
AND styleid IN (9,48,73,200,142,31,219);

Run Time: real 0.078 user 0.00 sys 0.00
selectid|order|from|detail
0|0|0|SCAN TABLE Pois_bb VIRTUAL TABLE INDEX 2:C2E3C0E1B4D5
0|1|1|SEARCH TABLE Pois USING INDEX sqlite_autoindex_Pois_1 (Id=?)
0|0|0|EXECUTE LIST SUBQUERY 1


CREATE TABLE Pois(
  Id LONG PRIMARY KEY,
  Label VARCHAR(50),
  Info TEXT,
  Lat FLOAT,
  Lon FLOAT,
  Z FLOAT,
  Flags INT,
  StyleId INT
);
CREATE INDEX Pois_StyleId ON Pois(StyleId);
CREATE VIRTUAL TABLE Pois_bb USING rtree(
  Id LONG PRIMARY KEY,
  X0 FLOAT,
  X1 FLOAT,
  Y0 FLOAT,
  Y1 FLOAT,
  Z0 FLOAT,
  Z1 FLOAT
);
CREATE TABLE IF NOT EXISTS "Lines_bb_node"(nodeno INTEGER PRIMARY KEY, data 
BLOB);
CREATE TABLE IF NOT EXISTS "Lines_bb_rowid"(rowid INTEGER PRIMARY KEY, nodeno 
INTEGER);
CREATE TABLE IF NOT EXISTS "Lines_bb_parent"(nodeno INTEGER PRIMARY KEY, 
parentnode INTEGER);

sqlite_stat1
tbl|idx|stat
Pois|Pois_StyleId|11421177 16996
Pois|sqlite_autoindex_Pois_1|11421177 1
Pois_bb_rowid||11421177
Pois_bb_node||611106
Pois_bb_parent||611105
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SELECT WHERE with RTREE and second condition slow

2017-05-30 Thread J. King
On May 30, 2017 10:07:45 AM EDT, Thomas Flemming  wrote:

>Style.Id doesn't need to be LONG, you're right. I changed it but it
>doesn't 
>make a difference.
>Pois.Id needs to be LONG because the source for this column really 
>contains 64-bit values 
Integers in SQLite are of variable size; if an 8-byte size is required, it will 
scale accordingly. Specifying LONG will not yield an integer affinity like 
specifying INTEGER would, and for primary keys is not as efficient because the 
column is not an alias for rowid if you specify LONG. 

In short, you should always use INTEGER PRIMARY KEY if the column is whole 
numbers, regardless of size. 
-- 
J. King
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SELECT WHERE with RTREE and second condition slow

2017-05-30 Thread Joseph L. Casale
-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On
Behalf Of Thomas Flemming
Sent: Tuesday, May 30, 2017 8:08 AM
To: sqlite-users@mailinglists.sqlite.org
Subject: Re: [sqlite] SELECT WHERE with RTREE and second condition slow

>  > Do you know which SQLite version is being used by SQLite Expert
>  > Professional 3.5?
> sqlite 3.10.0
> 
> I tried SQLite Expert Professional 4, using sqlite 3.18.0, but it's just as 
> slow.

On that matter, if you place another SQLite dll in the installation folder with 
a
new name, it becomes available in Tools->Options->SQLite Library as an alternate
choice over the shipped version.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SELECT WHERE with RTREE and second condition slow

2017-05-30 Thread Hick Gunter
INTEGER and LONG can both store 64 bits in SQLite. The difference is that 
"INTEGER PRIMARY KEY" makes the column an alias for the rowid, whereas "LONG 
PRIMARY KEY" defines a second, possibly redundant index.

-----Original Message-----
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Thomas Flemming
Sent: Tuesday, 30 May 2017 16:08
To: sqlite-users@mailinglists.sqlite.org
Subject: Re: [sqlite] SELECT WHERE with RTREE and second condition slow

 > Do you know which SQLite version is being used by SQLite Expert  > 
 > Professional 3.5?
sqlite 3.10.0

I tried SQLite Expert Professional 4, using sqlite 3.18.0, but it's just as 
slow.

Style.Id doesn't need to be LONG, you're right. I changed it but it doesn't 
make a difference.
Pois.Id needs to be LONG because the source for this column really contains 
64-bit values (osm-ids).

Tom


On 30.05.2017 at 13:29, Wolfgang Enzinger wrote:
> On Mon, 29 May 2017 14:27:56 +0100, Thomas Flemming wrote:
>
>> Ok, here is a sample to try these queries:
>>
>> http://files.qvgps.com/0-tom-public/Geonames_World_2017.zip
>> (825mb, 12 mio records)
>
> Just a few quick observations ...
>
> First, I would replace all column declarations like
>
> LONG PRIMARY KEY
>
> to
>
> INTEGER PRIMARY KEY
>
> This can make a huge difference AFAIK.
>
>> Before I change my app-logic to do the styleid-query on the app-side,
>> I would like to know, if there might be a chance to get this fast on the 
>> sqlite-side.
>>
>>
>> very fast, 77 records, 49ms:
>> SELECT Pois.* FROM Pois_bb, Pois WHERE y0 < -14.8600 AND  y1 > -15.12862
>>  AND x0 < 30.46203 AND  x1 > 30.00074766
>>  AND 18 BETWEEN z0 AND z1
>>  AND Pois_bb.Id = Pois.Id;
>>
>>
>> fast, 1.5 mio records, 600ms, (sometimes very fast 60ms??):
>> SELECT styleid FROM Pois WHERE styleid IN (9,48,73,200,142,31,219);
>>
>>
>> very slow: 55 records, 3ms:
>> SELECT Pois.* FROM Pois_bb, Pois WHERE y0 < -14.8600 AND  y1 > -15.12862
>>  AND x0 < 30.46203 AND  x1 > 30.00074766
>>  AND 18 BETWEEN z0 AND z1
>>  AND Pois_bb.Id = Pois.Id
>>  AND styleid IN
>> (9,48,73,200,142,31,219);
>
> Do you know which SQLite version is being used by SQLite Expert
> Professional 3.5? Just wondering because my quick tests on this reveal
> different query plans, depending on the presence of a sqlite_stat4
> table (absent in your database) and the SQLite version (I ran some
> quick tests using SQLite 3.13 and 3.18). I'll have to dig deeper into
> this in the next days though.
>
> Wolfgang
>
>

--
/
**   Flemming Software Development CC
**   Thomas Flemming
**   PO Box 81244
**   Windhoek, Namibia
**   http://www.quovadis-gps.com
**   mail  t...@qvgps.com
**   +264 (0)81 3329923
**   +49  (0)6182 8492599
***/


___
 Gunter Hick
Software Engineer
Scientific Games International GmbH
FN 157284 a, HG Wien
Klitschgasse 2-4, A-1130 Vienna, Austria
Tel: +43 1 80100 0
E-Mail: h...@scigames.at

This communication (including any attachments) is intended for the use of the 
intended recipient(s) only and may contain information that is confidential, 
privileged or legally protected. Any unauthorized use or dissemination of this 
communication is strictly prohibited. If you have received this communication 
in error, please immediately notify the sender by return e-mail message and 
delete all copies of the original communication. Thank you for your cooperation.


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SELECT WHERE with RTREE and second condition slow

2017-05-30 Thread Thomas Flemming

> Do you know which SQLite version is being used by SQLite Expert
> Professional 3.5?
sqlite 3.10.0

I tried SQLite Expert Professional 4, using sqlite 3.18.0, but it's just as 
slow.

Style.Id doesn't need to be LONG, you're right. I changed it but it doesn't 
make a difference.
Pois.Id needs to be LONG because the source for this column really 
contains 64-bit values (osm-ids).


Tom


On 30.05.2017 at 13:29, Wolfgang Enzinger wrote:

On Mon, 29 May 2017 14:27:56 +0100, Thomas Flemming wrote:


Ok, here is a sample to try these queries:

http://files.qvgps.com/0-tom-public/Geonames_World_2017.zip
(825mb, 12 mio records)


Just a few quick observations ...

First, I would replace all column declarations like

LONG PRIMARY KEY

to

INTEGER PRIMARY KEY

This can make a huge difference AFAIK.


Before I change my app-logic to do the styleid-query on the app-side, I would
like to know, if there might be a chance to get this fast on the sqlite-side.


very fast, 77 records, 49ms:
SELECT Pois.* FROM Pois_bb, Pois WHERE y0 < -14.8600 AND  y1 > -15.12862
 AND x0 < 30.46203 AND  x1 > 30.00074766
 AND 18 BETWEEN z0 AND z1
 AND Pois_bb.Id = Pois.Id;


fast, 1.5 mio records, 600ms, (sometimes very fast 60ms??):
SELECT styleid FROM Pois WHERE styleid IN (9,48,73,200,142,31,219);


very slow: 55 records, 3ms:
SELECT Pois.* FROM Pois_bb, Pois WHERE y0 < -14.8600 AND  y1 > -15.12862
 AND x0 < 30.46203 AND  x1 > 30.00074766
 AND 18 BETWEEN z0 AND z1
 AND Pois_bb.Id = Pois.Id
 AND styleid IN (9,48,73,200,142,31,219);


Do you know which SQLite version is being used by SQLite Expert
Professional 3.5? Just wondering because my quick tests on this reveal
different query plans, depending on the presence of a sqlite_stat4 table
(absent in your database) and the SQLite version (I ran some quick tests
using SQLite 3.13 and 3.18). I'll have to dig deeper into this in the next
days though.

Wolfgang




--
/
**   Flemming Software Development CC
**   Thomas Flemming
**   PO Box 81244
**   Windhoek, Namibia
**   http://www.quovadis-gps.com
**   mail  t...@qvgps.com
**   +264 (0)81 3329923
**   +49  (0)6182 8492599
***/
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SELECT WHERE with RTREE and second condition slow

2017-05-30 Thread Wolfgang Enzinger
On Mon, 29 May 2017 14:27:56 +0100, Thomas Flemming wrote:

> Ok, here is a sample to try these queries:
> 
> http://files.qvgps.com/0-tom-public/Geonames_World_2017.zip
> (825mb, 12 mio records)

Just a few quick observations ...

First, I would replace all column declarations like

LONG PRIMARY KEY

to

INTEGER PRIMARY KEY

This can make a huge difference AFAIK.

> Before I change my app-logic to do the styleid-query on the app-side, I would 
> like to know, if there might be a chance to get this fast on the sqlite-side.
> 
> 
> very fast, 77 records, 49ms:
> SELECT Pois.* FROM Pois_bb, Pois WHERE y0 < -14.8600 AND  y1 > -15.12862
> AND x0 < 30.46203 AND  x1 > 30.00074766
> AND 18 BETWEEN z0 AND z1
> AND Pois_bb.Id = Pois.Id;
> 
> 
> fast, 1.5 mio records, 600ms, (sometimes very fast 60ms??):
> SELECT styleid FROM Pois WHERE styleid IN (9,48,73,200,142,31,219);
> 
> 
> very slow: 55 records, 3ms:
> SELECT Pois.* FROM Pois_bb, Pois WHERE y0 < -14.8600 AND  y1 > -15.12862
> AND x0 < 30.46203 AND  x1 > 30.00074766
> AND 18 BETWEEN z0 AND z1
> AND Pois_bb.Id = Pois.Id
> AND styleid IN (9,48,73,200,142,31,219);

Do you know which SQLite version is being used by SQLite Expert
Professional 3.5? Just wondering because my quick tests on this reveal
different query plans, depending on the presence of a sqlite_stat4 table
(absent in your database) and the SQLite version (I ran some quick tests
using SQLite 3.13 and 3.18). I'll have to dig deeper into this in the next
days though.

Wolfgang

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Importing from single-insert-statement SQL dump is 61 times slower than importing from SQL dump with one statement per row

2017-05-30 Thread Hick Gunter
If you stuff all 18MB of your data into a single INSERT, then SQLite will need 
to generate a single program that contains all 18MB of your data (plus code to 
build rows out of that). This will put a heavy strain on memory requirements 
and offset any speed you hope to gain.

The SOP is to put many (1000 magnitude) INSERT statements into one transaction 
to save disk IO on commit.
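
A minimal sketch of that batching pattern, assuming a hypothetical table t:

BEGIN;
INSERT INTO t(id, v) VALUES (1, 'a');
INSERT INTO t(id, v) VALUES (2, 'b');
-- ... on the order of a thousand INSERTs per batch ...
COMMIT;
-- then BEGIN the next batch; each COMMIT costs one flush to disk, instead of
-- one flush per row as in autocommit mode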

-----Original Message-----
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Sarge Borsch
Sent: Sunday, 28 May 2017 12:58
To: sqlite-users@mailinglists.sqlite.org
Subject: [sqlite] Importing from single-insert-statement SQL dump is 61 times 
slower than importing from SQL dump with one statement per row

I compared speed of importing (into an empty SQLite DB) from 2 kinds of SQL 
dumps. Data is exactly the same in both cases, and xz-compressed size of SQL 
dump is near 18MB in both cases.
First SQL dump has single big INSERT statement in single transaction.
Second SQL dump has one INSERT statement for each row.

Schema is 1 table with these columns: INTEGER PRIMARY KEY, TEXT x2 (2 columns), 
INTEGER x4, TEXT x10 There is nothing else besides the data, no indexes, etc. 
in both cases.

In both cases I am importing from compressed file, using command like this:
time xzdec something.sql.xz | sqlite3 something.db

Time of import from single-insert SQL dump:
real2m13.884s
user2m13.791s
sys 0m1.052s

Time of import from multiple-insert SQL dump:
real0m2.192s
user0m3.266s
sys 0m0.347s

IMO sqlite needs some optimisation for this case when there’s a huge INSERT 
statement, because the speed difference is enormous.


___
 Gunter Hick
Software Engineer
Scientific Games International GmbH
FN 157284 a, HG Wien
Klitschgasse 2-4, A-1130 Vienna, Austria
Tel: +43 1 80100 0
E-Mail: h...@scigames.at

This communication (including any attachments) is intended for the use of the 
intended recipient(s) only and may contain information that is confidential, 
privileged or legally protected. Any unauthorized use or dissemination of this 
communication is strictly prohibited. If you have received this communication 
in error, please immediately notify the sender by return e-mail message and 
delete all copies of the original communication. Thank you for your cooperation.


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sqlite Linux support

2017-05-30 Thread Richard Hipp
On 5/29/17, Radhakant Kumar  wrote:
> if I get same version on Linux
> as Precompiled binaries it might help me.

That is what the "Precompiled Binaries For Linux" on the
https://sqlite.org/download.html page is for.
-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] 24hrs Formate Issue

2017-05-30 Thread Richard Hipp
On 5/30/17, rajesh mekala  wrote:
>
> Hi all,
>
>
>  This is Rajesh. I am using SQLITE db. Actually in my database i have
> one column.i want to convert my column value am/pm to 24hrs formate.
>
>
> Ex:
>
> Step 1:  I have one column.
>Column name: Ts
>  Column Have this value :2016-09-19 08:56:00 PM.
>
> My Query: strftime('%m/%d/%Y',trim(Ts,'AM/PM') ) as T.
>
> when i am running this query i get this output
>
> Output: 08:56:00.
>
> Step 2:
>
> i don't want output like  08:56:00.because end user don't know whether
> 08:56:00 Am or Pm.
>

Perhaps convert your data to 24-hour format like this:

UPDATE yourtable
  SET Ts = CASE WHEN Ts LIKE '%PM'
  THEN datetime(trim(Ts,'PM '), '+12 hours')
  ELSE datetime(trim(Ts,'AM ')) END;
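
Once Ts is stored in 24-hour ISO-8601 form, the time alone can be pulled out with strftime; a small sketch, assuming the table really is named yourtable:

SELECT strftime('%H:%M:%S', Ts) AS T FROM yourtable;   -- e.g. '20:56:00'

If Ts can contain 12:xx:xx AM or 12:xx:xx PM values, those would need their own CASE branches, since 12-hour noon and midnight do not map to a simple +12 hours shift.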

-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sqlite Linux support

2017-05-30 Thread Clemens Ladisch
Radhakant Kumar wrote:
> On windows i got 3.18.0 and on Linux 3.6.20.

Which Linux distribution are you using?


Regards,
Clemens
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Importing from single-insert-statement SQL dump is 61 times slower than importing from SQL dump with one statement per row

2017-05-30 Thread Clemens Ladisch
Sarge Borsch wrote:
> time xzdec something.sql.xz | sqlite3 something.db

This measures only xzdec; it does not catch anything that sqlite3 does
after xzdec has finished and closed the pipe.

> IMO sqlite needs some optimisation for this case when there’s a huge
> INSERT statement, because the speed difference is enormous.

The huge statement must be completely parsed, and compiled into a huge
VDBE program.  I don't see how SQLite could do anything different.
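
The effect is easy to see on a toy case (hypothetical table t); the opcode listing grows with every row in the VALUES list:

CREATE TABLE t(a INTEGER PRIMARY KEY, b TEXT);
EXPLAIN INSERT INTO t(a, b) VALUES (1,'x'), (2,'y'), (3,'z');
-- each additional row contributes its own opcodes, so a dump-sized VALUES
-- list must be parsed and compiled into one enormous program before the
-- first row is written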


Regards,
Clemens
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] 24hrs Formate Issue

2017-05-30 Thread rajesh mekala

Hi all,


 This is Rajesh. I am using SQLITE db. Actually in my database I have one 
column. I want to convert my column value from AM/PM to 24-hour format.


Ex:

Step 1:  I have one column.
   Column name: Ts
   Column has this value: 2016-09-19 08:56:00 PM.

My Query: strftime('%m/%d/%Y',trim(Ts,'AM/PM') ) as T.

When I run this query I get this output:

Output: 08:56:00.

Step 2:

I don't want output like 08:56:00, because the end user doesn't know whether 
08:56:00 is AM or PM.

So

I want it like this: 20:56:00 (I want only the time, in 24-hour format).

Step 3:
Is this possible or not? If possible, how?


Step 4:
Is there any way to get only the time from my column?

Column name: Ts
Column has this value: 2016-09-19 08:56:00 PM.

Please send a reply; it is a priority issue.

Thanks & Regards
M.Rajesh

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Sqlite Linux support

2017-05-30 Thread Radhakant Kumar
Hi ,

I am creating an application for Windows and Linux. For the Windows system I
have downloaded the precompiled binaries for the command line shell and it is
serving my purpose.
I have done the same for Linux, but it was not able to handle the data
validation (joins and grouping) on the same amount of data as on Windows. The
Linux version is older than the Windows one:
on Windows I got 3.18.0 and on Linux 3.6.20. If I get the same version on Linux
as precompiled binaries it might help me. Please help me to resolve my issue.

Thanks,
Radhakant
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Importing from single-insert-statement SQL dump is 61 times slower than importing from SQL dump with one statement per row

2017-05-30 Thread Sarge Borsch
I compared speed of importing (into an empty SQLite DB) from 2 kinds of SQL 
dumps. Data is exactly the same in both cases, and xz-compressed size of SQL 
dump is near 18MB in both cases.
First SQL dump has single big INSERT statement in single transaction.
Second SQL dump has one INSERT statement for each row.

Schema is 1 table with these columns: INTEGER PRIMARY KEY, TEXT x2 (2 columns), 
INTEGER x4, TEXT x10
There is nothing else besides the data, no indexes, etc. in both cases.

In both cases I am importing from compressed file, using command like this:
time xzdec something.sql.xz | sqlite3 something.db

Time of import from single-insert SQL dump:
real2m13.884s
user2m13.791s
sys 0m1.052s

Time of import from multiple-insert SQL dump:
real0m2.192s
user0m3.266s
sys 0m0.347s

IMO sqlite needs some optimisation for this case when there’s a huge INSERT 
statement, because the speed difference is enormous.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users