Re: [sqlite] blob + rowID Insert question

2014-08-28 Thread Carlos Ferreira
Thanks!!


-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Igor Tandetnik
Sent: Friday, 29 August 2014 00:20
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] blob + rowID Insert question

On 8/28/2014 2:18 PM, Carlos Ferreira wrote:
> In the next line:
>
> UPDATE myTable SET theBlob = <whatever> WHERE id = <id>
>
> The "whatever" is a long string containing the data?

The "whatever" should be ? (question mark) - a parameter placeholder
( could be another one). Prepare the statement, bind the data to the
parameter using sqlite3_bind_blob (which you have already discovered), then
execute the statement.
--
Igor Tandetnik
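The prepare/bind/execute sequence Igor describes maps to sqlite3_prepare_v2(), sqlite3_bind_blob(), and sqlite3_step() in the C API. A compact illustration of the same idea using Python's stdlib sqlite3 module, with the table and column names from the example above:

```python
import sqlite3

# In-memory database with the table from the example above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (id INTEGER PRIMARY KEY, theBlob BLOB)")
conn.execute("INSERT INTO myTable (id, theBlob) VALUES (1, ?)", (b"old data",))

# The blob never appears in the SQL text: the ? placeholder is bound to raw
# bytes, which is what sqlite3_bind_blob() does in the C API.
new_blob = bytes(range(256)) * 4  # 1 KiB of arbitrary binary data
conn.execute("UPDATE myTable SET theBlob = ? WHERE id = ?", (new_blob, 1))

stored = conn.execute("SELECT theBlob FROM myTable WHERE id = 1").fetchone()[0]
print(len(stored))  # 1024
```

No quoting, serialization, or encoding of the blob is needed; the bytes are passed through unchanged.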

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] blob + rowID Insert question

2014-08-28 Thread Carlos Ferreira
One other question:

I use sqlite3_bind_blob to add a new blob to a given record. From this point
on I know how to change the BLOB (but not its size) using the incremental
BLOB direct-access functions.

Using your suggestion of the UPDATE statement, and assuming I have a block
of memory with pointer p and size s (different from the previous size) that
I want to use as the data of my new blob, do you suggest I create a very
long string as the statement?

In the next line:

UPDATE myTable SET theBlob = <whatever> WHERE id = <id>

The "whatever" is a long string containing the data? How should I serialize
it? What kind of encoding?

Thanks

Carlos


-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Simon Slavin
Sent: Thursday, 28 August 2014 19:08
To: General Discussion of SQLite Database
Subject: Re: [sqlite] blob + rowID Insert question


On 28 Aug 2014, at 7:06pm, Carlos Ferreira <car...@csiberkeley.com> wrote:

> Is there any way to replace a BLOB for a given ROW ID?

Use the UPDATE command:

UPDATE myTable SET theBlob = <whatever> WHERE id = <id>

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] blob + rowID Insert question

2014-08-28 Thread Carlos Ferreira
Thanks :) :)


-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Simon Slavin
Sent: Thursday, 28 August 2014 19:08
To: General Discussion of SQLite Database
Subject: Re: [sqlite] blob + rowID Insert question


On 28 Aug 2014, at 7:06pm, Carlos Ferreira <car...@csiberkeley.com> wrote:

> Is there any way to replace a BLOB for a given ROW ID?

Use the UPDATE command:

UPDATE myTable SET theBlob = <whatever> WHERE id = <id>

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



[sqlite] blob + rowID Insert question

2014-08-28 Thread Carlos Ferreira
Hi All,

 

 

I have a table with one unique column of the type BLOB.

 

Because I am using the direct SQLite functions to read and write BLOBs, the
access is made by referencing the ROW IDs.

 

Is there any way to replace a BLOB for a given ROW ID? If I perform DELETE
and INSERT I am not sure the ROW IDs will stay sequential, and I cannot call
VACUUM for performance reasons in the application.

 

Thanks

 

Carlos
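For the question above: an UPDATE rewrites the value while leaving the rowid untouched, so there is no renumbering and no need for DELETE + INSERT or VACUUM. A small sketch with Python's stdlib sqlite3 (hypothetical table name):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A table whose only payload column is a BLOB, addressed by rowid.
conn.execute("CREATE TABLE blobs (data BLOB)")
cur = conn.execute("INSERT INTO blobs (data) VALUES (?)", (b"first",))
rid = cur.lastrowid

# UPDATE replaces the cell value; the rowid is unchanged, unlike
# DELETE + INSERT, which may assign a new rowid.
conn.execute("UPDATE blobs SET data = ? WHERE rowid = ?",
             (b"replacement of a different size", rid))

row = conn.execute("SELECT rowid, data FROM blobs").fetchone()
print(row[0] == rid)  # True
```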

 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sqlite RTree nearest neighbour

2014-08-21 Thread Carlos Ferreira
I am not an expert in SQLite's R-tree, but it seems that to solve a nearest
neighbour query you need to search not only the objects in the leaf
containing the object you are testing, but also some adjacent leaves around it.

Another option would be to search for objects inside a box or sphere centered
on the object, starting with a box small enough to contain nothing but the
object itself, and increasing its size until you get one or more objects
inside... From there you would just have to select the closest one...

Probably there is some much easier way that I am not familiar with...
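The expanding-box idea in the second paragraph can be sketched as follows, shown against a plain x/y table with Python's stdlib sqlite3 for portability (the same query shape works against an R*Tree virtual table; names are invented). Note the box must keep growing until the best candidate's distance fits inside it, since a point in the box can still be farther than one just outside a corner:

```python
import math
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE points (id INTEGER PRIMARY KEY, x REAL, y REAL)")
conn.executemany("INSERT INTO points (x, y) VALUES (?, ?)",
                 [(0.0, 0.0), (3.0, 4.0), (10.0, 10.0)])

def nearest(qx, qy, start=1.0, factor=2.0):
    """Grow a centered box until the closest candidate provably lies inside it.

    Assumes the table contains at least one point.
    """
    half = start
    while True:
        rows = conn.execute(
            "SELECT id, x, y FROM points "
            "WHERE x BETWEEN ? AND ? AND y BETWEEN ? AND ?",
            (qx - half, qx + half, qy - half, qy + half)).fetchall()
        if rows:
            best = min(rows, key=lambda r: math.hypot(r[1] - qx, r[2] - qy))
            d = math.hypot(best[1] - qx, best[2] - qy)
            # Only accept once d <= half: every point outside the box is then
            # strictly farther than d, so 'best' is the true nearest neighbour.
            if d <= half:
                return best[0]
        half *= factor

print(nearest(2.0, 3.0))  # 2  (the point at (3, 4))
```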


-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Simon Slavin
Sent: Thursday, 21 August 2014 23:09
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Sqlite RTree nearest neighbour


On 21 Aug 2014, at 10:32pm, skywind mailing lists 
wrote:

> does anybody have any experience with implementing a nearest neighbor
search using SQLite's RTree functionality? Is a nearest neighbor search
possible?

How much have you read ?  Are you familiar with SpatiaLite ?  Have you tried
implementing Nearest Neighbour without RTree ?

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] Vacuum command fails

2014-07-18 Thread Carlos Ferreira
Hi,

This is on a standard Windows 7 64-bit laptop.

I am not using a shell command. I am using the win32/win64 precompiled DLLs.

I will check PRAGMA integrity_check...

One thing I noticed before is that the maximum size for a blob inside a
record field is much smaller than I imagined.

If I use blobs of more than 200 MB, things go wrong when trying to read them
back using the direct blob functions.

I am going to check carefully whether there is any other limit I am hitting,
because in this case the only difference between the database that fails and
the database that works is that the working database has much smaller
blobs... and approximately the same number of records.
 
Regards.

Carlos


-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Simon Slavin
Sent: Friday, 18 July 2014 16:34
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Vacuum command fails


On 18 Jul 2014, at 4:11pm, Carlos Ferreira <car...@csiberkeley.com> wrote:

> I also have a problem with VACUUM. (both 32 and 64 bits)
> 
> Whenever I deal with small databases, it works fine, but as soon as 
> the DB is more than 200 MB, the VACUUM command bails out with code 21.
>   or 

Is this on a standard type of computer, or in a small device ?

If you use the VACUUM command from the SQLite shell tool (copy the database
onto a standard computer if necessary), rather than your own application, do
you get the same result ?

Does your database show any corruption when you use "PRAGMA
integrity_check;" ?

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] Vacuum command fails

2014-07-18 Thread Carlos Ferreira
Hi,

I also have a problem with VACUUM. (both 32 and 64 bits)

Whenever I deal with small databases, it works fine, but as soon as the DB
is more than 200 MB, the VACUUM command bails out with code 21.

  or 

Any idea?

I can make my own copy and rename it... that is probably what VACUUM does...
but using VACUUM would be very nice if it worked OK...

Thanks

Carlos



-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of RSmith
Sent: Friday, 18 July 2014 15:41
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Vacuum command fails


On 2014/07/18 16:30, Nelson, Erik - 2 wrote:
> veeresh kumar wrote:
>
>   
>> Now I am running the application on D: Drive (it has 841 GB free
>>   space) and C: drive has space (333 GB free). It looks like VACUUM 
>> uses C:drive space ??? Also command seems to be working fine...
>>
> Maybe I'm missing something, but I felt like this was pretty clearly
answered before.
>
> PRAGMA temp_store_directory returns and controls the directory that is
used for temp files.  You can query it, and it will  probably resolve to
your C: drive, which is probably where your TEMP or TMP environment
variables point.
>
> If your temp store directory doesn't have enough space, the VACUUM will
fail.

And to add: you can change this by changing the TEMP or TMP setting in your
OS. It is not up to SQLite to decide where your OS intends temp files to be,
but it is changeable.
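VACUUM rebuilds the database in a temporary file before swapping it in, so the temp location needs roughly as much free space as the database itself. A sketch of steering that location from a program, using Python's stdlib sqlite3 for illustration (on Unix builds SQLite consults SQLITE_TMPDIR, then TMPDIR; on Windows it follows the process TEMP/TMP settings discussed above):

```python
import os
import sqlite3
import tempfile

# Point SQLite's temp files at a directory known to have enough free space.
scratch = tempfile.mkdtemp()
os.environ["SQLITE_TMPDIR"] = scratch  # Unix builds; TEMP/TMP on Windows

db_path = os.path.join(scratch, "demo.db")
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE t (data BLOB)")
conn.executemany("INSERT INTO t VALUES (?)", [(b"x" * 1024,)] * 100)
conn.execute("DELETE FROM t")  # leave free pages for VACUUM to reclaim
conn.commit()

conn.execute("VACUUM")  # rebuilds the file, using temp storage for the copy
print(os.path.getsize(db_path) > 0)  # True
```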



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



[sqlite] Update a BLOB

2014-06-23 Thread Carlos Ferreira
Hello,

One question.


To access Blobs I use the direct access BLOB api from SQLite.

However, what is the best way to update a record that contains a BLOB, when
the only goal is to update the BLOB?

The BLOB may shrink or grow... and that is the reason why I cannot use the
incremental BLOB write alone...

It does not seem reasonable to create a statement with the BLOB spelled out as a string...

So what is the best way to update a record with a BLOB?

Thanks

Carlos
 


-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Carlos Ferreira
Sent: Monday, 23 June 2014 15:52
To: 'General Discussion of SQLite Database'
Subject: [sqlite] SQLite and BLOBs

Hello,

I found this reference in this link:

http://stackoverflow.com/questions/676524/how-to-update-piecewise-a-blob-in-sqlite


"
This is not directly an answer to your question, but I have some experience 
using random access for (big) blobs in SQLite, and I advise you against using 
it, if you can. Here's why:

Blobs break the SQL query format entirely. If your blob data needs any kind of 
processing, it will certainly at some point need filtering. Whatever mechanism 
you have in place to deal with filtering in SQL will be useless in this regard.

Dealing with binary blobs wrapped in databases opposed to binary data in raw 
files limits your options. You will not be able to randomly read and write to 
data at the same time from multiple processes, which is possible with files. 
You can't use any tool that deals with this data and provides only a file I/O 
interface. You can't truncate or resize the blob. Files are simply a lot more 
versatile.

It may seem convenient to have everything contained within one file, as it 
simplifies backup and transfer, but the pain of working with blobs is simply 
not worth it.

So as your attorney, I advise you to write your blobs as raw files to the file 
system, and merely store a reference to the filename in your database. If your 
blobs are rather small and guaranteed not to grow, forget my advice.
"

Do you agree with this information?

Regards.

Carlos



-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Dan Kennedy
Sent: Monday, 23 June 2014 15:26
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Planner chooses incorrect index

On 06/23/2014 05:48 AM, João Ramos wrote:
> Here you go:
>
> sqlite_stat1 (before - good planning)
> HistoryEntry idx_HistoryEntry_uri_historyOrder 14992 44 5 HistoryEntry 
> idx_HistoryEntry_historyOrder_historyTimestamp 14992 6 2 HistoryEntry 
> idx_HistoryEntry_sourceType_sourceId 14992 2999 2
>
> sqlite_stat1 (after - bad planning)
> HistoryEntry idx_HistoryEntry_uri_historyOrder 15492 45 5 HistoryEntry 
> idx_HistoryEntry_historyOrder_historyTimestamp 15492 5 2 HistoryEntry 
> idx_HistoryEntry_sourceType_sourceId 15492 3099 2

Unfortunately the last column of the sqlite_stat4 data is missing, likely 
because it contains embedded 0x00 bytes. And without the
sqlite_stat4 data, it seems that SQLite picks the "good" query plan in either 
case.

Can you dump the sqlite_stat1 and sqlite_stat4 tables using the shell ".dump" 
command instead?

Thanks,
Dan.



>
> sqlite_stat4 (before - good planning)
> HistoryEntry idx_HistoryEntry_uri_historyOrder 366 340 1 78 78 81 17
> 17 81 HistoryEntry idx_HistoryEntry_uri_historyOrder 722 489 1 566 566
> 661 38 96
> 661
> HistoryEntry idx_HistoryEntry_uri_historyOrder 25 25 1 1650 1650 1665
> 56
> 352 1665
> HistoryEntry idx_HistoryEntry_uri_historyOrder 862 747 1 2569 2569
> 2899 106
> 447 2899
> HistoryEntry idx_HistoryEntry_uri_historyOrder 862 1 1 2569 3331 3331
> 106
> 462 3331
> HistoryEntry idx_HistoryEntry_uri_historyOrder 2026 1789 1 3825 3825
> 4997 131
> 660 4997
> HistoryEntry idx_HistoryEntry_uri_historyOrder 143 113 1 6178 6178
> 6232 154
> 931 6232
> HistoryEntry idx_HistoryEntry_uri_historyOrder 181 180 1 6412 6412
> 6477 162
> 974 6477
> HistoryEntry idx_HistoryEntry_uri_historyOrder 2 2 1 6663 6663 6663
> 169 984
> 6663
> HistoryEntry idx_HistoryEntry_uri_historyOrder 1251 988 1 6953 6953
> 7488 186
> 1062 7488
> HistoryEntry idx_HistoryEntry_uri_historyOrder 379 370 1 8260 8260
> 8329 195
> 1361 8329
> HistoryEntry idx_HistoryEntry_uri_historyOrder 135 92 1 8915 8915 8965
> 218
> 1439 8965
> HistoryEntry idx_HistoryEntry_uri_historyOrder 196 194 1 9128 9128
> 9129 227
> 1501 9129
> HistoryEntry idx_HistoryEntry_uri_historyOrder 280 170 1 9622 9622
> 9650 237
> 1579 9650
> HistoryEntry idx_HistoryEntry_uri_historyOrder 1228 1045 1 9954 9954
> 9995 244
> 1709 9995
> HistoryEntry idx_HistoryEntry_uri_historyOrder 178 170 1 11192 11192
> 11322 246
> 1898 11322
> HistoryEntry idx_HistoryEntr

[sqlite] SQLite and BLOBs

2014-06-23 Thread Carlos Ferreira
Hello,

I found this reference in this link:

http://stackoverflow.com/questions/676524/how-to-update-piecewise-a-blob-in-sqlite


"
This is not directly an answer to your question, but I have some experience 
using random access for (big) blobs in SQLite, and I advise you against using 
it, if you can. Here's why:

Blobs break the SQL query format entirely. If your blob data needs any kind of 
processing, it will certainly at some point need filtering. Whatever mechanism 
you have in place to deal with filtering in SQL will be useless in this regard.

Dealing with binary blobs wrapped in databases opposed to binary data in raw 
files limits your options. You will not be able to randomly read and write to 
data at the same time from multiple processes, which is possible with files. 
You can't use any tool that deals with this data and provides only a file I/O 
interface. You can't truncate or resize the blob. Files are simply a lot more 
versatile.

It may seem convenient to have everything contained within one file, as it 
simplifies backup and transfer, but the pain of working with blobs is simply 
not worth it.

So as your attorney, I advise you to write your blobs as raw files to the file 
system, and merely store a reference to the filename in your database. If your 
blobs are rather small and guaranteed not to grow, forget my advice.
"

Do you agree with this information?

Regards.

Carlos



-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Dan Kennedy
Sent: Monday, 23 June 2014 15:26
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Planner chooses incorrect index

On 06/23/2014 05:48 AM, João Ramos wrote:
> Here you go:
>
> sqlite_stat1 (before - good planning)
> HistoryEntry idx_HistoryEntry_uri_historyOrder 14992 44 5 HistoryEntry 
> idx_HistoryEntry_historyOrder_historyTimestamp 14992 6 2 HistoryEntry 
> idx_HistoryEntry_sourceType_sourceId 14992 2999 2
>
> sqlite_stat1 (after - bad planning)
> HistoryEntry idx_HistoryEntry_uri_historyOrder 15492 45 5 HistoryEntry 
> idx_HistoryEntry_historyOrder_historyTimestamp 15492 5 2 HistoryEntry 
> idx_HistoryEntry_sourceType_sourceId 15492 3099 2

Unfortunately the last column of the sqlite_stat4 data is missing, likely 
because it contains embedded 0x00 bytes. And without the
sqlite_stat4 data, it seems that SQLite picks the "good" query plan in either 
case.

Can you dump the sqlite_stat1 and sqlite_stat4 tables using the shell ".dump" 
command instead?

Thanks,
Dan.



>
> sqlite_stat4 (before - good planning)
> HistoryEntry idx_HistoryEntry_uri_historyOrder 366 340 1 78 78 81 17 
> 17 81 HistoryEntry idx_HistoryEntry_uri_historyOrder 722 489 1 566 566 
> 661 38 96
> 661
> HistoryEntry idx_HistoryEntry_uri_historyOrder 25 25 1 1650 1650 1665 
> 56
> 352 1665
> HistoryEntry idx_HistoryEntry_uri_historyOrder 862 747 1 2569 2569 
> 2899 106
> 447 2899
> HistoryEntry idx_HistoryEntry_uri_historyOrder 862 1 1 2569 3331 3331 
> 106
> 462 3331
> HistoryEntry idx_HistoryEntry_uri_historyOrder 2026 1789 1 3825 3825 
> 4997 131
> 660 4997
> HistoryEntry idx_HistoryEntry_uri_historyOrder 143 113 1 6178 6178 
> 6232 154
> 931 6232
> HistoryEntry idx_HistoryEntry_uri_historyOrder 181 180 1 6412 6412 
> 6477 162
> 974 6477
> HistoryEntry idx_HistoryEntry_uri_historyOrder 2 2 1 6663 6663 6663 
> 169 984
> 6663
> HistoryEntry idx_HistoryEntry_uri_historyOrder 1251 988 1 6953 6953 
> 7488 186
> 1062 7488
> HistoryEntry idx_HistoryEntry_uri_historyOrder 379 370 1 8260 8260 
> 8329 195
> 1361 8329
> HistoryEntry idx_HistoryEntry_uri_historyOrder 135 92 1 8915 8915 8965 
> 218
> 1439 8965
> HistoryEntry idx_HistoryEntry_uri_historyOrder 196 194 1 9128 9128 
> 9129 227
> 1501 9129
> HistoryEntry idx_HistoryEntry_uri_historyOrder 280 170 1 9622 9622 
> 9650 237
> 1579 9650
> HistoryEntry idx_HistoryEntry_uri_historyOrder 1228 1045 1 9954 9954 
> 9995 244
> 1709 9995
> HistoryEntry idx_HistoryEntry_uri_historyOrder 178 170 1 11192 11192 
> 11322 246
> 1898 11322
> HistoryEntry idx_HistoryEntry_uri_historyOrder 312 299 1 11503 11503 
> 11661 252
> 1963 11661
> HistoryEntry idx_HistoryEntry_uri_historyOrder 546 543 1 11921 11921 
> 12037 258
> 1994 12037
> HistoryEntry idx_HistoryEntry_uri_historyOrder 169 113 1 12574 12574 
> 12586 261
> 2013 12586
> HistoryEntry idx_HistoryEntry_uri_historyOrder 216 115 1 12862 12862 
> 12973 265
> 2187 12973
> HistoryEntry idx_HistoryEntry_uri_historyOrder 388 379 1 13101 13101 
> 13327 270
> 2291 13327
> HistoryEntry idx_HistoryEntry_uri_historyOrder 366 351 1 13722 13722 
> 13973 295
> 2388 13973
> HistoryEntry idx_HistoryEntry_uri_historyOrder 200 1 1 14482 14530 
> 14530 331
> 2749 14530
> HistoryEntry idx_HistoryEntry_uri_historyOrder 233 2 1 14684 14848 
> 14849 333
> 3067 14849
> HistoryEntry idx_HistoryEntry_historyOrder_historyTimestamp 12006 1 1 
> 0
> 1665 1665 0 1663 1665
> HistoryEntry idx_HistoryEntry_historyOrder_historyTimestamp 12006 1 1 

Re: [sqlite] slowish R*Tree performance

2014-06-16 Thread Carlos Ferreira
Hi,

I am just a user of the SQLite binary, so I do not know how much custom
SQLite trickery can be done with a copy of the SQLite source code.

However, if I understood well, the problem is like this:

1 - There is a data type named IPv6 address.
2 - There is a table where this data type must be stored (as a set of
fields, one blob, one string...).

You want to:

Given a certain IPv6 address, find in the database the existing IPv6 record
with the longest prefix in common with the given IPv6 value.

Again, if the problem is as I understood it, the simplest solution is (again,
I am discussing it as if it could be implemented or available in SQLite... I
do not know...):

1 - Encode the IPv6 field as a single blob.
2 - Implement an index on this field so that this particular field can be
ordered.
3 - Make sure that the ordering compare function is a binary incremental
compare counting from the left (in terms of the data... not sure how you
will represent it in the field).
4 - Each time you want to find the closest and longest prefix, you just need
to simulate an insert command: try to insert the given value, but instead of
inserting, just return the ordered position where it would be inserted. Check
which record actually stands at that ordered position... that would be your
result!

This can also be done outside SQLite... but I am not sure my understanding of
the problem is correct.
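The "simulated insert" in step 4 amounts to looking up the would-be neighbours of the query key in the ordered index. A minimal sketch of that idea using Python's stdlib sqlite3 module (the thread itself concerns the C API; table name, sample keys, and helper functions here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# BLOB primary key: SQLite orders BLOBs by memcmp, i.e. byte-wise from the left.
conn.execute("CREATE TABLE prefixes (key BLOB PRIMARY KEY)")
conn.executemany("INSERT INTO prefixes VALUES (?)",
                 [(bytes.fromhex(h),) for h in ("20010db8", "20010dc0", "c0a80000")])

def common_prefix_bits(a, b):
    """Number of leading bits two byte strings share."""
    n = 0
    for x, y in zip(a, b):
        d = x ^ y
        if d == 0:
            n += 8
        else:
            n += 8 - d.bit_length()
            break
    return n

def longest_match(query):
    """'Simulated insert': inspect the keys that would sit on either side."""
    below = conn.execute("SELECT key FROM prefixes WHERE key <= ? "
                         "ORDER BY key DESC LIMIT 1", (query,)).fetchone()
    above = conn.execute("SELECT key FROM prefixes WHERE key > ? "
                         "ORDER BY key ASC LIMIT 1", (query,)).fetchone()
    cands = [r[0] for r in (below, above) if r]
    return max(cands, key=lambda k: common_prefix_bits(k, query), default=None)

print(longest_match(bytes.fromhex("20010dbf")).hex())  # 20010db8
```

Note this finds the stored key sharing the longest common prefix with the query; true longest-covering-prefix matching over variable-length prefixes needs extra care, as the rest of this thread shows.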

Regards,

Carlos

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Eric Rubin-Smith
Sent: Sunday, 15 June 2014 05:25
To: General Discussion of SQLite Database
Subject: [sqlite] slowish R*Tree performance

I am exploring a mechanism for finding the longest covering IPv6 prefix for
a given IPv6 address by leveraging SQLite 3.8.5's R*Tree feature.  I'm
getting pretty bad performance and would like to know if I'm doing something
obviously wrong.

A 1-dimensional R*Tree of integers of width 128 bits would be ideal.  But
SQLite R*Trees are only 32 bits wide per dimension.

So I am modeling IPv6 prefixes as a pair of coordinates in 5-dimensional
integer space:

CREATE VIRTUAL TABLE ipIndex USING rtree_i32(
id, -- Integer primary key
minD1, maxD1,
minD2, maxD2,
minD3, maxD3,
minD4, maxD4,
minD5, maxD5
);

CREATE TABLE routeTarget(
id INTEGER PRIMARY KEY,
prefix TEXT NOT NULL,
target TEXT NOT NULL
);

To do this, I have come up with a mapping from an IPv6 prefix to a pair of
coordinates that has the geometric property that we would like: bounding box
1 contains bounding box 2 if and only if prefix 1 contains prefix 2.
So if a search for bounding boxes containing a particular address's
coordinate returns rows, then each of those rows corresponds to a covering
prefix -- from there, we must simply find the smallest bounding box to find
the longest-matching prefix.

Functionally, this appears to work like a charm.

Performance, unfortunately, leaves much to be desired.  I have seeded this
database with 100k randomly-generated prefixes, and am only able to push
through 250 searches per second.  I am hoping to increase the database size
to ~50m records and would like to hit 50k searches per second.

This is not an unreasonable request based on my hardware, and SQLite has
easily hit this throughput (at least, this order of magnitude in
throughput) in other applications.  For example, the analogue in IPv4 merely
requires a 1-dimensional R*Tree, and with that I was able to hit 10kTPS
throughput without breaking much of a sweat.

Here is my search query plan:

sqlite> explain query plan SELECT prefix, target FROM routeTarget WHERE id = (
   ...>SELECT id FROM ipIndex
   ...> WHERE minD1 <= 1220818432 and 1220818432 <= maxD1
   ...>   AND minD2 <= 2120561472 and 2120561472 <= maxD2
   ...>   AND minD3 <= 1685398080 and 1685398080 <= maxD3
   ...>   AND minD4 <= 1685755328 and 1685755328 <= maxD4
   ...>   AND minD5 <= 538331072 and 538331072 <= maxD5
   ...> ORDER BY ((maxD5-minD5)*(maxD4-minD4)*(maxD3-minD3)*
   ...>   (maxD2-minD2)*(maxD1-minD1)) ASC
   ...>LIMIT 1);
0|0|0|SEARCH TABLE routeTarget USING INTEGER PRIMARY KEY (rowid=?) 
0|0|0|EXECUTE SCALAR SUBQUERY 1
1|0|0|SCAN TABLE ipIndex VIRTUAL TABLE INDEX 2:B0D1B2D3B4D5B6D7B8D9 USE 
1|0|0|TEMP B-TREE FOR ORDER BY

I created a profiled SQLite build, and here are the top offenders for a run
on 1000 searches:

Each sample counts as 0.01 seconds.
  %   cumulative   self  self total
 time   seconds   secondscalls  ms/call  ms/call  name
 40.00  1.58 1.58   300179 0.01 0.01  sqlite3VdbeExec
  6.33  1.83 0.25 36628944 0.00 0.00  sqlite3VdbeMemMove
  6.08  2.07 0.24 18314472 0.00 0.00  rtreeColumn
  4.05  2.23 0.16  1665952 0.00 0.00  rtreeStepToLeaf
  2.41  2.33 0.10 55549722 0.00 0.00  sqlite3VdbeMemRelease
  2.28  2.42 0.09  1965104 0.00  

Re: [sqlite] slowish R*Tree performance

2014-06-16 Thread Carlos Ferreira
Regarding the R.Tree performance problem,

What is the original problem that is causing slow performance in the SQlite
R-Tree implementation?

Thanks

Carlos 

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Simon Slavin
Sent: Sunday, 15 June 2014 23:27
To: General Discussion of SQLite Database
Subject: Re: [sqlite] slowish R*Tree performance


> On 15 Jun 2014, at 5:21pm, Eric Rubin-Smith  wrote:
> 
> still not good enough for my use case
> (unfortunately).  Any further optimization tips are highly welcome.

Strongly suspect that although R*Trees produce an elegant solution to your
problem, the fact that they're a general case tool will make them too slow
to use for something like this.

I propose an alternative solution, though I have not tried it and do not
have time to try it (sorry).

1) A function which turns a numeric IP address or a block into some standard
easy-to-process representation in string form.  Possibly a pair of strings
with the first string being an address the second being something indicating
the extent of the block, perhaps something like
'2001:0db8:8500:::::v::ff00:::::
'.  You could make it shorter by leaving out the colons but my
experience is that although this leads to less data stored on disk it
doesn't speed up processing by much.  But if you have a great deal of data
you might want to do it anyway.

2) A comparator function (maybe a SQLite function, maybe not) which takes
two such addresses or blocks and returns a value indicating whether they're
identical, whether block A contains block or address B, or neither.

The closest I got to the above was when I needed a program which intensively
searched and sorted individual IPv4 addresses.  I got best results by
defining a SQLite function which converted IP addresses of all formats into
'standard format' where each byte was two hex digits.  All values stored in
the database were stored in my 'standard' format.  This allowed easy
collation using standard text sorting.  Everything else turned out faster to
implement in my own programming language than it was to build as SQLite
functions.
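The "standard format" trick Simon describes, every byte rendered as two hex digits so that plain text collation matches numeric order, can be sketched like this (hedged illustration; the function name is invented):

```python
import ipaddress

def canon(ip: str) -> str:
    """Render an IPv4 address as fixed-width hex: text sort order == numeric order."""
    return format(int(ipaddress.IPv4Address(ip)), "08x")

addrs = ["10.0.0.1", "192.168.1.9", "9.255.255.255"]
print(sorted(addrs))             # plain text sort: '9.255.255.255' lands last
print(sorted(addrs, key=canon))  # numeric order: '9.255.255.255' comes first
```

Storing canon(ip) in a TEXT column makes BETWEEN range queries and ORDER BY behave numerically with no custom collation.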

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] 64 bits Dll

2014-05-09 Thread Carlos Ferreira
Constantine, really useful information.

Do you know if the compiled .obj files that may be available in Delphi are
comparable in performance with the ones produced by MS Visual Studio?

I know there is a tool, IMPLIB, to import DLLs or LIBs that may come from
Microsoft and create LIBs for use in Delphi... Not sure how they did it at
Embarcadero...

Thanks

Carlos


-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Constantine Yannakopoulos
Sent: Friday, 9 May 2014 15:36
To: General Discussion of SQLite Database
Subject: Re: [sqlite] 64 bits Dll

On Fri, May 9, 2014 at 5:20 PM, Ralf Junker  wrote:

> FireDAC only, and outdated by 2 months at the day of release. Delphi 
> XE5 SQLite is still at 3.7.17, almost one year behind.
>

In Delphi XE5, FireDAC either statically links the object files
sqlite3_x86.obj / sqlite3_x64.obj or loads sqlite3.dll and links to its
functions via GetProcAddress(), depending on the compiler switch
FireDAC_SQLITE_STATIC. So it should be possible to recompile the units
FireDAC.Phys.SQLiteXXX.pas and either link in any later version of the
sqlite3 object files or have it load the latest DLL, provided that you have
$(BDS)\source\data\firedac in your search path.

--Constantine
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] 64 bits Dll

2014-05-09 Thread Carlos Ferreira
Ralf,

Thanks for the insigth.

Regards.

Carlos

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Ralf Junker
Sent: Friday, 9 May 2014 15:20
To: General Discussion of SQLite Database
Subject: Re: [sqlite] 64 bits Dll

On 09.05.2014 13:50, Carlos Ferreira wrote:

> XE6 itself seems to natively have access to both...

FireDAC only, and outdated by 2 months at the day of release. Delphi XE5
SQLite is still at 3.7.17, almost one year behind.

DISQLite3 has always been up to date for years and Delphi versions back to
Delphi 4:

   http://yunqa.de/delphi/doku.php/products/sqlite3/index

Ralf
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] 64 bits Dll

2014-05-09 Thread Carlos Ferreira
Thanks for the reply.


I am doing very simple raw table inserts of big blobs, and also incremental
blob reading...

I am barely using the SQLite query system itself in this particular
performance bottleneck.

Thanks to a friend from this community, Constantine, I was able to get the
latest 32- and 64-bit builds, and they work perfectly.

Regards.

Carlos

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of RSmith
Sent: Friday, 9 May 2014 14:57
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] 64 bits Dll


On 2014/05/09 12:36, Carlos Ferreira wrote:
> I am using a win32 DLL built "sqlite-dll-win32-x86-3071700" . the DLL 
> is from 20-5-2013.
>
>
> I am using it with Delphi, and it works fine for what I need.
>
>
> I need however a 64-bit version of the DLL. I actually have the 64-bit
> 3.7.16 version that I downloaded from the SYNOPSE project from Arnaud 
> Bouchez
> (http://blog.synopse.info/post/2013/03/23/Latest-version-of-sqlite3.dl
> l-for- Windows-64-bit ). It is a great work and a very nice library.
>
>   
>
> However in my particular project, I write and load blobs, mainly, and 
> this version is twice as slow as the 3.7.17 I am using in 32 bits.

As others have already mentioned where to obtain the 64-bit DLLs etc., let me
just add that almost every later SQLite is faster than previous versions
(usually at the cost of using a little extra memory), so if you are
experiencing a regression in performance, it is very likely that the actual
DB or query is not optimised - I would start looking there, not at the
"version".

There have been quite a few cases where an un-analyzed 3.7 query executed a
bit faster than a 3.8 query simply because it chose the better execution
plan by sheer luck... the newest version is better at that, but it might help
to run the "ANALYZE" SQL command on your data table or to rethink your
indices.

Put the schema and query here; we might be able to spot a possible
optimization for you.
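Running ANALYZE as suggested populates sqlite_stat1, which the planner consults when choosing an index. A minimal illustration using Python's stdlib sqlite3 (table and index names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
conn.execute("CREATE INDEX idx_t_a ON t(a)")
# 1000 rows, 10 distinct values of 'a' (100 rows each).
conn.executemany("INSERT INTO t VALUES (?, ?)", [(i % 10, i) for i in range(1000)])
conn.commit()

# ANALYZE gathers index statistics into sqlite_stat1; without it the
# planner falls back on built-in guesses about index selectivity.
conn.execute("ANALYZE")
for row in conn.execute("SELECT tbl, idx, stat FROM sqlite_stat1"):
    print(row)  # e.g. ('t', 'idx_t_a', '1000 100')
```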


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] 64 bits Dll

2014-05-09 Thread Carlos Ferreira
Ralf,

Thanks.

XE6 itself seems to natively have access to both... I have to test this.



-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Ralf Junker
Sent: Friday, 9 May 2014 12:45
To: General Discussion of SQLite Database
Subject: Re: [sqlite] 64 bits Dll

On 09.05.2014 12:36, Carlos Ferreira wrote:

> I am using a win32 DLL built "sqlite-dll-win32-x86-3071700" . the DLL 
> is from 20-5-2013.
>
> I am using it with Delphi, and it works fine for what I need.
>
> I need however a 64 bits version of the DLL.

SQLite3 for Delphi, both Win32 and Win64, with many extensions and extras
here:

   http://yunqa.de/delphi/doku.php/products/sqlite3/index

Enjoy,

Ralf
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] 64 bits Dll

2014-05-09 Thread Carlos Ferreira
Constantine,

Thank you for your feedback.

If you could make the Win32 and Win64 DLLs available, I would really
appreciate it.

If for some reason it does not work as I expect, I can probably try the
suggestion you made.

My only concern is that I need the compiled DLLs, and SQLite itself, built
to be as fast and optimized as possible.

Tell me how you want to do this DLL exchange.

Thanks

Carlos


-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org] 
On Behalf Of Constantine Yannakopoulos
Sent: sexta-feira, 9 de Maio de 2014 11:57
To: General Discussion of SQLite Database
Subject: Re: [sqlite] 64 bits Dll

On Fri, May 9, 2014 at 1:36 PM, Carlos Ferreira <car...@csiberkeley.com> wrote:

> Can anyone tell me where can I find the 64 bits version of Sqlite.dll, 
> or can anyone compile it for me? It would work as a small consulting project.
>

I can send you DLLs compiled from the latest amalgamation (3.8.4.3). If you
need an older version, I have a Visual Studio 2010 project that I use to
compile the amalgamation for Windows 32- and 64-bit. If you have VS2010 I
would be happy to send it to you. All you will need to do is replace the
amalgamation source files (.c and .h) and the definition file (.def) with
those of the previous version.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



[sqlite] 64 bits Dll

2014-05-09 Thread Carlos Ferreira
Hello,

I am using the Win32 DLL build "sqlite-dll-win32-x86-3071700"; the DLL is
from 20-5-2013.

I am using it with Delphi, and it works fine for what I need.

I need, however, a 64-bit version of the DLL. I actually have the 64-bit
3.7.16 version that I downloaded from the SYNOPSE project by Arnaud Bouchez
(http://blog.synopse.info/post/2013/03/23/Latest-version-of-sqlite3.dll-for-Windows-64-bit).
It is great work and a very nice library.

However, in my particular project I mainly write and load blobs, and this
version is twice as slow as the 3.7.17 I am using in 32 bits.

Can anyone tell me where I can find a 64-bit version of sqlite3.dll, or can
anyone compile it for me? It could work as a small consulting project.

I would prefer the DLL not to have the same name as the normal Win32 DLL,
for instance sqlite3-64.dll.

Regards and thanks to all.

Carlos


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Efficiency : 1 table vs several tables

2013-11-29 Thread Carlos Ferreira
Hi all,

Thank you all for the help on adding tables.

Creating the tables inside a single transaction did help a lot. Speed is
now OK.

I received a suggestion that is quite valid in my case:

My tables all have the same number of columns (one column only, of BLOBs),
and Simon Slavin suggested I could use a single big table with an extra
column to identify the table name.

This seems quite a nice approach, because it does seem to be more efficient
in terms of disk usage.

However, my question is the following:

Let's assume that TABLE now designates my real tables, which can be either
SQLite tables or sub-groups of records inside one big real SQLite table.

I have to load to memory and save to the DB groups of these TABLEs at the
same time (by saving I mean updating or inserting the blobs inside each
table).

Which is faster?

Accessing a table in SQLite and updating, deleting, or adding new records,

or

querying the records of one table such that the selected records have a
field = table name, and then adding and updating those records?

(By updating I mean using the direct functions for incremental BLOB access.)
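Whichever layout wins, note that the incremental BLOB functions cannot change a blob's size; a blob that grows or shrinks has to be rewritten through a bound parameter (sqlite3_bind_blob in the C API). Here is a minimal sketch of the one-big-table layout, using Python's built-in sqlite3 module with made-up table and column names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# One big table instead of many: an extra 'tname' column identifies
# the logical table each blob row belongs to (names are hypothetical).
con.execute("CREATE TABLE blobs(tname TEXT, id INTEGER, data BLOB, "
            "PRIMARY KEY(tname, id))")
con.execute("INSERT INTO blobs VALUES (?, ?, ?)", ("tableA", 1, b"old"))

# Replacing the whole blob via a bound parameter is the way to change
# its size; incremental BLOB I/O (sqlite3_blob_write in the C API)
# can only overwrite bytes in place, not grow or shrink the value.
new_data = b"a much longer payload"
con.execute("UPDATE blobs SET data = ? WHERE tname = ? AND id = ?",
            (new_data, "tableA", 1))
con.commit()

stored = con.execute(
    "SELECT data FROM blobs WHERE tname = 'tableA' AND id = 1"
).fetchone()[0]
print(len(stored))  # 21
```

With the composite (tname, id) primary key, selecting one logical table's rows is an index search, not a scan of the whole big table.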

 

 

Thank you all

Carlos

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Big number of tables

2013-11-29 Thread Carlos Ferreira
Thank you guys.

I am going to try it.

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Stephan Beal
Sent: sexta-feira, 29 de Novembro de 2013 14:17
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Big number of tables

On Fri, Nov 29, 2013 at 2:37 PM, Igor Tandetnik <i...@tandetnik.org> wrote:

> On 11/29/2013 8:33 AM, Carlos Ferreira wrote:
>
>> Any of you know how to speed up the creation of empty tables in SQLite?
>>
>> If I have to create more than 1000 empty tables to initialize my 
>> application document it takes a while..
>>
>
> Make sure you run all CREATE TABLE statements within a single transaction.
> My guess is you don't, and then most of the time is spent in committing an
> implicit transaction after every statement.


Here's a simple test which shows that in action:

[stephan@host:~/tmp]$ i=0; while [ $i -lt 1000 ]; do echo "create table t$i
(a,b,c);"; i=$((i + 1)); done > foo.sql
[stephan@host:~/tmp]$ wc -l foo.sql
1000 foo.sql
[stephan@host:~/tmp]$ echo 'begin;' > bar.sql
[stephan@host:~/tmp]$ cat foo.sql >> bar.sql
[stephan@host:~/tmp]$ echo 'commit;' >> bar.sql
[stephan@host:~/tmp]$ time sqlite3 x.db < foo.sql

real 2m25.208s
user 0m0.380s
sys 0m0.468s
[stephan@host:~/tmp]$ rm x.db
[stephan@host:~/tmp]$ time sqlite3 x.db < bar.sql

real 0m0.344s
user 0m0.148s
sys 0m0.000s


BIG difference.
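For completeness, the same effect can be reproduced from code rather than the shell; a minimal sketch using Python's built-in sqlite3 module (illustrative only — the demo above uses the CLI), with all the CREATE TABLE statements wrapped in one explicit transaction:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# One BEGIN/COMMIT around all of the DDL: a single journal commit
# instead of one implicit transaction per CREATE TABLE statement.
ddl = "".join(f"CREATE TABLE t{i}(a, b, c);" for i in range(1000))
con.executescript("BEGIN;" + ddl + "COMMIT;")

n = con.execute(
    "SELECT count(*) FROM sqlite_master WHERE type = 'table'"
).fetchone()[0]
print(n)  # 1000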

-- 
- stephan beal
http://wanderinghorse.net/home/stephan/
http://gplus.to/sgbeal
"Since tyranny's the only guaranteed byproduct of those who insist on a
perfect world, freedom will have to do." -- Bigby Wolf
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



[sqlite] Big number of tables

2013-11-29 Thread Carlos Ferreira
 

Hello,

Do any of you know how to speed up the creation of empty tables in SQLite?

If I have to create more than 1000 empty tables to initialize my
application document, it takes a while.

Is there any workaround?

Thanks

Carlos


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users