Re: [sqlite] Slow SELECT Statements in Large Database file

2010-10-29 Thread Jonathan Haws
I was actually just reading through that.  I may switch over to that kind of 
implementation.

Thanks for the tip.

--
Jonathan R. Haws
Electrical Engineer
Space Dynamics Laboratory
(435) 797-4629
jh...@sdl.usu.edu



From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Roger Binns [rog...@rogerbinns.com]
Sent: Friday, October 29, 2010 12:09 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Slow SELECT Statements in Large Database file

On 10/29/2010 10:48 AM, Jonathan Haws wrote:
> We have a whole ton of points (3600^2) and a single select returns a single 
> point - though I may modify the select to return the four corners of the box 
> corresponding to the point that was entered.

Are you aware that SQLite has an RTree extension (written by one of the
SQLite developers) that seems to substantially overlap with what you are doing?

   http://www.sqlite.org/rtree.html

Roger
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow SELECT Statements in Large Database file

2010-10-29 Thread Roger Binns
On 10/29/2010 10:48 AM, Jonathan Haws wrote:
> We have a whole ton of points (3600^2) and a single select returns a single 
> point - though I may modify the select to return the four corners of the box 
> corresponding to the point that was entered.

Are you aware that SQLite has an RTree extension (written by one of the
SQLite developers) that seems to substantially overlap with what you are doing?

   http://www.sqlite.org/rtree.html
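For what it's worth, a minimal sketch of the extension from Python's built-in sqlite3 module (this assumes an SQLite build compiled with the R*Tree module, which most bundled builds include; the table name is hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# R*Tree is a compile-time option; this assumes the build includes it.
con.execute("""CREATE VIRTUAL TABLE dted_rt USING rtree(
    id, min_lat, max_lat, min_lon, max_lon)""")
# Store each point as a degenerate box (min == max).
con.execute("INSERT INTO dted_rt VALUES (1, 37.0, 37.0, -111.0, -111.0)")
# A bounding-box query walks the tree instead of scanning every row.
hits = con.execute("""SELECT id FROM dted_rt
                      WHERE max_lat >= 36.5 AND min_lat <= 37.5
                        AND max_lon >= -111.5 AND min_lon <= -110.5""").fetchall()
```

The "four corners of the box" query mentioned above maps naturally onto this kind of bounding-box lookup.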

Roger
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow SELECT Statements in Large Database file

2010-10-29 Thread Jonathan Haws
I agree, and for that reason we are keeping the DB on an SSD.  Initial 
benchmarks show that we should be able to get the performance we need - I am 
just not getting it with my new database.

I am sure I have something set up wrong, but maybe it is just the fact that the 
indexing has not taken place yet.  Once that has finished I will report back 
and see what kind of performance I get.

Thanks,

--
Jonathan R. Haws
Electrical Engineer
Space Dynamics Laboratory
(435) 797-4629
jh...@sdl.usu.edu



From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Jim Wilcoxson [pri...@gmail.com]
Sent: Friday, October 29, 2010 10:29 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Slow SELECT Statements in Large Database file

Jonathan - 500 queries per second is 2ms per query.  You'll have a hard time
getting that kind of speed for random queries with any rotating media.  Your
database needs to be in memory - all of it, not just indexes - or on a flash
drive.

If your queries are not random but are somehow related, eg, you are doing
thousands of queries within a small area, and the db records were also
inserted by area, you may have better luck.

Jim
--
HashBackup: easy onsite and offsite Unix backup
http://sites.google.com/site/hashbackup


On Fri, Oct 29, 2010 at 12:07 PM, Jonathan Haws
wrote:

> All,
>
> I am having some problems with a new database that I am trying to set up.
>
> This database is a large file (about 8.7 GB without indexing).  The problem
> I am having is that SELECT statements are extremely slow.  The goal is to
> get the database file up and running for an embedded application (we have
> tons of storage space so the size is not a problem).
>
> Here is the schema layout:
>
> CREATE table Dted (dtedkey INTEGER PRIMARY KEY, dted_lat
> INTEGER, dted_lon INTEGER, dted_alt FLOAT);
>
> We look up dted_alt based on dted_lat and dted_lon.  Here is our SELECT
> statement:
>
> SELECT dted_alt FROM Dted WHERE (dted_lat=%d AND dted_lon=%d);
>
> The numbers fed to dted_lat and dted_lon are typically on the order of
> 37 and -111.
>
> What can we do to speed up our SELECT statements?  Minutes is
> unacceptable for our application.  We were hoping we could run somewhere
> on the order of 500 queries per second and get valid results back.
>
> I am not an SQL expert, but I was reading about indexes and that it is
> best to have a specific index per SELECT.  Since we only have one,
> this is the index I am creating now (it has been creating this index on my
> machine for the past 10 minutes now):
>
> CREATE INDEX idx_ll ON Dted(dted_lat,dted_lon);
>
> Is that a good index for my SELECT?  Will it speed up the accesses?
>
> Any thoughts?
>
>
> Thanks!
> --
> Jonathan
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sqlite with VS 2008

2010-10-29 Thread Teg
Linda,

I'm using the individual files to compile a static lib. Just create a
new project as a static lib. Unzip the source files into the project
folder and then add all the .h and .c files to the project. You'll have
to delete some, but you can figure that out when you compile.

SQLITE_CORE;
SQLITE_OMIT_AUTHORIZATION;
SQLITE_OMIT_AUTOVACUUM;
SQLITE_OMIT_BLOB_LITERAL;
SQLITE_OMIT_DATETIME_FUNCS;
SQLITE_OMIT_MEMORYDB;
SQLITE_OMIT_TRACE;
SQLITE_OMIT_CURSOR

These are defines I use when I compile.

I then include the lib project in my main project, set dependencies
so it automatically links in the lib, and manually include the new
project folder in my header search list.

"Add Existing Project" in the "Add" right click of the solution
explorer.

The amalgamation works, but you won't be able to single-step into and
debug the lib if you use it. It's a limitation of the compiler.



C



Friday, October 29, 2010, 1:28:27 PM, you wrote:

SAM> I would suggest using the pre-compiled binary sqlite3.dll and just call
SAM> into the DLL.  You can easily create an import library to link with in
SAM> your application by using the LIB /DEF command:

SAM> lib /def:sqlite3.def

SAM> That will create sqlite3.lib and sqlite3.exp.  You can link against the
SAM> sqlite3.lib file and it will use the sqlite3.dll.

SAM> -Scott

SAM> sqlite-users-boun...@sqlite.org wrote on 10/29/2010 09:42:09 AM:

>> > I am trying to use sqlite in the debea database library.
>> > 
>> > http://debea.net/trac/wiki/CompileSvn
>> > 
>> > It simply says compile sqlite3.
>> > 
>> > I am however a VS2008 user.  I cannot find a way to compile it so I can
>> > reference the include and library directories.  Can you help?
>> > 
>> > Linda Rawson
>> 
>> Linda,
>> 
>> The only way that I know of to compile SQLite in VS 2008 is to use 
>> the amalgamation download.  You can find it here:
>> http://www.sqlite.org/download.html.  This will be only a single .c file.
>> 
>> I'm not sure this will meet your needs if you need to reference the 
>> include folder.  You may need to explore building the TEA tarball. 
>> I've never tried it in Windows, but it looks like there is guidance 
>> on the TEA website http://www.tcl.tk/doc/tea/.
>> 
>> Has anyone tried this in Windows?
>> 
>> RW
>> 
>> Ron Wilson, Engineering Project Lead
>> (o) 434.455.6453, (m) 434.851.1612, www.harris.com
>> 
>> HARRIS CORPORATION   |   RF Communications Division 
>> assuredcommunications(tm)
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

SAM> ___
SAM> sqlite-users mailing list
SAM> sqlite-users@sqlite.org
SAM> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



-- 
Best regards,
 Teg    mailto:t...@djii.com

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow SELECT Statements in Large Database file

2010-10-29 Thread Jonathan Haws
We have a whole ton of points (3600^2) and a single select returns a single 
point - though I may modify the select to return the four corners of the box 
corresponding to the point that was entered.

We had a hash table implementation that did not work very well.  The problem 
with it was that we could only keep a single DTED square in our limited amount 
of memory (this is for an embedded application).  When we had to switch 
squares, the whole process would grind to a halt while it brought in the new 
square to memory and evicted the other one.  From our initial benchmarks with 
SQLite, this problem can be avoided - even though we do have the overhead of a 
huge DB file.

When I create the database, I do make sure that everything is inserted in 
dted_lat order for that exact reason.

Thanks for the help.

--
Jonathan R. Haws
Electrical Engineer
Space Dynamics Laboratory
(435) 797-4629
jh...@sdl.usu.edu



From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Black, Michael (IS) [michael.bla...@ngc.com]
Sent: Friday, October 29, 2010 10:24 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Slow SELECT Statements in Large Database file

You should see a noticeable increase in speed with the index you show.

You'll want to make sure your data has been inserted in dted_lat order or 
dted_lon order.
That way all the matching records will be colocated on disk and in memory.   
Otherwise it's going to do a lot of paging to get your records.

#1 How many points do you have?
#2 How many points are returned from a select?
#3 Have you considered just building a hash table to do this and skip the 
database?  If you're only doing this one select you don't need the overhead of 
a database.

Michael D. Black
Senior Scientist
Advanced Analytics Directorate
Northrop Grumman Information Systems




From: sqlite-users-boun...@sqlite.org on behalf of Jonathan Haws
Sent: Fri 10/29/2010 11:07 AM
To: sqlite-users@sqlite.org
Subject: EXTERNAL:[sqlite] Slow SELECT Statements in Large Database file



All,

I am having some problems with a new database that I am trying to set up.

This database is a large file (about 8.7 GB without indexing).  The problem
I am having is that SELECT statements are extremely slow.  The goal is to
get the database file up and running for an embedded application (we have
tons of storage space so the size is not a problem).

Here is the schema layout:

CREATE table Dted (dtedkey INTEGER PRIMARY KEY, dted_lat
INTEGER, dted_lon INTEGER, dted_alt FLOAT);

We look up dted_alt based on dted_lat and dted_lon.  Here is our SELECT
statement:

SELECT dted_alt FROM Dted WHERE (dted_lat=%d AND dted_lon=%d);

The numbers fed to dted_lat and dted_lon are typically on the order of
37 and -111.

What can we do to speed up our SELECT statements?  Minutes is
unacceptable for our application.  We were hoping we could run somewhere
on the order of 500 queries per second and get valid results back.

I am not an SQL expert, but I was reading about indexes and that it is
best to have a specific index per SELECT.  Since we only have one,
this is the index I am creating now (it has been creating this index on my
machine for the past 10 minutes now):

CREATE INDEX idx_ll ON Dted(dted_lat,dted_lon);

Is that a good index for my SELECT?  Will it speed up the accesses?

Any thoughts?


Thanks!
--
Jonathan
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users




Re: [sqlite] Slow SELECT Statements in Large Database file

2010-10-29 Thread Jonathan Haws
Yeah, I am letting it run and will check to see how things work once it is 
finished.  I expect things to speed up quite a bit.

--
Jonathan R. Haws
Electrical Engineer
Space Dynamics Laboratory
(435) 797-4629
jh...@sdl.usu.edu



From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Simon Slavin [slav...@bigfraud.org]
Sent: Friday, October 29, 2010 10:14 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Slow SELECT Statements in Large Database file

On 29 Oct 2010, at 5:07pm, Jonathan Haws wrote:

> SELECT dted_alt FROM Dted WHERE (dted_lat=%d AND dted_lon=%d);
>
> The numbers fed to dted_lat and dted_lon are typically on the order of
> 37 and -111.
>
> What can we do to speed up our SELECT statements?  Minutes is
> unacceptable for our application.  We were hoping we could run somewhere
> on the order of 500 queries per second and get valid results back.
>
> I am not an SQL expert, but I was reading about indexes and that it is
> best to have a specific index per SELECT.  Since we only have one,
> this is the index I am creating now (it has been creating this index on my
> machine for the past 10 minutes now):
>
> CREATE INDEX idx_ll ON Dted(dted_lat,dted_lon);

Yes, that's a good index for that particular SELECT.  It will make that SELECT 
return results in tiny fractions of a second.  And with an 8.7 Gig database it 
will take some time to create the index: perhaps even hours, depending on your 
platform.

Don't worry about it: you seem to be doing the right thing.  Just let it run.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sqlite with VS 2008

2010-10-29 Thread Scott A Mintz
I would suggest using the pre-compiled binary sqlite3.dll and just call 
into the DLL.  You can easily create an import library to link with in 
your application by using the LIB /DEF command:

lib /def:sqlite3.def

That will create sqlite3.lib and sqlite3.exp.  You can link against the 
sqlite3.lib file and it will use the sqlite3.dll.

-Scott

sqlite-users-boun...@sqlite.org wrote on 10/29/2010 09:42:09 AM:

> > I am trying to use sqlite in the debea database library.
> > 
> > http://debea.net/trac/wiki/CompileSvn
> > 
> > It simply says compile sqlite3.
> > 
> > I am however a VS2008 user.  I cannot find a way to compile it so I 
can
> > reference the include and library directories.  Can you help?
> > 
> > Linda Rawson
> 
> Linda,
> 
> The only way that I know of to compile SQLite in VS 2008 is to use 
> the amalgamation download.  You can find it here:
> http://www.sqlite.org/download.html.  This will be only a single .c file.
> 
> I'm not sure this will meet your needs if you need to reference the 
> include folder.  You may need to explore building the TEA tarball. 
> I've never tried it in Windows, but it looks like there is guidance 
> on the TEA website http://www.tcl.tk/doc/tea/.
> 
> Has anyone tried this in Windows?
> 
> RW
> 
> Ron Wilson, Engineering Project Lead
> (o) 434.455.6453, (m) 434.851.1612, www.harris.com
> 
> HARRIS CORPORATION   |   RF Communications Division 
> assuredcommunications(tm)
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Multiple tables with the same schema and prepare query performance

2010-10-29 Thread john Papier
No, I haven't measured it yet; I was only in the process of designing the
database layout. Given that my queries are very simple, it may be fine to do
the prepare_query every time.

I will do some perf testing and reply back.

On Fri, Oct 29, 2010 at 9:40 AM, Pavel Ivanov  wrote:

> > order of 10 to 100 of these tables. When doing operations on these
> tables, I
> > want to avoid having to do a prepare_query every time for performance
> > reasons.
>
> Did you measure your performance and find that prepare_query is a
> bottleneck?
>
> > Since the tables have exactly the same schema, in theory I should
> > be able to use the same prepared statement on any one of those tables.
> Any
> > ideas on if this is possible?
>
> No, it's not possible, because prepared query contains information
> about the tables used by the query.
>
>
> Pavel
>
> On Fri, Oct 29, 2010 at 11:52 AM, john Papier 
> wrote:
> > Hi,
> >
> > I need to create multiple tables all having the same schema. The
> > number/names of the tables will be dynamic. There would be somewhere in
> the
> > order of 10 to 100 of these tables. When doing operations on these
> tables, I
> > want to avoid having to do a prepare_query every time for performance
> > reasons. Since the tables have exactly the same schema, in theory I
> should
> > be able to use the same prepared statement on any one of those tables.
> Any
> > ideas on if this is possible?
> >
> > The other options I'm looking at are:
> > 1. dynamically caching prepared queries
> > 2. use only one table with an extra index. With only 10-100 tables merged
> > into one, I figure the extra index would take an extra log100 -> ~10
> lookups
> > in the binary search. The thing is, I need to keep a cursor to where in
> the
> > table I was last searching, so I can continue the search from where I
> left
> > off, which is why using multiple tables was preferable; i.e., I can track
> > the row_id, and then resume the search there (WHERE row_id >
> cursor_row_id).
> >
> > Any ideas?
> >
> > Thanks,
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> >
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Multiple tables with the same schema and prepare query performance

2010-10-29 Thread Pavel Ivanov
> order of 10 to 100 of these tables. When doing operations on these tables, I
> want to avoid having to do a prepare_query every time for performance
> reasons.

Did you measure your performance and find that prepare_query is a bottleneck?

> Since the tables have exactly the same schema, in theory I should
> be able to use the same prepared statement on any one of those tables. Any
> ideas on if this is possible?

No, it's not possible, because prepared query contains information
about the tables used by the query.
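Since a statement is compiled against a specific table, option 1 from the original message (caching one prepared statement per table) is the usual workaround. A sketch in Python's sqlite3, with hypothetical table names (table names cannot be bound as parameters, so one SQL string is built and reused per table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
for name in ("events_a", "events_b"):          # same schema, dynamic names
    con.execute(f"CREATE TABLE {name} (k INTEGER PRIMARY KEY, v TEXT)")
    con.execute(f"INSERT INTO {name} VALUES (1, ?)", (name,))

_sql_cache = {}

def lookup(con, table, key):
    # One SQL string per table; sqlite3 reuses the prepared statement
    # internally when it sees identical SQL text again.
    sql = _sql_cache.setdefault(table, f"SELECT v FROM {table} WHERE k = ?")
    return con.execute(sql, (key,)).fetchone()
```

With 10 to 100 tables the cache stays small, and each table's statement is compiled exactly once.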


Pavel

On Fri, Oct 29, 2010 at 11:52 AM, john Papier  wrote:
> Hi,
>
> I need to create multiple tables all having the same schema. The
> number/names of the tables will be dynamic. There would be somewhere in the
> order of 10 to 100 of these tables. When doing operations on these tables, I
> want to avoid having to do a prepare_query every time for performance
> reasons. Since the tables have exactly the same schema, in theory I should
> be able to use the same prepared statement on any one of those tables. Any
> ideas on if this is possible?
>
> The other options I'm looking at are:
> 1. dynamically caching prepared queries
> 2. use only one table with an extra index. With only 10-100 tables merged
> into one, I figure the extra index would take an extra log100 -> ~10 lookups
> in the binary search. The thing is, I need to keep a cursor to where in the
> table I was last searching, so I can continue the search from where I left
> off, which is why using multiple tables was preferable; i.e., I can track
> the row_id, and then resume the search there (WHERE row_id > cursor_row_id).
>
> Any ideas?
>
> Thanks,
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow SELECT Statements in Large Database file

2010-10-29 Thread Gabor Grothendieck
On Fri, Oct 29, 2010 at 12:07 PM, Jonathan Haws
 wrote:
> All,
>
> I am having some problems with a new database that I am trying to set up.
>
> This database is a large file (about 8.7 GB without indexing).  The problem
> I am having is that SELECT statements are extremely slow.  The goal is to
> get the database file up and running for an embedded application (we have
> tons of storage space so the size is not a problem).
>
> Here is the schema layout:
>
> CREATE table Dted (dtedkey INTEGER PRIMARY KEY, dted_lat
> INTEGER, dted_lon INTEGER, dted_alt FLOAT);
>
> We look up dted_alt based on dted_lat and dted_lon.  Here is our SELECT
> statement:
>
> SELECT dted_alt FROM Dted WHERE (dted_lat=%d AND dted_lon=%d);
>
> The numbers fed to dted_lat and dted_lon are typically on the order of
> 37 and -111.
>
> What can we do to speed up our SELECT statements?  Minutes is
> unacceptable for our application.  We were hoping we could run somewhere
> on the order of 500 queries per second and get valid results back.
>
> I am not an SQL expert, but I was reading about indexes and that it is
> best to have a specific index per SELECT.  Since we only have one,
> this is the index I am creating now (it has been creating this index on my
> machine for the past 10 minutes now):
>
> CREATE INDEX idx_ll ON Dted(dted_lat,dted_lon);
>
> Is that a good index for my SELECT?  Will it speed up the accesses?
>
> Any thoughts?
>
>

Google for the spatialite extension.

-- 
Statistics & Software Consulting
GKX Group, GKX Associates Inc.
tel: 1-877-GKX-GROUP
email: ggrothendieck at gmail.com
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow SELECT Statements in Large Database file

2010-10-29 Thread Jim Wilcoxson
Jonathan - 500 queries per second is 2ms per query.  You'll have a hard time
getting that kind of speed for random queries with any rotating media.  Your
database needs to be in memory - all of it, not just indexes - or on a flash
drive.

If your queries are not random but are somehow related, eg, you are doing
thousands of queries within a small area, and the db records were also
inserted by area, you may have better luck.
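One way to get the whole database into memory, sketched with Python's sqlite3 and a hypothetical file name, is the backup API: copy every page into a `:memory:` database once at startup, then serve all queries from RAM.

```python
import sqlite3, tempfile, os

# Build a small on-disk database standing in for the real terrain file.
path = os.path.join(tempfile.mkdtemp(), "dted.db")
disk = sqlite3.connect(path)
disk.execute("CREATE TABLE Dted (dted_lat INTEGER, dted_lon INTEGER, dted_alt FLOAT)")
disk.execute("INSERT INTO Dted VALUES (37, -111, 1234.5)")
disk.commit()

# Copy every page into an in-memory database (Connection.backup, Python 3.7+);
# after this, the query path touches no disk at all.
mem = sqlite3.connect(":memory:")
disk.backup(mem)
disk.close()
row = mem.execute("SELECT dted_alt FROM Dted WHERE dted_lat=? AND dted_lon=?",
                  (37, -111)).fetchone()
```

An 8.7 GB file obviously only fits if the target has the RAM for it; otherwise the SSD route discussed above is the fallback.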

Jim
--
HashBackup: easy onsite and offsite Unix backup
http://sites.google.com/site/hashbackup


On Fri, Oct 29, 2010 at 12:07 PM, Jonathan Haws
wrote:

> All,
>
> I am having some problems with a new database that I am trying to set up.
>
> This database is a large file (about 8.7 GB without indexing).  The problem
> I am having is that SELECT statements are extremely slow.  The goal is to
> get the database file up and running for an embedded application (we have
> tons of storage space so the size is not a problem).
>
> Here is the schema layout:
>
> CREATE table Dted (dtedkey INTEGER PRIMARY KEY, dted_lat
> INTEGER, dted_lon INTEGER, dted_alt FLOAT);
>
> We look up dted_alt based on dted_lat and dted_lon.  Here is our SELECT
> statement:
>
> SELECT dted_alt FROM Dted WHERE (dted_lat=%d AND dted_lon=%d);
>
> The numbers fed to dted_lat and dted_lon are typically on the order of
> 37 and -111.
>
> What can we do to speed up our SELECT statements?  Minutes is
> unacceptable for our application.  We were hoping we could run somewhere
> on the order of 500 queries per second and get valid results back.
>
> I am not an SQL expert, but I was reading about indexes and that it is
> best to have a specific index per SELECT.  Since we only have one,
> this is the index I am creating now (it has been creating this index on my
> machine for the past 10 minutes now):
>
> CREATE INDEX idx_ll ON Dted(dted_lat,dted_lon);
>
> Is that a good index for my SELECT?  Will it speed up the accesses?
>
> Any thoughts?
>
>
> Thanks!
> --
> Jonathan
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow SELECT Statements in Large Database file

2010-10-29 Thread Black, Michael (IS)
You should see a noticeable increase in speed with the index you show.
 
You'll want to make sure your data has been inserted in dted_lat order or 
dted_lon order.
That way all the matching records will be colocated on disk and in memory.   
Otherwise it's going to do a lot of paging to get your records.
 
#1 How many points do you have?
#2 How many points are returned from a select?
#3 Have you considered just building a hash table to do this and skip the 
database?  If you're only doing this one select you don't need the overhead of 
a database.
 
Michael D. Black
Senior Scientist
Advanced Analytics Directorate
Northrop Grumman Information Systems
 



From: sqlite-users-boun...@sqlite.org on behalf of Jonathan Haws
Sent: Fri 10/29/2010 11:07 AM
To: sqlite-users@sqlite.org
Subject: EXTERNAL:[sqlite] Slow SELECT Statements in Large Database file



All,

I am having some problems with a new database that I am trying to set up.

This database is a large file (about 8.7 GB without indexing).  The problem
I am having is that SELECT statements are extremely slow.  The goal is to
get the database file up and running for an embedded application (we have
tons of storage space so the size is not a problem).

Here is the schema layout:

CREATE table Dted (dtedkey INTEGER PRIMARY KEY, dted_lat
INTEGER, dted_lon INTEGER, dted_alt FLOAT);

We look up dted_alt based on dted_lat and dted_lon.  Here is our SELECT
statement:

SELECT dted_alt FROM Dted WHERE (dted_lat=%d AND dted_lon=%d);

The numbers fed to dted_lat and dted_lon are typically on the order of
37 and -111.

What can we do to speed up our SELECT statements?  Minutes is
unacceptable for our application.  We were hoping we could run somewhere
on the order of 500 queries per second and get valid results back.

I am not an SQL expert, but I was reading about indexes and that it is
best to have a specific index per SELECT.  Since we only have one,
this is the index I am creating now (it has been creating this index on my
machine for the past 10 minutes now):

CREATE INDEX idx_ll ON Dted(dted_lat,dted_lon);

Is that a good index for my SELECT?  Will it speed up the accesses?

Any thoughts?


Thanks!
--
Jonathan
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users




Re: [sqlite] Multiple tables with the same schema and prepare query performance

2010-10-29 Thread john Papier
Retrieving rows in ROWID order is possible and efficient if your query plan is
using the special ROWID:

SELECT ROWID, col1, col2 FROM Table ORDER BY ROWID;

The ORDER BY is a no-op in this case (see
http://www.sqlite.org/queryplanner.html).
Say you iterate over 10 rows and later want to continue where you left off;
you can just do the following:

SELECT ROWID, col1, col2 FROM Table WHERE ROWID > cursor_row_id ORDER BY
ROWID;

and this would be efficient
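That resume-by-ROWID pattern (keyset pagination) can be sketched in Python's sqlite3; the table and column names here are hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (v INTEGER)")
con.executemany("INSERT INTO t (v) VALUES (?)", [(i,) for i in range(30)])

def page(con, after_rowid, size):
    # Resume strictly after the last ROWID seen; ORDER BY rowid is served
    # straight from the table's b-tree, so no sorting pass is needed.
    return con.execute(
        "SELECT rowid, v FROM t WHERE rowid > ? ORDER BY rowid LIMIT ?",
        (after_rowid, size)).fetchall()

first = page(con, 0, 10)               # first batch of rows
second = page(con, first[-1][0], 10)   # resumes where the first left off
```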

In the more general case with a WHERE clause, which I believe you were
referring to, rows could indeed come back in any order; you would then need an
explicit ORDER BY, which would make the query more expensive.



On Fri, Oct 29, 2010 at 9:11 AM, Simon Slavin  wrote:

>
> On 29 Oct 2010, at 4:52pm, john Papier wrote:
>
> > The thing is, I need to keep a cursor to where in the
> > table I was last searching, so I can continue the search from where I
> left
> > off, which is why using multiple tables was preferable; i.e., I can track
> > the row_id, and then resume the search there (WHERE row_id >
> cursor_row_id).
>
> This does not work in SQL.  There is no concept of 'the order of rows in
> the table' in SQL and an identical SELECT command is perfectly free to
> return rows in a different order the second time around.
>
> If you want to maintain your own row orders, create and store your own row
> numbers which do whatever /you/ want.  The code for doing this doesn't
> change much whether you have one huge table or many small ones.  Either you
> create another table to store the maximum current row numbers in or you use
> the max(id) function to figure out the biggest number you've used so far.
>
> I suspect that the most efficient way to do this would be to have one big
> table rather than many small ones, but I have no proof for your particular
> application.
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Pages with all zeros

2010-10-29 Thread Scott Weigand

Hi List,
I'm looking through some SQLite databases that are used by Google Chrome on Mac 
OS X 10.6.4 and find some cases where entire pages are filled with 0x00.  I 
checked and there are no corresponding journal files.  I also verified that 
Google Chrome was not running at the time.  Would there be any documented 
instances where this is supposed to happen?
Thanks,
Scott
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Slow SELECT Statements in Large Database file

2010-10-29 Thread Simon Slavin

On 29 Oct 2010, at 5:07pm, Jonathan Haws wrote:

> SELECT dted_alt FROM Dted WHERE (dted_lat=%d AND dted_lon=%d);
> 
> The numbers fed to dted_lat and dted_lon are typically on the order of
> 37 and -111.
> 
> What can we do to speed up our SELECT statements?  Minutes is
> unacceptable for our application.  We were hoping we could run somewhere
> on the order of 500 queries per second and get valid results back.
> 
> I am not an SQL expert, but I was reading about indexes and that it is
> best to have a specific index per SELECT.  Since we only have one,
> this is the index I am creating now (it has been creating this index on my
> machine for the past 10 minutes now):
> 
> CREATE INDEX idx_ll ON Dted(dted_lat,dted_lon);

Yes, that's a good index for that particular SELECT.  It will make that SELECT 
return results in tiny fractions of a second.  And with an 8.7 Gig database it 
will take some time to create the index: perhaps even hours, depending on your 
platform.

Don't worry about it: you seem to be doing the right thing.  Just let it run.

Simon.
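Once the index exists, EXPLAIN QUERY PLAN is a quick way to confirm it is actually used. A sketch with Python's sqlite3, using the schema from the original post:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE Dted (dtedkey INTEGER PRIMARY KEY,
               dted_lat INTEGER, dted_lon INTEGER, dted_alt FLOAT)""")
con.execute("CREATE INDEX idx_ll ON Dted(dted_lat, dted_lon)")
con.execute("INSERT INTO Dted VALUES (1, 37, -111, 1234.5)")

# The last column of each plan row describes the strategy; a SEARCH using
# idx_ll (rather than a SCAN of Dted) means the index is doing its job.
plan = con.execute("""EXPLAIN QUERY PLAN
    SELECT dted_alt FROM Dted WHERE dted_lat=? AND dted_lon=?""",
    (37, -111)).fetchall()
detail = plan[0][-1]
```

The exact wording of the plan text varies between SQLite versions, but the index name appears in it whenever the index is chosen.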
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Multiple tables with the same schema and prepare query performance

2010-10-29 Thread Simon Slavin

On 29 Oct 2010, at 4:52pm, john Papier wrote:

> The thing is, I need to keep a cursor to where in the
> table I was last searching, so I can continue the search from where I left
> off, which is why using multiple tables was preferable; i.e., I can track
> the row_id, and then resume the search there (WHERE row_id > cursor_row_id).

This does not work in SQL.  There is no concept of 'the order of rows in the 
table' in SQL and an identical SELECT command is perfectly free to return rows 
in a different order the second time around.

If you want to maintain your own row orders, create and store your own row 
numbers which do whatever /you/ want.  The code for doing this doesn't change 
much whether you have one huge table or many small ones.  Either you create 
another table to store the maximum current row numbers in or you use the 
max(id) function to figure out the biggest number you've used so far.
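
A minimal sketch of that approach (hypothetical table and column names): an
explicit id column plus ORDER BY gives a well-defined order, and WHERE id > ?
gives a well-defined resume point:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, payload TEXT)")
con.executemany("INSERT INTO items (payload) VALUES (?)",
                [("row %d" % n,) for n in range(1, 11)])

def scan_batch(cursor_row_id, batch_size=4):
    # ORDER BY id makes the scan order explicit; WHERE id > ? resumes it.
    return con.execute(
        "SELECT id, payload FROM items WHERE id > ? ORDER BY id LIMIT ?",
        (cursor_row_id, batch_size)).fetchall()

batch = scan_batch(0)                # first four rows, ids 1..4
cursor_row_id = batch[-1][0]         # remember where we left off
batch2 = scan_batch(cursor_row_id)   # next four rows, ids 5..8
```

Without the ORDER BY, the resume point would be meaningless, which is
exactly the point Simon makes above.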

I suspect that the most efficient way to do this would be to have one big table 
rather than many small ones, but I have no proof for your particular 
application.

Simon.


[sqlite] Slow SELECT Statements in Large Database file

2010-10-29 Thread Jonathan Haws
All,

I am having some problems with a new database that I am trying to setup.

This database is a large file (about 8.7 GB without indexing).  The problem
I am having is that SELECT statements are extremely slow.  The goal is to
get the database file up and running for an embedded application (we have
tons of storage space so the size is not a problem).

Here is the schema layout:

CREATE table Dted (dtedkey INTEGER PRIMARY KEY, dted_lat
INTEGER, dted_lon INTEGER, dted_alt FLOAT);

We lookup dted_alt based on dted_lat and dted_lon.  Here is our SELECT
statement:

SELECT dted_alt FROM Dted WHERE (dted_lat=%d AND dted_lon=%d);

The numbers fed to dted_lat and dted_lon are typically on the order of
37 and -111.

What can we do to speed up our SELECT statements?  Minutes is
unacceptable for our application.  We were hoping we could run somewhere
on the order of 500 queries per second and get valid results back.

I am not an SQL expert, but I have read that it is
best to have a specific index per SELECT.  Since we only have one SELECT,
this is the index I am creating now (my machine has been building it for
the past 10 minutes now):

CREATE INDEX idx_ll ON Dted(dted_lat,dted_lon);

Is that a good index for my SELECT?  Will it speed up the accesses?

Any thoughts?


Thanks!
--
Jonathan


[sqlite] WAL mode enhancement suggestion

2010-10-29 Thread Bob Smith
About a week ago, I posted this inside a response on another thread.
It may have gotten lost in the mix. Curious what folks think.

Considering things like bulk deletes (and updates) potentially really
growing a WAL file to be quite large along with having a system that
is constantly running and inserting data into the database over a long
period of time with reads also coming in,  I wonder about the value of
adding an optional feature to sqlite.

What if there was an API to specify a maximum desired limit on the
physical size of the WAL file?  Whenever a checkpoint was 100%
successful and it was determined that the entire WAL had been
transferred into the database and synced, and no readers were making
use of the WAL, then in addition to the writer rewinding the WAL back
to the beginning, the WAL file would be truncated IF this option was
configured and the physical size of the WAL file was greater than the
specified value.

This seems like it would be simple to implement without costing
anything by default to those who don't configure it.  If I were to
use such a feature and were to do the default 1000-page checkpoint
(which seems to correspond to a little over a 1 MB WAL file size), I
would make the physical limit something like 50 MB or 100 MB.  Under
normal conditions the limit would never be reached anyway, but in
the case where a large WAL file did get created at some point, this
could be used to get it truncated.  Thoughts?
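
For comparison, SQLite has a knob in this direction: PRAGMA
journal_size_limit, which in WAL mode bounds the size the -wal file is
left at after a checkpoint.  A sketch of the combination (note: the
wal_checkpoint(TRUNCATE) form shown here arrived in a release newer than
the 3.7.3 current when this was posted, so this assumes a modern SQLite):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("PRAGMA journal_mode=WAL")
# Bound the WAL file left on disk after a checkpoint to ~1 MB.
con.execute("PRAGMA journal_size_limit=1048576")

con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
con.commit()

# Checkpoint everything and truncate the WAL file back down.
con.execute("PRAGMA wal_checkpoint(TRUNCATE)")
wal_size = os.path.getsize(path + "-wal")
```

This only helps when the checkpoint can run to completion with no readers
holding the WAL open, which is the same precondition described above.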

Best Regards,

Bob


[sqlite] Multilple tables with the same schema and prepare query performance

2010-10-29 Thread john Papier
Hi,

I need to create multiple tables, all having the same schema.  The
number/names of the tables will be dynamic.  There would be somewhere on the
order of 10 to 100 of these tables.  When doing operations on these tables, I
want to avoid having to do a prepare_query every time, for performance
reasons.  Since the tables have exactly the same schema, in theory I should
be able to use the same prepared statement on any one of those tables.  Any
ideas on whether this is possible?

The other options I'm looking at are:
1. dynamically caching prepared queries
2. using only one table with an extra index.  With only 10-100 tables merged
into one, I figure the extra index would cost roughly log2(100) ≈ 7 extra
comparisons in the binary search.  The thing is, I need to keep a cursor to
where in the table I was last searching, so I can continue the search from
where I left off, which is why using multiple tables was preferable; i.e., I
can track the row_id and then resume the search there (WHERE row_id >
cursor_row_id).
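
Option 1 can be sketched in a few lines: cache one query string per table
name, since the table name is baked into the SQL text.  (Hypothetical table
and column names; note that Python's sqlite3 module also keeps its own
internal per-connection prepared-statement cache, so this only illustrates
the idea.)

```python
import sqlite3

con = sqlite3.connect(":memory:")
for name in ("events_a", "events_b"):
    con.execute(
        "CREATE TABLE %s (row_id INTEGER PRIMARY KEY, payload TEXT)" % name)
con.execute("INSERT INTO events_a (payload) VALUES ('hello')")

# One cached query per table; the cache is keyed by table name because
# a prepared statement is tied to the table named in its SQL.
_query_cache = {}

def select_after(table, cursor_row_id):
    sql = _query_cache.get(table)
    if sql is None:
        sql = ("SELECT row_id, payload FROM %s "
               "WHERE row_id > ? ORDER BY row_id" % table)
        _query_cache[table] = sql
    return con.execute(sql, (cursor_row_id,)).fetchall()

rows = select_after("events_a", 0)
```

With 10-100 tables the cache stays small, and each table's statement is
prepared at most once.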

Any ideas?

Thanks,


Re: [sqlite] Fw: sqlite3 support

2010-10-29 Thread Roger Binns
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 10/28/2010 11:47 PM, lizhe wrote:
> I am writing to enquire about a bug we found. 

http://www.chiark.greenend.org.uk/~sgtatham/bugs-cn.html

> I would like get support for SQLite ?

You will need to do what the page above says in order for anyone to
reproduce your problem.  If you want free support from others on this
mailing list then it is essential.

You can also get paid support for SQLite:

  http://www.hwaci.com/sw/sqlite/prosupport.html

> "The database disk image is malformed"

Also make sure you have read this:

  http://www.sqlite.org/lockingv3.html#how_to_corrupt

Roger
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.10 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iEYEARECAAYFAkzK18AACgkQmOOfHg372QSZ6ACfeT7PkNx3WQW+0uTUtnavsNfj
VLMAoJDIjiN9V9RpIqBnyMFcJrbubZGE
=Lu5C
-END PGP SIGNATURE-


Re: [sqlite] Sqlite with VS 2008

2010-10-29 Thread Black, Michael (IS)
Thanks for pointing out this project...I may be able to use this in my current 
effort.
 
Download the amalgamation
http://www.sqlite.org/sqlite-amalgamation-3_7_3.zip
Download the windows binaries
http://www.sqlite.org/sqlitedll-3_7_3.zip
 
Extract sqlite3.h from the amalgamation, and the sqlite3.dll and sqlite3.def 
files from the binaries, somewhere -- like maybe in your debea directory.
In my case it was a subdirectory under the debea svn checkout
D:\Projects\debea\sqlite3
 
 
From the VS command window, build the .lib file:
cd \projects\debea\sqlite3
lib /def:sqlite3.def
mkdir lib
mkdir include
move *.dll lib
move *.lib lib
move *.h include
Setup your environment variable
set SQLITE3_PATH=D:\Projects\debea\sqlite3
 
Modify the debea_dev_build.bat
SET DBA_PLUGINS=CSV=1 SQLITE3=1 ODBC=0 PGSQL=0 XML=0
SET DEBEA_CONFIGURE_FLAGS=DEBUG=0 TESTS=0
set SOURCES_DIR=D:\Projects\debea
SET DEVEL=D:\Projects\debeabuild
SET BUILD_DIR=D:\Projects\debeabuild
 
Comment out all the lines in the wxdba library build (I don't need it -- I 
don't know about you -- if you want the WX stuff you need to set WXWIN)
REM build wxdba library - needs wxWidgets to build
 
Build it
 
You should end up with dba.lib in your DEVEL\lib directory
 
You'll need to put the sqlite3.dll in your path somewhere so your other 
projects can find it.
 
Michael D. Black
Senior Scientist
Advanced Analytics Directorate
Northrop Grumman Information Systems
 



From: sqlite-users-boun...@sqlite.org on behalf of Linda Rawson
Sent: Thu 10/28/2010 12:32 PM
To: sqlite-users@sqlite.org
Subject: EXTERNAL:[sqlite] Sqlite with VS 2008



I am trying to use sqlite in the debea database library.

http://debea.net/trac/wiki/CompileSvn

It simply says compile sqlite3.

I am however a VS2008 user.  I cannot find a way to compile it so I can
reference the include and library directories.  Can you help?

Linda Rawson
Sensory Technology Consultants
  http://www.sensorytech.net 
 
Phone:  801-791-9222 | Fax:  888-294-6706
Email:   linda.raw...@sensorytech.net

A Woman-Owned, 8(a) Minority, Small Business

The NATURAL PROGRESSION to technology, consulting and system integration...





Re: [sqlite] Sqlite with VS 2008

2010-10-29 Thread Wilson, Ronald
> I am trying to use sqlite in the debea database library.
> 
> http://debea.net/trac/wiki/CompileSvn
> 
> It simply says compile sqlite3.
> 
> I am however a VS2008 user.  I cannot find a way to compile it so I can
> reference the include and library directories.  Can you help?
> 
> Linda Rawson

Linda,

The only way that I know of to compile SQLite in VS 2008 is to use the 
amalgamation download.  You can find it here:  
http://www.sqlite.org/download.html.  This will be only a single .c file.

I'm not sure this will meet your needs if you need to reference the include 
folder.  You may need to explore building the TEA tarball.  I've never tried it 
in Windows, but it looks like there is guidance on the TEA website: 
http://www.tcl.tk/doc/tea/.

Has anyone tried this in Windows?

RW

Ron Wilson, Engineering Project Lead
(o) 434.455.6453, (m) 434.851.1612, www.harris.com

HARRIS CORPORATION   |   RF Communications Division 
assuredcommunications(tm)


[sqlite] Fw: sqlite3 support

2010-10-29 Thread lizhe

- Original Message - 
From: lizhe 
To: d...@hwaci.com 
Cc: us...@sqlite.org 
Sent: Friday, October 29, 2010 2:41 PM
Subject: sqlite3 support


Hello:
We are using SQLite version 3.6.23.1, built with the Red Hat compiler and 
running on Linux (kernel 2.6).  We have now run into a problem: after running 
for a long time, queries against the database return "The 
database disk image is malformed".
What causes this, how can it be resolved, and would using the latest version, 
3.7.2, avoid the problem?


Dear Sir:
 
I am writing to enquire about a bug we found.  The SQLite3 database we use is 
version 3.6.23.1, built with the Red Hat compiler; the hardware runs Linux 
(kernel 2.6).  We have a problem: after running for a long time, SELECT 
queries against the database return "The database disk image is malformed".  
How can we solve this?  Would using version 3.7.2 avoid it?  Do you have any 
ideas about this?  I would also like to know how to get support for SQLite.
 Wish you happiness!


Yours sincerely,

  Lizhe
  


[sqlite] Sqlite with VS 2008

2010-10-29 Thread Linda Rawson
I am trying to use sqlite in the debea database library.

http://debea.net/trac/wiki/CompileSvn

It simply says compile sqlite3.

I am however a VS2008 user.  I cannot find a way to compile it so I can
reference the include and library directories.  Can you help?

Linda Rawson
Sensory Technology Consultants
  http://www.sensorytech.net
Phone:  801-791-9222 | Fax:  888-294-6706
Email:   linda.raw...@sensorytech.net

A Woman-Owned, 8(a) Minority, Small Business

The NATURAL PROGRESSION to technology, consulting and system integration... 
