[sqlite] Slow SELECT Statements in Large Database file

2010-10-29 Thread Jonathan Haws
All, I am having some problems with a new database that I am trying to set up. This database is a large file (about 8.7 GB without indexing). The problem I am having is that SELECT statements are extremely slow. The goal is to get the database file up and running for an embedded application (we
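
The usual first remedy for a slow point lookup on a table this size is an index on the WHERE-clause columns. A minimal sketch, assuming a schema along the lines of terrain(lat, lon, alt) and a lookup by latitude/longitude; the post does not show the actual schema, so all names here are hypothetical:

#include <stdio.h>
#include "sqlite3.h"

int main(void)
{
    sqlite3 *db;
    char *err = NULL;

    if (sqlite3_open("terrain.db", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return 1;
    }

    /* Index the WHERE-clause columns so a point lookup becomes an index
     * search instead of a scan of the whole 8.7 GB table.  Including the
     * value column makes the index covering for this query. */
    if (sqlite3_exec(db,
            "CREATE INDEX IF NOT EXISTS idx_terrain_latlon "
            "ON terrain(lat, lon, alt);",
            NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "CREATE INDEX failed: %s\n", err);
        sqlite3_free(err);
    }

    sqlite3_close(db);
    return 0;
}

Building the index on a file this large takes time and adds to its size, but afterwards EXPLAIN QUERY PLAN should report an index search rather than a full table scan for the point query.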

Re: [sqlite] Slow SELECT Statements in Large Database file

2010-10-29 Thread Jonathan Haws
...@sqlite.org [sqlite-users-boun...@sqlite.org] on behalf of Simon Slavin [slav...@bigfraud.org] Sent: Friday, October 29, 2010 10:14 AM To: General Discussion of SQLite Database Subject: Re: [sqlite] Slow SELECT Statements in Large Database file On 29 Oct 2010, at 5:07pm, Jonathan Haws wrote

Re: [sqlite] Slow SELECT Statements in Large Database file

2010-10-29 Thread Jonathan Haws
. Michael D. Black Senior Scientist Advanced Analytics Directorate Northrop Grumman Information Systems From: sqlite-users-boun...@sqlite.org on behalf of Jonathan Haws Sent: Fri 10/29/2010 11:07 AM To: sqlite-users@sqlite.org Subject: EXTERNAL:[sqlite] Slow SELECT

Re: [sqlite] Slow SELECT Statements in Large Database file

2010-10-29 Thread Jonathan Haws
inserted by area, you may have better luck. Jim -- HashBackup: easy onsite and offsite Unix backup http://sites.google.com/site/hashbackup On Fri, Oct 29, 2010 at 12:07 PM, Jonathan Haws <jonathan.h...@sdl.usu.edu> wrote: > All, > > I am having some problems with a new database t

Re: [sqlite] Slow SELECT Statements in Large Database file

2010-10-29 Thread Jonathan Haws
AM, Jonathan Haws wrote: > We have a whole ton of points (3600^2) and a single select returns a single > point - though I may modify the select to return the four corners of the box > corresponding to the point that was entered. Are you aware that SQLite has an RTree extension (writt
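
The R*Tree extension mentioned above stores one bounding box per row and answers containment queries efficiently, which matches the "four corners of the box" use case. A sketch under assumptions: a hypothetical grid_rtree table holding one grid cell per row, and a library built with SQLITE_ENABLE_RTREE:

#include <stdio.h>
#include "sqlite3.h"

int main(void)
{
    sqlite3 *db;
    sqlite3_stmt *stmt;
    char *err = NULL;

    if (sqlite3_open("terrain.db", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return 1;
    }

    /* One row per grid cell: an id plus the cell's bounding box. */
    if (sqlite3_exec(db,
            "CREATE VIRTUAL TABLE grid_rtree "
            "USING rtree(id, min_lat, max_lat, min_lon, max_lon);",
            NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "rtree create failed: %s\n", err);
        sqlite3_free(err);
    }

    /* Find the cell(s) whose box contains the query point. */
    if (sqlite3_prepare_v2(db,
            "SELECT id, min_lat, max_lat, min_lon, max_lon FROM grid_rtree "
            "WHERE min_lat <= ?1 AND max_lat >= ?1 "
            "  AND min_lon <= ?2 AND max_lon >= ?2;",
            -1, &stmt, NULL) == SQLITE_OK) {
        sqlite3_bind_double(stmt, 1, 41.75);    /* query latitude  */
        sqlite3_bind_double(stmt, 2, -111.80);  /* query longitude */
        while (sqlite3_step(stmt) == SQLITE_ROW)
            printf("cell %lld\n", (long long)sqlite3_column_int64(stmt, 0));
        sqlite3_finalize(stmt);
    }

    sqlite3_close(db);
    return 0;
}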

Re: [sqlite] Slow SELECT Statements in Large Database file

2010-11-01 Thread Jonathan Haws
From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on behalf of Jonathan Haws [jonathan.h...@sdl.usu.edu] Sent: Friday, October 29, 2010 11:50 AM To: General Discussion of SQLite Database Subject: Re: [sqlite] Slow SELECT Statements in Large Database file I agree

[sqlite] Accessing multiple rows at once via a select statement

2010-12-06 Thread Jonathan Haws
I am having some trouble figuring out how I can access multiple rows in a table. For example, I have a table on which I am running "SELECT * FROM some_table;". For each row in the table, my callback function is called. What I need to do is have the callback write the data in the rows into
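
One common way to do this with sqlite3_exec() is to hand the callback a small context struct that carries the destination array and a running count. A sketch, assuming two REAL columns named lat and lon in a hypothetical some_table (the post does not show the schema):

#include <stdio.h>
#include <stdlib.h>
#include "sqlite3.h"

/* Destination buffer handed to the callback through the 4th argument of
 * sqlite3_exec(): a flat array plus a running row count. */
struct row_buf {
    double *data;   /* 2 doubles per row: lat, lon */
    int     count;  /* rows stored so far */
    int     cap;    /* rows the buffer can hold */
};

static int collect_cb(void *ctx, int ncols, char **vals, char **names)
{
    struct row_buf *buf = ctx;
    (void)names;
    if (ncols < 2 || buf->count >= buf->cap)
        return 1;                               /* non-zero aborts the query */
    buf->data[2 * buf->count]     = vals[0] ? atof(vals[0]) : 0.0;
    buf->data[2 * buf->count + 1] = vals[1] ? atof(vals[1]) : 0.0;
    buf->count++;
    return 0;
}

int main(void)
{
    sqlite3 *db;
    char *err = NULL;
    struct row_buf buf = { malloc(2 * 1000 * sizeof(double)), 0, 1000 };

    if (!buf.data || sqlite3_open("terrain.db", &db) != SQLITE_OK)
        return 1;

    if (sqlite3_exec(db, "SELECT lat, lon FROM some_table;",
                     collect_cb, &buf, &err) != SQLITE_OK) {
        fprintf(stderr, "query failed: %s\n", err ? err : "aborted");
        sqlite3_free(err);
    }
    printf("collected %d rows\n", buf.count);

    free(buf.data);
    sqlite3_close(db);
    return 0;
}

The same loop can also be written with sqlite3_prepare_v2()/sqlite3_step() and no callback at all, which is often easier to reason about.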

Re: [sqlite] Accessing multiple rows at once via a select statement

2010-12-06 Thread Jonathan Haws
: [sqlite] Accessing multiple rows at once via a select statement Quoth Jonathan Haws <jonathan.h...@sdl.usu.edu>, on 2010-12-06 22:51:16 +: > As an argument to the callback, I pass the address of the array. > However, I cannot change that address and have it persist through to >
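
The persistence problem quoted here is the usual pass-by-value issue: the callback receives a copy of the pointer, so advancing that copy is invisible to the caller. Passing the address of the pointer makes the update stick. A sketch with illustrative names only (cursor, write_ptr, some_table are not from the original thread):

#include <stdlib.h>
#include "sqlite3.h"

/* The callback gets the ADDRESS of the caller's write pointer, so moving
 * it inside the callback is visible after sqlite3_exec() returns. */
struct cursor {
    double **dest;   /* address of the caller's array pointer */
    double  *end;    /* one past the last valid slot */
};

static int advance_cb(void *ctx, int ncols, char **vals, char **names)
{
    struct cursor *c = ctx;
    (void)names;
    if (ncols < 1 || *c->dest >= c->end)
        return 1;                        /* out of room: abort the query */
    **c->dest = vals[0] ? atof(vals[0]) : 0.0;
    (*c->dest)++;                        /* this update persists outside */
    return 0;
}

int main(void)
{
    sqlite3 *db;
    double storage[1000];
    double *write_ptr = storage;                    /* caller's pointer */
    struct cursor cur = { &write_ptr, storage + 1000 };

    if (sqlite3_open("terrain.db", &db) != SQLITE_OK)
        return 1;
    sqlite3_exec(db, "SELECT alt FROM some_table;", advance_cb, &cur, NULL);
    /* write_ptr now points one past the last value written. */
    sqlite3_close(db);
    return 0;
}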

[sqlite] Database sharing across processes

2012-07-05 Thread Jonathan Haws
I am fairly new to database development and I am working on an embedded system where we are utilizing SQLite to manage some files and other information that is being shared between processes. What I am doing is I have the SQLite amalgamation source code that I am compiling into each binary
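
Compiling the amalgamation into each binary works in itself: every process opens its own connection to the same database file, and SQLite coordinates the processes through locks on that file. A minimal sketch (the file name and table are assumptions), with a busy timeout so a writer waits for a competing lock instead of failing immediately:

#include <stdio.h>
#include "sqlite3.h"

int main(void)
{
    sqlite3 *db;
    char *err = NULL;

    /* Each process opens its own connection to the shared file. */
    if (sqlite3_open("/data/shared.db", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return 1;
    }

    /* Wait up to 5 seconds for a competing lock instead of returning
     * SQLITE_BUSY immediately. */
    sqlite3_busy_timeout(db, 5000);

    if (sqlite3_exec(db,
            "INSERT INTO files(name) VALUES('example.bin');",
            NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "write failed: %s\n", err);   /* e.g. still busy */
        sqlite3_free(err);
    }

    sqlite3_close(db);
    return 0;
}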

Re: [sqlite] Database sharing across processes

2012-07-07 Thread Jonathan Haws
Thanks, Pavel, for answering my questions! >> I am also thinking that I may want to make use of the sqlite3_unlock_notify() >> call to ensure that if I try to write to the database and it fails to get a >> lock, it will pend until it is available. However, I thought that a query >> would
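
One point worth keeping in mind about sqlite3_unlock_notify(): it only fires for SQLITE_LOCKED between shared-cache connections inside a single process, and it requires building with SQLITE_ENABLE_UNLOCK_NOTIFY. It does not help with SQLITE_BUSY caused by a lock held in another process; a busy handler or busy timeout is the tool for that. For the in-process case, the usual pattern is a helper that blocks on a condition variable until the notification arrives, roughly as in this sketch:

#include <pthread.h>
#include "sqlite3.h"

/* Shared between the waiting thread and the notification callback. */
typedef struct {
    int             fired;
    pthread_mutex_t mutex;
    pthread_cond_t  cond;
} UnlockWait;

static void unlock_cb(void **args, int nargs)
{
    int i;
    for (i = 0; i < nargs; i++) {
        UnlockWait *w = (UnlockWait *)args[i];
        pthread_mutex_lock(&w->mutex);
        w->fired = 1;
        pthread_cond_signal(&w->cond);
        pthread_mutex_unlock(&w->mutex);
    }
}

/* Call after a statement returns SQLITE_LOCKED, then retry the statement.
 * Blocks until the blocking (same-process, shared-cache) connection
 * finishes its transaction. */
static int wait_for_unlock(sqlite3 *db)
{
    UnlockWait w;
    int rc;

    w.fired = 0;
    pthread_mutex_init(&w.mutex, NULL);
    pthread_cond_init(&w.cond, NULL);

    rc = sqlite3_unlock_notify(db, unlock_cb, &w);
    if (rc == SQLITE_OK) {            /* SQLITE_LOCKED here means deadlock */
        pthread_mutex_lock(&w.mutex);
        while (!w.fired)
            pthread_cond_wait(&w.cond, &w.mutex);
        pthread_mutex_unlock(&w.mutex);
    }

    pthread_cond_destroy(&w.cond);
    pthread_mutex_destroy(&w.mutex);
    return rc;
}

After wait_for_unlock() returns SQLITE_OK the failed statement can simply be retried; SQLITE_LOCKED returned by sqlite3_unlock_notify() itself indicates a deadlock and should not be waited on.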