[sqlite] data type problem

2007-03-19 Thread qinligeng
If you create a table using the following statement (script generated from MS SQL
Server 2000):
CREATE TABLE [XTollData] (
 [DutyID] [char] (32) NOT NULL ,
 [CarNumber] [char] (10) NULL 
);

sqlite3_column_decltype() reports the declared type of DutyID as 'char' rather than 'char(32)'.
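For comparison, the declared type as it is recorded in the schema can be inspected with PRAGMA table_info; a minimal sketch using Python's built-in sqlite3 module, reusing the table from the post (the bracketed MS SQL syntax is dropped, since SQLite accepts the plain form):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE XTollData (DutyID char(32) NOT NULL, CarNumber char(10))")

# PRAGMA table_info reports each column as (cid, name, decltype, notnull, dflt, pk);
# the decltype column preserves the full declared type, including the length.
for cid, name, decltype, notnull, dflt, pk in con.execute("PRAGMA table_info(XTollData)"):
    print(name, decltype)
```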

Re: [sqlite] a problem trying to invoke sqlite3 commands from a C application

2007-03-19 Thread Dan Kennedy
On Tue, 2007-03-20 at 02:44 +0200, Rafi Cohen wrote:
> Hi, I'm calling system() from a C application in order to invoke the sqlite3
> shell on a database, then continue with the commands .separator "," and
> .import file table, all in a single string passed to system().
> sqlite3 is indeed invoked with the correct database, but the problem is
> that the .import command is treated as part of the .separator argument.
> There is just a blank between the two commands. Should there be a
> different separator between the two commands?

You probably need a newline following each individual dot-command:

  ".import file tbl\n.dump"


Dan.
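If shelling out to the sqlite3 binary keeps being fragile, the same comma-separated import can also be done without system() at all, from the host language; a hedged sketch using Python's csv and sqlite3 modules (the file contents and table layout here are invented for illustration):

```python
import csv, io, sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (a TEXT, b TEXT)")

# stand-in for the comma-separated file that .import would read
data = io.StringIO("x,1\ny,2\n")
with con:  # wraps the inserts in one transaction
    con.executemany("INSERT INTO tbl VALUES (?, ?)", csv.reader(data))
print(con.execute("SELECT count(*) FROM tbl").fetchone()[0])  # 2
```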



-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] Anyone Having problems with SQLite 3.13 on WinMobile?

2007-03-19 Thread Chris Hodges

Hello:

I recently upgraded to the latest version of SQLite 3.13 and ever since 
then I have been experiencing numerous SQLITE_MISUSE[21] error messages.  
But when I run the same code under Win32 or the PocketPC Emulator the 
code works fine.


Previously I was using SQLite 3.08 or 3.09 and all was well.  I know 
my question is somewhat nebulous, but is there perhaps something 
I need to do at compile time to work better with the PocketPC platform?


thanks!

Chris







Re: [sqlite] Question about speed of CASE WHEN

2007-03-19 Thread T

Hi RBS,

I use SQLite as a data manipulator, not as a database. I get data from a
server database, dump to SQLite, manipulate the data and finally dump to
Excel. As this is reporting software, speed is important, so I will go with
the fastest method.


OK, I have to ask. What do you then do with the data in Excel? I  
spend so much of my time with clients converting them from using  
spreadsheets (80% of the time when it's more appropriate) to using a  
database, that my ears prick up whenever I hear someone doing the  
reverse.


Can you create whatever facilities you're using in Excel, directly in  
the SQLite database (eg via CREATE VIEW)? Or is it just a case of  
needing to view the final data in a user friendly environment?


Tom





[sqlite] a problem trying to invoke sqlite3 commands from a C application

2007-03-19 Thread Rafi Cohen
Hi, I'm calling system() from a C application in order to invoke the sqlite3
shell on a database, then continue with the commands .separator "," and
.import file table, all in a single string passed to system().
sqlite3 is indeed invoked with the correct database, but the problem is
that the .import command is treated as part of the .separator argument.
There is just a blank between the two commands. Should there be a
different separator between the two commands? Can they be executed one
after the other in a single call to system(), or should I call system()
twice, once for each command?
Thanks, Rafi.


[sqlite] Using sqlite-3.3.3 on linux in mod_perl environment

2007-03-19 Thread Thomas Kwan

Hi there,

I want to see if there is anyone using sqlite-3.3.3 with perl-DBD-SQLite
in a mod_perl environment where multiple threads will be present.

The FAQ indicates the default sqlite3 does not have multi-threading
option enabled. But I want to stick with the system library as much as
possible. I am thinking of remedying the issue by doing a file lock
(i.e. flock) before calling the INSERTs. This will hurt performance a bit,
but at least it will not corrupt the database, which we have seen
happen a lot under load.

Any comments??

thanks




Re: [sqlite] Saving binary files

2007-03-19 Thread guenther
On Mon, 2007-03-19 at 01:46 +0200, Dimitris Servis wrote:
> 2007/3/19, guenther <[EMAIL PROTECTED]>:
> > On Sun, 2007-03-18 at 23:51 +0200, Dimitris Servis wrote:

> > > in my wildest dreams... if you read carefully, *each* file is about
> > > 100-200MB. I now end up with a collection of 100-200 of them and need to
> > > bundle them into one file
> >
> > Yes, I did read carefully. 100 (source) files, each 100 MByte, stuffed
> > into a single (target, database) file results into that database file
> > being 100*100 MByte. Considering "possibly 200 or more", this easily
> > could result in a single 64+ GByte file.
> >
> > So, in what way was this meant to be a response regarding my
> > concerns? ;)

> In the sense that the legacy code produces files ~100MB. The collection is
> not legacy, that's what I am trying to setup. Unless I don't understand what
> you mean

Yes, so you got some legacy app. Which produces new files. And your
approach is to stick all these files into a single file. Fine. Now,
according to what you outlined, the "collection" file is going to be
huge. The question is whether your legacy(?) environment can actually
handle that huge collection file (the SQLite database file).

If you cannot handle 40 GByte files, your approach will not work. If
you cannot handle files larger than 64 GByte, your approach is likely to
hit another wall soon.

Or, to put it in other words: Did you evaluate all existing limitations
other than "keep the legacy app"? Did you ever do a dry-run, before
starting to code the real project?

  guenther







Re: [sqlite] API enhancement

2007-03-19 Thread Ken

 It should save some time. How much is questionable.
 
 Why would sqlite have to bind the pointer-bound variables? Isn't the strategy 
of binding to associate a variable with a statement? Why should I have to 
continually re-associate the bindings with a statement that's already been 
prepared and bound, just to execute it again after a reset?
 
 I guess I'm a bit confused; I'll look at the bind code in sqlite some more.
 
 
 

Scott Hess <[EMAIL PROTECTED]> wrote:
I don't see how your modified version is any better than just putting
the sqlite3_bind_int() inside the loop.  You've superficially lifted
some code out of the loop, but sqlite3_step() is going to have to go
through and bind all of the "pointer bound" variables in your
suggested API, so it won't save you anything in the end.

-scott


On 3/19/07, ken-33  wrote:
>
> Any thoughts?
>
>
> ken-33 wrote:
> >
> > Question for the list,
> >
> >  I'd like to optimize my code, using the following pseudo code as an
> > example.
> >
> >  ===
> >  int i = 0 ;
> >  char str[20];
> >
> >  sqlite3_prepare_v2( "insert into t1 values (?,?)" )
> >  sqlite3_bind_int ( i )
> >  sqlite3_bind_text(str)
> >
> > BEGIN TRANSACTION
> >  For (i = 0; i < 10; i++) {
> >sqlite3_step (  );
> >sqlite3_reset( )
> >  }
> >   COMMIT TRANSACTION
> >  ==
> >
> >  However, the above code will fail to insert the values for i in the loop.
> > It will only insert the value 0, since that was the binding value...
> >
> >  An enhancement request would be to allow the user to bind the address to
> > the statement  objects. This would be a huge benefit from the standpoint
> > of fewer function calls to sqlite3_bind in the inside loop.
> >
> >  So maybe the following API:
> >
> >  sqlite3_pbind_int(sqlite3_stmt *, int, int * );
> >  sqlite3_pbind_int64(sqlite3_stmt *, int,  long long int * );
> >  sqlite3_pbind_double(sqlite3_stmt *, int, double *);
> >  sqlite3_pbind_text(sqlite3_stmt *, int, const char *, int *,
> > void(*)(void*));
> >  notice the text takes a pointer to the length...
> >  sqlite3_pbind_blob(sqlite3_stmt*, int, const char *, int *,
> > void(*)(void*));
> >
> >  Concept of SQLITE_STATIC vs SQLITE_TRANSIENT is implied that all data is
> > SQLITE_STATIC.
> >
> >  Regards,
> >  Ken
> >
> >
> >
> >
> >
>
> --
> View this message in context: 
> http://www.nabble.com/API-enhancement-tf3405347.html#a9562311
> Sent from the SQLite mailing list archive at Nabble.com.
>
>
>
>





Re: [sqlite] API enhancement

2007-03-19 Thread drh
ken-33 <[EMAIL PROTECTED]> wrote:
> Any thoughts?
> 

I would rather not complicate the existing API 
unnecessarily by adding bells and whistles that can
easily be implemented using simple wrappers.
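As an example of such a wrapper, a high-level binding's bulk-insert helper already gives the rebind-on-every-iteration behaviour; a sketch using Python's sqlite3 module (the table and values are invented, mirroring the pseudo code in the thread):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (a INTEGER, b TEXT)")

# executemany is exactly this kind of wrapper: one prepared statement,
# with the values re-bound on each iteration of the loop.
with con:  # BEGIN ... COMMIT
    con.executemany("INSERT INTO t1 VALUES (?, ?)",
                    ((i, "row%d" % i) for i in range(10)))
print(con.execute("SELECT count(*), sum(a) FROM t1").fetchone())  # (10, 45)
```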

> 
> ken-33 wrote:
> > 
> > Question for the list,
> >  
> >  I'd like to optimize my code, using the following pseudo code as an
> > example.
> >  
> >  ===
> >  int i = 0 ;
> >  char str[20];
> >  
> >  sqlite3_prepare_v2( "insert into t1 values (?,?)" )
> >  sqlite3_bind_int ( i )
> >  sqlite3_bind_text(str)
> >  
> > BEGIN TRANSACTION
> >  For (i = 0; i < 10; i++) {
> >sqlite3_step (  );
> >sqlite3_reset( )
> >  }
> >   COMMIT TRANSACTION
> >  ==
> >  
> >  However, the above code will fail to insert the values for i in the loop.
> > It will only insert the value 0, since that was the binding value...
> >  
> >  An enhancement request would be to allow the user to bind the address to
> > the statement  objects. This would be a huge benefit from the standpoint
> > of fewer function calls to sqlite3_bind in the inside loop.
> >  
> >  So maybe the following API:
> >  
> >  sqlite3_pbind_int(sqlite3_stmt *, int, int * );
> >  sqlite3_pbind_int64(sqlite3_stmt *, int,  long long int * );
> >  sqlite3_pbind_double(sqlite3_stmt *, int, double *);
> >  sqlite3_pbind_text(sqlite3_stmt *, int, const char *, int *,
> > void(*)(void*));
> >  notice the text takes a pointer to the length...
> >  sqlite3_pbind_blob(sqlite3_stmt*, int, const char *, int *,
> > void(*)(void*));
> >  
> >  Concept of SQLITE_STATIC vs SQLITE_TRANSIENT is implied that all data is
> > SQLITE_STATIC.
> >  
> >  Regards,
> >  Ken
> >  
> >  
> > 
> > 
> > 
> 
> -- 
> View this message in context: 
> http://www.nabble.com/API-enhancement-tf3405347.html#a9562311
> Sent from the SQLite mailing list archive at Nabble.com.
> 
> 
> 
> 
> .






Re: [sqlite] API enhancement

2007-03-19 Thread ken-33

Any thoughts?


ken-33 wrote:
> 
> Question for the list,
>  
>  I'd like to optimize my code, using the following pseudo code as an
> example.
>  
>  ===
>  int i = 0 ;
>  char str[20];
>  
>  sqlite3_prepare_v2( "insert into t1 values (?,?)" )
>  sqlite3_bind_int ( i )
>  sqlite3_bind_text(str)
>  
> BEGIN TRANSACTION
>  For (i = 0; i < 10; i++) {
>sqlite3_step (  );
>sqlite3_reset( )
>  }
>   COMMIT TRANSACTION
>  ==
>  
>  However, the above code will fail to insert the values for i in the loop.
> It will only insert the value 0, since that was the binding value...
>  
>  An enhancement request would be to allow the user to bind the address to
> the statement  objects. This would be a huge benefit from the standpoint
> of fewer function calls to sqlite3_bind in the inside loop.
>  
>  So maybe the following API:
>  
>  sqlite3_pbind_int(sqlite3_stmt *, int, int * );
>  sqlite3_pbind_int64(sqlite3_stmt *, int,  long long int * );
>  sqlite3_pbind_double(sqlite3_stmt *, int, double *);
>  sqlite3_pbind_text(sqlite3_stmt *, int, const char *, int *,
> void(*)(void*));
>  notice the text takes a pointer to the length...
>  sqlite3_pbind_blob(sqlite3_stmt*, int, const char *, int *,
> void(*)(void*));
>  
>  Concept of SQLITE_STATIC vs SQLITE_TRANSIENT is implied that all data is
> SQLITE_STATIC.
>  
>  Regards,
>  Ken
>  
>  
> 
> 
> 

-- 
View this message in context: 
http://www.nabble.com/API-enhancement-tf3405347.html#a9562311
Sent from the SQLite mailing list archive at Nabble.com.





Re: [sqlite] API enhancement

2007-03-19 Thread Scott Hess

I don't see how your modified version is any better than just putting
the sqlite3_bind_int() inside the loop.  You've superficially lifted
some code out of the loop, but sqlite3_step() is going to have to go
through and bind all of the "pointer bound" variables in your
suggested API, so it won't save you anything in the end.

-scott


On 3/19/07, ken-33 <[EMAIL PROTECTED]> wrote:


Any thoughts?


ken-33 wrote:
>
> Question for the list,
>
>  I'd like to optimize my code, using the following pseudo code as an
> example.
>
>  ===
>  int i = 0 ;
>  char str[20];
>
>  sqlite3_prepare_v2( "insert into t1 values (?,?)" )
>  sqlite3_bind_int ( i )
>  sqlite3_bind_text(str)
>
> BEGIN TRANSACTION
>  For (i = 0; i < 10; i++) {
>sqlite3_step (  );
>sqlite3_reset( )
>  }
>   COMMIT TRANSACTION
>  ==
>
>  However, the above code will fail to insert the values for i in the loop.
> It will only insert the value 0, since that was the binding value...
>
>  An enhancement request would be to allow the user to bind the address to
> the statement  objects. This would be a huge benefit from the standpoint
> of fewer function calls to sqlite3_bind in the inside loop.
>
>  So maybe the following API:
>
>  sqlite3_pbind_int(sqlite3_stmt *, int, int * );
>  sqlite3_pbind_int64(sqlite3_stmt *, int,  long long int * );
>  sqlite3_pbind_double(sqlite3_stmt *, int, double *);
>  sqlite3_pbind_text(sqlite3_stmt *, int, const char *, int *,
> void(*)(void*));
>  notice the text takes a pointer to the length...
>  sqlite3_pbind_blob(sqlite3_stmt*, int, const char *, int *,
> void(*)(void*));
>
>  Concept of SQLITE_STATIC vs SQLITE_TRANSIENT is implied that all data is
> SQLITE_STATIC.
>
>  Regards,
>  Ken
>
>
>
>
>

--
View this message in context: 
http://www.nabble.com/API-enhancement-tf3405347.html#a9562311
Sent from the SQLite mailing list archive at Nabble.com.









Re: [sqlite] Strange performance behavior

2007-03-19 Thread Dennis Cote

Hubertus wrote:

Well now, if this isn't something!
First I want to thank you for your quick reply. Now I only have to wait a
satisfying 0.6 sec for my result. This is great. Now I can add lots more data.
What I did:
- I added a second index for the column "campId" and did the analyze trick
  (I had this column in the index before, but back then it was quicker without
  that column).
  This already improved the waiting time from the former best 3 sec to 1 sec.
- I increased the page_size to 4096; this decreased the size of the
  database from 650 Mb to 450 Mb.
- And at last I replaced the -.99 values with NULL (don't think about it; I
  was asked to use this as the missing value, and have now found .nullvalue...)
  This again decreased the size to a stunning 165 Mb!! and improved the
  query time to an even better 0.6 sec.

To Dennis:
I'm afraid I haven't quite understood the quote thing. 
First how can I do a query like

select "42" from data where campId='stream94' and "14">-;
from my shell?
Secondly, I usually use Python or R to access the data, where I do something
like

INSERT = 'SELECT "%i" FROM data where campId="%s"' % col, campId

query <- paste('SELECT "34" AS "N2O_TDL", "29" AS "O3"',
  'FROM data where campId="polstar97"')
rs <- dbSendQuery(con, statement = query)

How is this done correctly?

Thanks a lot

Hubertus



  

Hubertus,

There are a couple of issues involved here. First, standard SQL syntax 
uses single quotes to delimit literal values and double quotes to delimit 
quoted identifiers (usually used for identifiers that are keywords or 
contain special characters like spaces). Second, you are using both types 
of quoting in your SQL query. Third, the bash shell uses both single and 
double quotes to delimit strings, with or without variable substitution 
applied. And finally, Python uses both types of quotes (and several 
others) for string literals.


In python you can use a triple quote to delimit a string that contains 
other quotes to treat them as literal quotes. Your query can be done 
like this.


>>> print '''select "%i" from data where campId='%s';''' % (14, 'polestar')
select "14" from data where campId='polestar';

or you can use a backslash to escape the quotes used to delimit the 
string like this.


>>> print 'select "%i" from data where campId=\'%s\';' % (14, 'polestar')
select "14" from data where campId='polestar';

In the bash shell, single quotes are used where command and variable 
substitution are not desired, and a single quote cannot appear in a 
single-quoted string (even with a backslash escape). Double quotes allow 
command and variable substitution and escaping of literal characters. To 
prepare a command that includes both types of quotes you need to use 
double quotes as the outer delimiters and then escape any double quotes 
in the string using a backslash.


sqlite3 mydb "select \"14\" from data where campId='polestar';"

HTH
Dennis Cote
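A further option that sidesteps the quoting problem entirely (when calling from Python or R rather than the shell) is parameter binding, so literal values never appear in the SQL text at all; a small sketch with invented table contents:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE data (campId TEXT, "14" REAL)')
con.execute("INSERT INTO data VALUES (?, ?)", ("polestar", 1.5))

# The ? placeholder carries the literal value, so no single quotes need
# escaping; the double-quoted "14" is an identifier and stays in the SQL.
rows = con.execute('SELECT "14" FROM data WHERE campId=?', ("polestar",)).fetchall()
print(rows)  # [(1.5,)]
```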




Re: [sqlite] API enhancement

2007-03-19 Thread Ken
Anyone

Ken <[EMAIL PROTECTED]> wrote:
Question for the list,
 
 I'd like to optimize my code, using the following pseudo code as an example.
 
 ===
 int i = 0 ;
 char str[20];
 
 sqlite3_prepare_v2( "insert into t1 values (?,?)" )
 sqlite3_bind_int ( i )
 sqlite3_bind_text(str)
 
BEGIN TRANSACTION
 For (i = 0; i < 10; i++) {
   sqlite3_step (  );
   sqlite3_reset( )
 }
  COMMIT TRANSACTION
 ==
 
 However, the above code will fail to insert the values for i in the loop. It 
will only insert the value 0, since that was the binding value...
 
 An enhancement request would be to allow the user to bind the address to the 
statement objects. This would be a huge benefit from the standpoint of fewer 
function calls to sqlite3_bind in the inside loop.
 
 So maybe the following API:
 
 sqlite3_pbind_int(sqlite3_stmt *, int, int * );
 sqlite3_pbind_int64(sqlite3_stmt *, int,  long long int * );
 sqlite3_pbind_double(sqlite3_stmt *, int, double *);
 sqlite3_pbind_text(sqlite3_stmt *, int, const char *, int *, void(*)(void*));
 notice the text takes a pointer to the length...
 sqlite3_pbind_blob(sqlite3_stmt*, int, const char *, int *, void(*)(void*));
 
 Concept of SQLITE_STATIC vs SQLITE_TRANSIENT is implied that all data is 
SQLITE_STATIC.
 
 Regards,
 Ken
 
 




Re: [sqlite] Avoiding use of temporary files

2007-03-19 Thread Alberto Simões

> Would it also be acceptable (assuming you want the protection offered
> by the temporary tables) to do ... "PRAGMA temp_store = MEMORY"  on
> the database?
>

The temp_store pragma only affects the placement of temporary
tables.  But the rollback journal is not a table.  The
temp_store pragma has no effect on the placement of rollback
journals.  Rollback journals always go to disk.


I am not sure of the side-effects it would lead to, but I would love
to see a "PRAGMA journal = NO".

Cheers
--
Alberto Simões




Re: [sqlite] Avoiding use of temporary files

2007-03-19 Thread Siebe Warners

[EMAIL PROTECTED] wrote:

Siebe Warners <[EMAIL PROTECTED]> wrote:
  

Hi all,

I'm using sqlite on an embedded system with flash EEPROM as the medium. 
Performance of the flash is not too good, so i'm trying to avoid file 
system access where possible without running risk of database corruption.
So the database file and the journal files need to be written to the 
flash, but i also noticed temp files being written to disk. I'm using 
sqlite 3.2.7.


When i perform a sequence of commands like:
BEGIN;
UPDATE t SET col="val1" WHERE key="key1";
UPDATE t SET col="val2" WHERE key="key2";
UPDATE t SET col="val3" WHERE key="key3";
.
UPDATE t SET col="valx" WHERE key="keyx";
COMMIT;

Using strace i observe:
- the journal file is created at the start of the transaction
- a temp file is created at the start of the transaction
- the journal file is written at the start of the sequence and some more 
data is appended somewhere halfway

- the temp file is written at every UPDATE
- at the commit the journal file is written, the database file is 
updated, and journal file and temp file are removed.





You must have a UNIQUE or CHECK constraint on your "t" table.
The extra file being opened is a rollback journal for each
particular UPDATE statement.  This extra rollback journal is
needed in case you hit a UNIQUE or CHECK constraint half way
through the update and have to go back and undo those parts of
the UPDATE that are already done.

You can avoid this extra journal file by using UPDATE OR FAIL
instead of UPDATE.

--
D. Richard Hipp  <[EMAIL PROTECTED]>




  
Thanks for the info, using UPDATE OR FAIL took some 50ms off from what 
originally was around 80ms.
The extra journals are no longer written, so this is a good way to avoid 
them.


Now I only need to make sure the OR FAIL clause has no unexpected 
effects, but considering the (lack of) complexity of the SQL used in our 
application the behaviour should be very similar to the default OR ABORT.


Thanks again,
A happy SQLite user
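The UPDATE OR FAIL variant discussed above can be exercised like this (a minimal sketch; the table and values are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# the UNIQUE constraint is what forces the extra statement journal
# for a plain UPDATE, per drh's explanation
con.execute("CREATE TABLE t (key TEXT UNIQUE, col TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [("key1", "a"), ("key2", "b")])

# OR FAIL stops at the first constraint violation without undoing rows
# already changed by the same statement, so no statement journal is needed.
con.execute("UPDATE OR FAIL t SET col='val1' WHERE key='key1'")
print(con.execute("SELECT col FROM t WHERE key='key1'").fetchone())  # ('val1',)
```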





Re: [sqlite] Avoiding use of temporary files

2007-03-19 Thread drh
"Rich Rattanni" <[EMAIL PROTECTED]> wrote:
> On 3/19/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> > Siebe Warners <[EMAIL PROTECTED]> wrote:
> > >
> > > When i perform a sequence of commands like:
> > > BEGIN;
> > > UPDATE t SET col="val1" WHERE key="key1";
> > > .
> > > UPDATE t SET col="valx" WHERE key="keyx";
> > > COMMIT;
> > >
> > > Using strace i observe:
> > > - a temp file is created at the start of the transaction
> >
> > The extra file being opened is a rollback journal for each
> > particular UPDATE statement
> >
> > You can avoid this extra journal file by using UPDATE OR FAIL
> > instead of UPDATE.
> 
> Would it also be acceptable (assuming you want the protection offered
> by the temporary tables) to do ... "PRAGMA temp_store = MEMORY"  on
> the database?
> 

The temp_store pragma only affects the placement of temporary
tables.  But the rollback journal is not a table.  The
temp_store pragma has no effect on the placement of rollback
journals.  Rollback journals always go to disk.

--
D. Richard Hipp  <[EMAIL PROTECTED]>





Re: [sqlite] Avoiding use of temporary files

2007-03-19 Thread Rich Rattanni

On 3/19/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

Siebe Warners <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> I'm using sqlite on an embedded system with flash EEPROM as the medium.
> Performance of the flash is not too good, so i'm trying to avoid file
> system access where possible without running risk of database corruption.
> So the database file and the journal files need to be written to the
> flash, but i also noticed temp files being written to disk. I'm using
> sqlite 3.2.7.
>
> When i perform a sequence of commands like:
> BEGIN;
> UPDATE t SET col="val1" WHERE key="key1";
> UPDATE t SET col="val2" WHERE key="key2";
> UPDATE t SET col="val3" WHERE key="key3";
> .
> UPDATE t SET col="valx" WHERE key="keyx";
> COMMIT;
>
> Using strace i observe:
> - the journal file is created at the start of the transaction
> - a temp file is created at the start of the transaction
> - the journal file is written at the start of the sequence and some more
> data is appended somewhere halfway
> - the temp file is written at every UPDATE
> - at the commit the journal file is written, the database file is
> updated, and journal file and temp file are removed.
>

You must have a UNIQUE or CHECK constraint on your "t" table.
The extra file being opened is a rollback journal for each
particular UPDATE statement.  This extra rollback journal is
needed in case you hit a UNIQUE or CHECK constraint half way
through the update and have to go back and undo those parts of
the UPDATE that are already done.

You can avoid this extra journal file by using UPDATE OR FAIL
instead of UPDATE.

--
D. Richard Hipp  <[EMAIL PROTECTED]>






Would it also be acceptable (assuming you want the protection offered
by the temporary tables) to do ... "PRAGMA temp_store = MEMORY"  on
the database?




Re: [sqlite] Strange performance behavior

2007-03-19 Thread John Stanton

Hubertus wrote:

Dear John,

You might also look at using the Sqlite date format rather than separate 
columns for year, month etc.


That was what I considered first. The problem was that other people
are also supposed to use this database. Some of them use Fortran, and they
said that it's easier to process the data in this format than in the
preferred current_date format. I couldn't find a way to do the splitting into day,
month and year with sqlite, so I chose this, admittedly not very nice,
implementation. I think even Fortran should know something like subset or split,
but...
Can this splitting be done by sqlite?

Thanks

Hubertus

Sqlite has a set of built in date/time functions which will transform 
the internal format according to your requirement.  It uses an offset 
from an epoch, the standard way to handle dates and times.


The big advantage of the single date/time value is with searching. 
There is just a single comparison of a REAL instead of a complex expression.


You could look at using an ISO standard time format like ISO 8601 to 
communicate with other users.  A VIEW would output your date and time in 
that format.
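The splitting Hubertus asks about can indeed be done with SQLite's built-in date functions; a quick sketch (Python shell, with an illustrative timestamp):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# strftime() splits a single ISO-8601 date/time value into its components
y, m, d = con.execute(
    "SELECT strftime('%Y', ts), strftime('%m', ts), strftime('%d', ts) "
    "FROM (SELECT '2007-03-19 12:00:00' AS ts)").fetchone()
print(y, m, d)  # 2007 03 19
```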





Re: [sqlite] Strange performance behavior

2007-03-19 Thread Ken
Did you ever determine the cardinality of the campId field? I'm guessing it's
pretty good, since your query now takes 0.6 seconds.

Let's say your cardinality was low, i.e. less than 0.3 (an arbitrary number).
Then using the index to perform your data lookups would probably be slower than
just reading the entire file. In that case the index would actually hurt rather
than help.
 
 

Hubertus <[EMAIL PROTECTED]> wrote:
Well now, if this isn't something!
First I want to thank you for your quick reply. Now I only have to wait a
satisfying 0.6 sec for my result. This is great. Now I can add lots more data.
What I did:
- I added a second index for the column "campId" and did the analyze trick
  (I had this column in the index before, but back then it was quicker without
  that column).
  This already improved the waiting time from the former best 3 sec to 1 sec.
- I increased the page_size to 4096; this decreased the size of the
  database from 650 Mb to 450 Mb.
- And at last I replaced the -.99 values with NULL (don't think about it; I
  was asked to use this as the missing value, and have now found .nullvalue...)
  This again decreased the size to a stunning 165 Mb!! and improved the
  query time to an even better 0.6 sec.

To Dennis:
I'm afraid I haven't quite understood the quote thing. 
First how can I do a query like
select "42" from data where campId='stream94' and "14">-;
from my shell?
Secondly, I usually use Python or R to access the data, where I do something
like
INSERT = 'SELECT "%i" FROM data where campId="%s"' % col, campId

query <- paste('SELECT "34" AS "N2O_TDL", "29" AS "O3"',
  'FROM data where campId="polstar97"')
rs <- dbSendQuery(con, statement = query)

How is this done correctly?

Thanks a lot

Hubertus





Re: [sqlite] Strange performance behavior

2007-03-19 Thread Hubertus
Dear John,
> You might also look at using the Sqlite date format rather than separate 
> columns for year, month etc.
That was what I considered first. The problem was that other people
are also supposed to use this database. Some of them use Fortran, and they
said that it's easier to process the data in this format than in the
preferred current_date format. I couldn't find a way to do the splitting into day,
month and year with sqlite, so I chose this, admittedly not very nice,
implementation. I think even Fortran should know something like subset or split,
but...
Can this splitting be done by sqlite?

Thanks

Hubertus




Re: [sqlite] Strange performance behavior

2007-03-19 Thread Hubertus
Well now, if this isn't something!
First I want to thank you for your quick reply. Now I only have to wait a
satisfying 0.6 sec for my result. This is great. Now I can add lots more data.
What I did:
- I added a second index for the column "campId" and did the analyze trick
  (I had this column in the index before, but back then it was quicker without
  that column).
  This already improved the waiting time from the former best 3 sec to 1 sec.
- I increased the page_size to 4096; this decreased the size of the
  database from 650 Mb to 450 Mb.
- And at last I replaced the -.99 values with NULL (don't think about it; I
  was asked to use this as the missing value, and have now found .nullvalue...)
  This again decreased the size to a stunning 165 Mb!! and improved the
  query time to an even better 0.6 sec.

To Dennis:
I'm afraid I haven't quite understood the quote thing. 
First how can I do a query like
select "42" from data where campId='stream94' and "14">-;
from my shell?
Secondly, I usually use Python or R to access the data, where I do something
like
INSERT = 'SELECT "%i" FROM data where campId="%s"' % col, campId

query <- paste('SELECT "34" AS "N2O_TDL", "29" AS "O3"',
  'FROM data where campId="polstar97"')
rs <- dbSendQuery(con, statement = query)

How is this done correctly?

Thanks a lot

Hubertus




Re: [sqlite] Strange performance behavior

2007-03-19 Thread P Kishor

On 3/19/07, Hubertus <[EMAIL PROTECTED]> wrote:

Dear list,
sorry to come up with yet another performance question. I have built a still
small database with one table. It has about 650,000 rows, 75 columns and
is at the moment about 650 Mb. It runs on an Intel Pentium M with 2 GHz. The
laptop runs Suse 10.2 and does basically nothing but this database.
sqlite3 -version is 3.3.8

This is the schema:
CREATE TABLE 'data'('nr' INTEGER PRIMARY KEY,
  'year' INTEGER,
  'month' INTEGER,
  'day' INTEGER,
  'sec' REAL,
  'campId' TEXT,
  'flightNr' INTEGER,
  '1' REAL,
  ...
  '71' REAL
  );
CREATE INDEX sec on data(year,month,day,sec);

I experience a big variability of time a query needs:


The index on the pseudo time fields is not being used at all. How
about an index on campId, as well as on "14" (that is, if "14" is
always going to be in your query)?
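A quick way to check whether such an index is actually picked up is EXPLAIN QUERY PLAN after ANALYZE; a sketch in Python with a tiny invented data set (column names taken from the schema in the thread):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE data (nr INTEGER PRIMARY KEY, campId TEXT, "14" REAL)')
con.executemany('INSERT INTO data (campId, "14") VALUES (?, ?)',
                [("camp%d" % (i % 10), i * 0.5) for i in range(100)])
con.execute("CREATE INDEX idx_camp ON data(campId)")
con.execute("ANALYZE")  # collect statistics so the planner can judge the index

plan = con.execute('EXPLAIN QUERY PLAN SELECT "14" FROM data '
                   "WHERE campId='camp3'").fetchall()
print(plan)  # a SEARCH ... USING INDEX idx_camp row means the index is used
```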

--
Puneet Kishor http://punkish.eidesis.org/
Nelson Inst. for Env. Studies, UW-Madison http://www.nelson.wisc.edu/
Open Source Geospatial Foundation http://www.osgeo.org/education/
-
collaborate, communicate, compete
=




Re: [sqlite] Strange performance behavior

2007-03-19 Thread John Stanton
First, you have your double and single quotes mixed up.  SQL uses single 
quotes for literals.


Second, you are performing row scans and not using any of your indices. 
 You will do better if you have an index on the column you specify in 
your search.


You might also look at using the Sqlite date format rather than separate 
columns for year, month etc.


Hubertus wrote:

Dear list,
sorry to come up with yet another performance question. I have built a still
small database with one table. It has about 650,000 rows, 75 columns and
is at the moment about 650 Mb. It runs on an Intel Pentium M with 2 GHz. The
laptop runs Suse 10.2 and does basically nothing but this database.

sqlite3 -version is 3.3.8

This is the schema:
CREATE TABLE 'data'('nr' INTEGER PRIMARY KEY,
  'year' INTEGER,
  'month' INTEGER,
  'day' INTEGER,
  'sec' REAL,
  'campId' TEXT,
  'flightNr' INTEGER,
  '1' REAL,
  ...
  '71' REAL
  );
CREATE INDEX sec on data(year,month,day,sec);
  
I experience big variability in the time a query needs:

~database> time sqlite3  data.db 'select "14" from data where campId="polstar98" and 
"14">-;' >/dev/null
real    0m3.115s
user    0m1.748s
sys     0m1.368s
~/database> time sqlite3  data.db 'select "14" from data where campId="polstar98" and 
"14">-;' >/dev/null
real    0m3.139s
user    0m1.756s
sys     0m1.380s
~/database> time sqlite3  data.db 'select "42" from data where campId="stream94" and 
"14">-;' >/dev/null
real    0m50.227s
user    0m4.692s
sys     0m3.028s

I've tried it with and without index and at different times. Most of the
time such simple queries take about 35 sec. Why is that so and what can be done?
I can live with 3 sec of response but not up to one minute and the
database is still not complete. Would pytables with hdf5 be an alternative? 
Knowing that this is probably not the right place to ask...


Tips, suggestions, and recommendations are gratefully appreciated! 
Thanks in advance


Hubertus




Re: [sqlite] Avoiding use of temporary files

2007-03-19 Thread drh
Siebe Warners <[EMAIL PROTECTED]> wrote:
> Hi all,
> 
> I'm using sqlite on an embedded system with flash EEPROM as the medium. 
> Performance of the flash is not too good, so i'm trying to avoid file 
> system access where possible without running risk of database corruption.
> So the database file and the journal files need to be written to the 
> flash, but i also noticed temp files being written to disk. I'm using 
> sqlite 3.2.7.
> 
> When i perform a sequence of commands like:
> BEGIN;
> UPDATE t SET col="val1" WHERE key="key1";
> UPDATE t SET col="val2" WHERE key="key2";
> UPDATE t SET col="val3" WHERE key="key3";
> .
> UPDATE t SET col="valx" WHERE key="keyx";
> COMMIT;
> 
> Using strace i observe:
> - the journal file is created at the start of the transaction
> - a temp file is created at the start of the transaction
> - the journal file is written at the start of the sequence and some more 
> data is appended somewhere halfway
> - the temp file is written at every UPDATE
> - at the commit the journal file is written, the database file is 
> updated, and journal file and temp file are removed.
> 

You must have a UNIQUE or CHECK constraint on your "t" table.
The extra file being opened is a rollback journal for each
particular UPDATE statement.  This extra rollback journal is
needed in case you hit a UNIQUE or CHECK constraint half way
through the update and have to go back and undo those parts of
the UPDATE that are already done.

You can avoid this extra journal file by using UPDATE OR FAIL
instead of UPDATE.
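A small illustration of the FAIL behaviour, sketched with Python's stdlib sqlite3 module (the table and values are invented): OR FAIL aborts at the first constraint violation but does not back out the changes the statement already made, which is why no per-statement rollback journal is required.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER UNIQUE)")
con.executemany("INSERT INTO t VALUES (?)", [(5,), (2,), (3,)])

try:
    # Scanning in rowid order: 5 -> 6 succeeds, then 2 -> 3 collides
    # with the existing 3 and the statement stops right there.
    con.execute("UPDATE OR FAIL t SET a = a + 1")
except sqlite3.IntegrityError:
    pass

# The change made before the failure is NOT rolled back.
rows = [r[0] for r in con.execute("SELECT a FROM t ORDER BY rowid")]
print(rows)  # -> [6, 2, 3]
```

The trade-off is that the application must be prepared for partially applied updates; with plain UPDATE, the statement journal lets SQLite undo the half-done work instead.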

--
D. Richard Hipp  <[EMAIL PROTECTED]>





Re: [sqlite] Avoiding use of temporary files

2007-03-19 Thread Siebe Warners
I'll give 3.3.13 a try, but it will be for debugging only for now. The 
project using the database is at a stage where all risk is being avoided, 
so I'm not sure I will be able to convince people to swap databases at 
this point. 
So were there any updates wrt transactions and temporary files?


Thanks,
Siebe

[EMAIL PROTECTED] wrote:

Siebe Warners <[EMAIL PROTECTED]> wrote:
  

I'm using sqlite 3.2.7.



Have you tried 3.3.13?

--
D. Richard Hipp  <[EMAIL PROTECTED]>




  




Re: [sqlite] Avoiding use of temporary files

2007-03-19 Thread drh
Siebe Warners <[EMAIL PROTECTED]> wrote:
> I'm using sqlite 3.2.7.

Have you tried 3.3.13?

--
D. Richard Hipp  <[EMAIL PROTECTED]>





[sqlite] Avoiding use of temporary files

2007-03-19 Thread Siebe Warners

Hi all,

I'm using sqlite on an embedded system with flash EEPROM as the medium. 
Performance of the flash is not too good, so I'm trying to avoid file 
system access where possible without running the risk of database corruption.
So the database file and the journal files need to be written to the 
flash, but I also noticed temp files being written to disk. I'm using 
sqlite 3.2.7.


When i perform a sequence of commands like:
BEGIN;
UPDATE t SET col="val1" WHERE key="key1";
UPDATE t SET col="val2" WHERE key="key2";
UPDATE t SET col="val3" WHERE key="key3";

UPDATE t SET col="valx" WHERE key="keyx";
COMMIT;

Using strace i observe:
- the journal file is created at the start of the transaction
- a temp file is created at the start of the transaction
- the journal file is written at the start of the sequence and some more 
data is appended somewhere halfway

- the temp file is written at every UPDATE
- at the commit the journal file is written, the database file is 
updated, and journal file and temp file are removed.


For completeness, the strace output of such a sequence is appended to 
this message.
I was surprised by this outcome, as my expectation was that "disk" access 
could be limited by using a transaction. What I am trying to achieve is 
to eliminate the write actions to the temporary file in order to gain 
some performance.
Reading the documentation, I was convinced that I should be able to use 
the temp_store pragma to influence writing of the temp file, but 
unfortunately that does not seem to work. Does anyone have an idea how I 
can get sqlite not to write the temporary file?


Thanks in advance,
Siebe

strace output:
--- 


Process 206 attached - interrupt to quit
--- SIGRTMIN (Unknown signal 32) @ 0 (0) ---
rt_sigsuspend([])   = 2116025440
sched_yield()   = 2116025440
write(1, "\n", 1)   = 2116025216
fcntl64(16, F_SETLK64, {type=F_RDLCK, whence=SEEK_SET, start=1073741824, 
len=1}, 0x7e1ff450) = 16
fcntl64(16, F_SETLK64, {type=F_RDLCK, whence=SEEK_SET, start=1073741826, 
len=510}, 0x7e1ff450) = 16
fcntl64(16, F_SETLK64, {type=F_UNLCK, whence=SEEK_SET, start=1073741824, 
len=1}, 0x7e1ff450) = 275817880

access("/var/db/addf.db-journal", F_OK) = 16
fstat64(16, {st_mode=S_IFREG|0644, st_size=113664, ...}) = 16
_llseek(16, 0, [0], SEEK_SET)   = 16
read(16, "SQLite format 3\0", 1024) = 16
_llseek(16, 69632, [69632], SEEK_SET)   = 16
read(16, "\5\0\0\0\2\3\366\0\0\0\0#\3\373\3\366", 1024) = 16
_llseek(16, 70656, [70656], SEEK_SET)   = 16
read(16, "\n\0\0\0?\0\216\0\3\362\3\344\3\326\3\310", 1024) = 16
_llseek(16, 30720, [30720], SEEK_SET)   = 16
read(16, "\r\0\0\0\31\0=\0\3\333\3\266\3\221\3l\3G\3\"\2\373\2\324"..., 
1024) = 2116024456
fcntl64(16, F_SETLK64, {type=F_UNLCK, whence=SEEK_SET, start=0, len=0}, 
0x7e1ff420) = 2116025256
fcntl64(16, F_SETLK64, {type=F_RDLCK, whence=SEEK_SET, start=1073741824, 
len=1}, 0x7e1ff3f0) = 16
fcntl64(16, F_SETLK64, {type=F_RDLCK, whence=SEEK_SET, start=1073741826, 
len=510}, 0x7e1ff3f0) = 16
fcntl64(16, F_SETLK64, {type=F_UNLCK, whence=SEEK_SET, start=1073741824, 
len=1}, 0x7e1ff3f0) = 275817880

access("/var/db/addf.db-journal", F_OK) = 16
fstat64(16, {st_mode=S_IFREG|0644, st_size=113664, ...}) = 16
_llseek(16, 0, [0], SEEK_SET)   = 16
read(16, "SQLite format 3\0", 1024) = 16
_llseek(16, 70656, [70656], SEEK_SET)   = 16
read(16, "\n\0\0\0?\0\216\0\3\362\3\344\3\326\3\310\3\272\3\254\3"..., 
1024) = 2116024720
fcntl64(16, F_SETLK64, {type=F_WRLCK, whence=SEEK_SET, start=1073741825, 
len=1}, 0x7e1ff4f0) = 275817880

access("/var/db/addf.db-journal", F_OK) = 275817880
open("/var/db/addf.db-journal", O_RDWR|O_CREAT|O_EXCL|O_LARGEFILE, 0644) 
= 29

fstat64(29, {st_mode=S_IFDIR|02041000250, st_size=16, ...}) = 275817864
open("/var/db", O_RDONLY|O_LARGEFILE)   = 29
_llseek(29, 0, [0], SEEK_SET)   = 29
write(29, "\331\325\5\371 \241c\327", 8) = 29
write(29, "\0\0\0\0", 4)= 29
write(29, "[EMAIL PROTECTED]", 4)   = 29
write(29, "\0\0\0o", 4) = 29
write(29, "\0\0\2\0", 4)= 29
_llseek(29, 511, [511], SEEK_SET)   = 29
write(29, "\0", 1)  = 805842344
stat64("/var/tmp", {st_mode=0, st_size=9088250304414123248, ...}) = 
805842356
stat64("/usr/tmp", {st_mode=0, st_size=9088250304414123248, ...}) = 
805842368

stat64("/tmp", {st_mode=S_IFDIR|0777, st_size=0, ...}) = 805842368
access("/tmp", R_OK|W_OK|X_OK)  = 2116023440
access("/tmp/sqlite_jT5odBG4x7ALEgD", F_OK) = 2116023440
access("/tmp/sqlite_jT5odBG4x7ALEgD", F_OK) = 2116023440
open("/tmp/sqlite_jT5odBG4x7ALEgD", O_RDWR|O_CREAT|O_EXCL|O_LARGEFILE, 
0644) = 31
fstat64(31, {st_mode=02040570200, st_size=9088250098785467100, ...}) = 
2116023440


Re: [sqlite] Strange performance behavior

2007-03-19 Thread Dennis Cote

Hubertus wrote:


This is the schema:
CREATE TABLE 'data'('nr' INTEGER PRIMARY KEY,
  'year' INTEGER,
  'month' INTEGER,
  'day' INTEGER,
  'sec' REAL,
  'campId' TEXT,
  'flightNr' INTEGER,
  '1' REAL,
  ...
  '71' REAL
  );
You should use double quotes around column and table names. Single 
quotes delimit literal strings in SQL.



CREATE INDEX sec on data(year,month,day,sec);

This index will not help with the queries you are testing.

  
I experience big variability in the time a query needs:

~database> time sqlite3  data.db 'select "14" from data where campId="polstar98" and 
"14">-;' >/dev/null
real    0m3.115s
user    0m1.748s
sys     0m1.368s
~/database> time sqlite3  data.db 'select "14" from data where campId="polstar98" and 
"14">-;' >/dev/null
real    0m3.139s
user    0m1.756s
sys     0m1.380s
~/database> time sqlite3  data.db 'select "42" from data where campId="stream94" and 
"14">-;' >/dev/null
real    0m50.227s
user    0m4.692s
sys     0m3.028s

I've tried it with and without index and at different times. Most of the
time such simple queries take about 35 sec. Why is that so and what can be done?
I can live with 3 sec of response but not up to one minute and the
database is still not complete. Would pytables with hdf5 be an alternative? 
Knowing that this is probably not the right place to ask...




You have your quotes mixed up in your queries as well.

   select "42" from data where campId="stream94" and "14">-;

should be

   select "42" from data where campId='stream94' and "14">-;

The value 'stream94' is a literal string and should be delimited with 
single quotes. You will need to escape those single quotes on the bash 
command line to get the correct query passed to sqlite.
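The quoting rules are easy to check with Python's stdlib sqlite3 module (a sketch with invented data): double quotes name identifiers such as the odd column "14", while single quotes make string literals.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE data (campId TEXT, "14" REAL)')
con.executemany("INSERT INTO data VALUES (?, ?)",
                [("polstar98", 1.5), ("stream94", 2.5)])

# Correct form: double quotes around the identifier "14",
# single quotes around the string literal 'polstar98'.
rows = con.execute(
    "SELECT \"14\" FROM data WHERE campId = 'polstar98' AND \"14\" > 0"
).fetchall()
print(rows)  # -> [(1.5,)]
```

On the bash command line, one way to get the single quotes through is to wrap the whole query in double quotes, e.g. `sqlite3 data.db "select \"14\" from data where campId='polstar98';"`. (SQLite does quietly fall back to treating an unresolvable double-quoted name as a string literal, which is why the original queries ran at all, but relying on that misfeature is best avoided.)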


Your query is scanning the entire table looking for records that match 
your conditions. I suspect that a lot of your variability is due to the 
relative number of records matching your first condition. To speed these 
searches you need an index on the columns you are searching.


   Create index dataCampId on data (campId);

Note that sqlite will only use a single index per table per query. In 
your case you are searching based on two columns. You can either create 
a compound index on both columns


   Create index dataCompound1 on data (campId, "14");

or create multiple indexes on the individual columns and then use the 
ANALYZE command to let sqlite gather the statistics that it needs to 
select the best index for a given query. The compound index 
approach will be the fastest if you use the same pair of columns as 
search criteria in most of your queries. If your search criteria vary, 
you are probably best off using single-column indexes and the ANALYZE command.
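One way to confirm which index a given query will actually use is EXPLAIN QUERY PLAN; a sketch with Python's stdlib sqlite3 module and an invented schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE data (campId TEXT, "14" REAL)')
con.execute('CREATE INDEX dataCompound1 ON data (campId, "14")')

# Ask the planner how it would run the query; with the compound
# index in place it should report a SEARCH using dataCompound1
# rather than a full-table SCAN.
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT \"14\" FROM data WHERE campId = 'stream94' AND \"14\" > 0"
).fetchall()
print(plan)
```

The exact plan text varies between SQLite versions, but the index name appears in it whenever the index is chosen.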


HTH
Dennis Cote





Re: [sqlite] Strange performance behavior

2007-03-19 Thread Christian Smith

Hubertus uttered:


Dear list,
sorry to just come up with another performance question. I have built a still fairly small
database with one table. It has about 650,000 rows and 75 columns, and
is at the moment about 650 MB. It runs on an Intel Pentium M at 2 GHz. The
laptop runs Suse 10.2 and does basically nothing but this database.
sqlite3 -version is 3.3.8

This is the schema:
CREATE TABLE 'data'('nr' INTEGER PRIMARY KEY,
 'year' INTEGER,
 'month' INTEGER,
 'day' INTEGER,
 'sec' REAL,
 'campId' TEXT,
 'flightNr' INTEGER,
 '1' REAL,
 ...
 '71' REAL
 );
CREATE INDEX sec on data(year,month,day,sec);



What a nasty schema! What exactly do the '1'...'71' fields represent? Are 
they all used in each row? If not, you might be better off putting the 
data in a separate table and joining the data.





I experience big variability in the time a query needs:
~database> time sqlite3  data.db 'select "14" from data where campId="polstar98" and 
"14">-;' >/dev/null
real    0m3.115s
user    0m1.748s
sys     0m1.368s
~/database> time sqlite3  data.db 'select "14" from data where campId="polstar98" and 
"14">-;' >/dev/null
real    0m3.139s
user    0m1.756s
sys     0m1.380s
~/database> time sqlite3  data.db 'select "42" from data where campId="stream94" and 
"14">-;' >/dev/null
real    0m50.227s
user    0m4.692s
sys     0m3.028s

I've tried it with and without index and at different times. Most of the 
time such simple queries take about 35 sec. Why is that so and what can 
be done? I can live with 3 sec of response but not up to one minute and 
the database is still not complete. Would pytables with hdf5 be an 
alternative? Knowing that this is probably not the right place to ask...



The rows are probably quite big (each real value is 8 bytes), and would 
not fit in a single low-level SQLite BTree cell, but instead overflow 
using an overflow page per row. As well as being inefficient for access of 
columns in the overflow page, it is also massively space inefficient, as 
the overflow page is not shared and most of its space is probably wasted.





Tips, suggestions, and recommendations are gratefully appreciated!



If you can't change the schema, your best bet is to increase the page size 
of the database, which will hopefully allow you to keep entire rows 
together without using overflow pages. Create a new database, and use:

PRAGMA page_size=4096;

then import your existing data from your old database. Something like:

$ rm new.db
$ sqlite3 new.db
sqlite> PRAGMA page_size=4096;
sqlite> ATTACH 'old.db' AS old;
sqlite> CREATE TABLE data AS SELECT * FROM old.data;
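One caveat worth noting: PRAGMA page_size only takes effect if it is issued before the first table is created, as in the snippet above. A quick check of the resulting file, sketched with Python's stdlib sqlite3 module (file name invented):

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "new.db")
con = sqlite3.connect(path)
con.execute("PRAGMA page_size = 4096")  # must come before the first write
con.execute("CREATE TABLE data (x)")    # first write fixes the page size
con.commit()

# Read the setting back to confirm it stuck.
(size,) = con.execute("PRAGMA page_size").fetchone()
print(size)  # -> 4096
```

If the pragma is issued after the database already contains data, it is silently ignored and the old page size is reported instead.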



Thanks in advance

Hubertus



Christian


--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \




Re: [sqlite] Strange performance behavior

2007-03-19 Thread Ken
Looks like it is going to do a full scan of the entire database to complete 
that query, based upon your WHERE clause.

Are you always accessing the data by campId? What is the cardinality of 
the campId data? Depending upon that, it might be worthwhile putting an 
index on campId.
Hubertus <[EMAIL PROTECTED]> wrote:
  Dear list,
sorry to just come up with another performance question. I have built a still fairly small 
database with one table. It has about 650,000 rows and 75 columns, and
is at the moment about 650 MB. It runs on an Intel Pentium M at 2 GHz. The 
laptop runs Suse 10.2 and does basically nothing but this database.
sqlite3 -version is 3.3.8

This is the schema:
CREATE TABLE 'data'('nr' INTEGER PRIMARY KEY,
'year' INTEGER,
'month' INTEGER,
'day' INTEGER,
'sec' REAL,
'campId' TEXT,
'flightNr' INTEGER,
'1' REAL,
...
'71' REAL
);
CREATE INDEX sec on data(year,month,day,sec);

I experience big variability in the time a query needs:
~database> time sqlite3 data.db 'select "14" from data where campId="polstar98" 
and "14">-;' >/dev/null
real 0m3.115s
user 0m1.748s
sys 0m1.368s
~/database> time sqlite3 data.db 'select "14" from data where 
campId="polstar98" and "14">-;' >/dev/null
real 0m3.139s
user 0m1.756s
sys 0m1.380s
~/database> time sqlite3 data.db 'select "42" from data where campId="stream94" 
and "14">-;' >/dev/null
real 0m50.227s
user 0m4.692s
sys 0m3.028s

I've tried it with and without index and at different times. Most of the
time such simple queries take about 35 sec. Why is that so and what can be done?
I can live with 3 sec of response but not up to one minute and the
database is still not complete. Would pytables with hdf5 be an alternative? 
Knowing that this is probably not the right place to ask...

Tips, suggestions, and recommendations are gratefully appreciated! 
Thanks in advance

Hubertus





[sqlite] Strange performance behavior

2007-03-19 Thread Hubertus
Dear list,
sorry to just come up with another performance question. I have built a still fairly small 
database with one table. It has about 650,000 rows and 75 columns, and
is at the moment about 650 MB. It runs on an Intel Pentium M at 2 GHz. The 
laptop runs Suse 10.2 and does basically nothing but this database.
sqlite3 -version is 3.3.8

This is the schema:
CREATE TABLE 'data'('nr' INTEGER PRIMARY KEY,
  'year' INTEGER,
  'month' INTEGER,
  'day' INTEGER,
  'sec' REAL,
  'campId' TEXT,
  'flightNr' INTEGER,
  '1' REAL,
  ...
  '71' REAL
  );
CREATE INDEX sec on data(year,month,day,sec);
  
I experience big variability in the time a query needs:
~database> time sqlite3  data.db 'select "14" from data where 
campId="polstar98" and "14">-;' >/dev/null
real    0m3.115s
user    0m1.748s
sys     0m1.368s
~/database> time sqlite3  data.db 'select "14" from data where 
campId="polstar98" and "14">-;' >/dev/null
real    0m3.139s
user    0m1.756s
sys     0m1.380s
~/database> time sqlite3  data.db 'select "42" from data where 
campId="stream94" and "14">-;' >/dev/null
real    0m50.227s
user    0m4.692s
sys     0m3.028s

I've tried it with and without index and at different times. Most of the
time such simple queries take about 35 sec. Why is that so and what can be done?
I can live with 3 sec of response but not up to one minute and the
database is still not complete. Would pytables with hdf5 be an alternative? 
Knowing that this is probably not the right place to ask...

Tips, suggestions, and recommendations are gratefully appreciated! 
Thanks in advance

Hubertus




[sqlite] Re: 2 questions concerning select statement

2007-03-19 Thread Igor Tandetnik

Rafi Cohen <[EMAIL PROTECTED]> wrote:

1. When I prepare a select statement for later execution, may I
use a question mark instead of a table and later "bind" different
table names


No. You can only use a parameter where an expression would be valid.


2. In "order by" clause may I use a column which is part of the table
but not part of the result?


Yes. In fact, you can use any expression, not just a column name.

Igor Tandetnik
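Both answers can be demonstrated with Python's stdlib sqlite3 module (a sketch; table and column names are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tbl (col1, col2, col3)")
con.executemany("INSERT INTO tbl VALUES (?, ?, ?)",
                [(1, "b", 9), (2, "a", 3)])

# 1. A parameter cannot stand in for a table name: the statement
#    fails to prepare with a syntax error.
try:
    con.execute("SELECT * FROM ?", ("tbl",))
    ok = False
except sqlite3.OperationalError:
    ok = True

# 2. ORDER BY may use col3 even though it is not in the result set.
rows = con.execute("SELECT col1, col2 FROM tbl ORDER BY col3").fetchall()
print(ok, rows)  # -> True [(2, 'a'), (1, 'b')]
```

To vary the table at runtime, the usual approach is to build the SQL string per table and prepare each variant separately.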




[sqlite] 2 questions concerning select statement

2007-03-19 Thread Rafi Cohen
Hi, 1. When I prepare a select statement for later execution, may I use
a question mark instead of a table name and later "bind" different table
names with the same construct according to some condition?
for example, sqlite3_prepare_v2(db, "select * from ? where.",...);
2. In "order by" clause may I use a column which is part of the table
but not part of the result?
select col1, col2 from tbl order by col1, col3;
col3 is part of the table but not of the result.
Thanks, Rafi.


Re: [sqlite] Saving binary files

2007-03-19 Thread Dimitris Servis

Hello John,

this is extremely helpful. Thanks a lot!!!

Dimitris