Did you ever determine the cardinality of the campId field? I'm guessing it's
pretty good, since your query is now down to 0.6 seconds.
 
 Let's say your cardinality was low, i.e., less than 0.3 (an arbitrary number),
meaning a given campId value matches a large fraction of the rows. Then using 
the index to perform your data lookups would probably be slower than just 
reading the entire file sequentially. In that case the index would actually 
hurt rather than help. 
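 If you want to check, here is a quick sketch using Python's sqlite3 module
(the file name "data.db" is an assumption; the table and column names are the
ones from your mail):

import sqlite3

con = sqlite3.connect("data.db")  # hypothetical file name
distinct, total = con.execute(
    'SELECT COUNT(DISTINCT campId), COUNT(*) FROM data').fetchone()
# Ratio of distinct values to total rows: the closer to 1.0, the more
# selective the index and the more likely it is to pay off.
print('cardinality ratio: %.3f' % (distinct / float(total)))
con.close()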
 
 

Hubertus <[EMAIL PROTECTED]> wrote: Well now, if this isn't something!
First I want to thank you for your quick reply. Now I only have to wait a
satisfying 0.6 sec for my result. This is great. Now I can add lots more data.
What I did:
- I added a second index on the column "campId" and did the ANALYZE trick
  (I had this column in an index before, but back then the query was quicker
  without that column; a sketch of all three steps is below).
  This already improved the waiting time from the former best of 3 sec to 1 sec.
- I increased the page_size to 4096, which decreased the size of the
  database from 650 MB to 450 MB.
- And at last I replaced the -9999.99 values with NULL (don't ask; I was told
  to use that as the missing value, and only now have I found .nullvalue...).
  This again decreased the size to a stunning 165 MB!! and improved the
  query time to an even better 0.6 sec.
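
In Python terms, roughly what this amounts to (a sketch, not my exact
commands; the file name "data.db" and the example UPDATE column are
assumptions):

import sqlite3

# Autocommit mode, since VACUUM cannot run inside a transaction.
con = sqlite3.connect("data.db", isolation_level=None)
con.execute('CREATE INDEX IF NOT EXISTS idx_campId ON data(campId)')
con.execute('ANALYZE')                  # refresh the query planner statistics
con.execute('PRAGMA page_size = 4096')  # takes effect when the file is rebuilt
con.execute('VACUUM')                   # rebuild the file with the new page size
# Replace the -9999.99 missing-value marker with NULL, one column at a time:
con.execute('UPDATE data SET "34" = NULL WHERE "34" = -9999.99')
con.close()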

To Dennis:
I'm afraid I haven't quite understood the quoting thing.
First, how can I do a query like
select "42" from data where campId='stream94' and "14">-9999;
from my shell?
Secondly, I usually use Python or R to access the data, where I do something
like
query = 'SELECT "%i" FROM data where campId="%s"' % (col, campId)

query <- paste('SELECT "34" AS "N2O_TDL", "29" AS "O3"',
  'FROM data where campId="polstar97"')
rs <- dbSendQuery(con, statement = query)

How is this done correctly?
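My current guess, just as a sketch (Python's sqlite3 module; I assume the
campId value can be bound with a ? placeholder, while the column name has to
be interpolated as a double-quoted identifier, since placeholders only bind
values, not identifiers):

import sqlite3

con = sqlite3.connect("data.db")  # hypothetical file name
col = 42                          # hypothetical column number
# Double quotes mark an identifier; the string value goes in via ?.
sql = 'SELECT "%i" FROM data WHERE campId = ?' % col
rows = con.execute(sql, ('stream94',)).fetchall()
con.close()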

Thanks a lot

Hubertus
