Rajarshi Guha <[EMAIL PROTECTED]> wrote:
>
> Hi, relating to my previous queries on doing spatial searches on 10M  
> rows, it seems that most of my queries return within 2 minutes.  
> Generally this is not too bad, though faster is always better.
> 
> Interestingly, it appears that the CUBE index for the table in  
> question is about 3GB (the table itself is about 14GB). Not knowing  
> the details of the postgres internals, I assume that when a query  
> tries to use the index, it will need to read a 3GB file. Is this a  
> correct assumption?
> 
> In such a situation, is there a way to keep the index in memory? My
> machine has 8GB installed and currently has about 7.4GB of free RAM
> (64-bit Linux 2.6.9).

Free or cached/buffered?  Your OS should be using most of that to
buffer disk blocks.
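
If you're not sure which it is, on a typical Linux box "free -m" shows
the split:

    $ free -m

Only the "free" column is memory sitting completely idle; the "cached"
column is memory the kernel is already using to cache file data
(including your index), and it gives that back to applications on
demand, so a small "free" number is usually a good sign.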

> A side effect of the size of the index is that a query that performs a
> seq scan (say, using cube_distance) takes longer than one that uses the
> index, but not significantly longer. And this is on a 10M-row table.
> 
> What strategies do people follow when the index becomes very big?

What version of PG are you using and what is your shared_buffers setting?
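
(Both are easy to check from psql:

    SELECT version();
    SHOW shared_buffers;

and on 8.1 or later you can confirm the on-disk size of the index with

    SELECT pg_size_pretty(pg_relation_size('your_cube_index'));

substituting the actual index name for 'your_cube_index'.)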

With 8GB of RAM, you should start with shared_buffers around 2-3GB, if
you're using a modern version of PG.  With that much shared memory, a
large portion of that index should stay in RAM, as long as it's used
often enough that PG doesn't evict it in favor of other data.
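
For example, something along these lines in postgresql.conf
(illustrative values; shared_buffers only takes effect after a restart):

    # illustrative setting for a machine with 8GB of RAM
    shared_buffers = 2GB    # 8.2+ accepts memory units; older releases
                            # expect a count of 8kB buffers instead

On Linux you may also need to raise kernel.shmmax before the postmaster
will start with a shared memory segment that large.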

-- 
Bill Moran
http://www.potentialtech.com
