On Tue, 2005-03-22 at 19:08 +0100, Patrick Vedrines wrote:
> I have 2 databases (A and B) with exactly the same schemas: 
> -one main table called "aggregate" having no indexes and supporting
> only SELECT statements (loaded once a month with a new bundle of
> data). Row size ≈ 200 bytes (50 columns of type char(x) or integer) 
> -and several small 'reference' tables not shown by the following
> example for clarity reasons.
> -Database A : aggregate contains 2,300,000 records ( 500 Mb)
> -Database B : aggregate contains 9,000,000 records ( 2 Gb)

> (For example : shared_buffers = 190000 , sort_mem = 4096 ,
> effective_cache_size = 37000 and kernel/shmmax=1200000000 )
> Do I have to upgrade the RAM to 6 GB and/or buy faster disks (of what
> type)?

Setting shared_buffers that high will do you no good at all, as Richard
has already pointed out.

You've got 1.5 GB of shared_buffers and > 2 GB of data. In 8.0, the scan
will hardly use the cache at all, nor will it ever, since the data is
bigger than the cache. Notably, the scan of B should NOT spoil the cache
for other queries.
Priming the cache is quite hard...but not impossible.

What will kill you on a shared_buffers that big is the bgwriter, which
you should turn off by setting bgwriter_maxpages = 0
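For reference, here is what the settings discussed in this thread look like as a postgresql.conf fragment (values are the ones quoted above; bgwriter_maxpages is the 8.0-era parameter name). shared_buffers is counted in 8 kB pages, which is where the "1.5 GB" figure comes from:

```ini
# postgresql.conf sketch (8.0-era parameter names, values from this thread)
shared_buffers = 190000        # 190000 * 8 kB ~ 1.5 GB -- too large, per above
sort_mem = 4096                # per-sort work memory, in kB
effective_cache_size = 37000   # in 8 kB pages ~ 296 MB
bgwriter_maxpages = 0          # disable the background writer
```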

> PS (maybe of interest for some users like me) : 
> I created a partition on a new similar disk but on the last cylinders
> (near the periphery) and copied the database B into it: the response
> time is 25% faster (i.e. 15 min instead of 21 min). But 15 min is
> still too long for my customers (5 min would be nice).

Sounds like your disks/layout/something is pretty sick. You don't
mention I/O bandwidth, controller or RAID, so you should look more into
those topics.
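To get a first number for raw sequential bandwidth before digging into controller/RAID questions, something like this gives a quick estimate (a sketch -- the path and file size are placeholders; in practice use a file larger than RAM so the OS page cache cannot satisfy the read):

```shell
# Write a test file, forcing it to disk (conv=fsync is GNU dd).
dd if=/dev/zero of=/var/tmp/ddtest bs=1M count=64 conv=fsync
# Time the read-back; GNU dd reports throughput on completion.
dd if=/var/tmp/ddtest of=/dev/null bs=1M
rm /var/tmp/ddtest
```

If the read-back rate is far below what the drives should sustain, the problem is in the I/O path, not in PostgreSQL.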

On the other hand...just go for more RAM, as you suggest...but you
should create a RAMdisk, rather than use too large
shared_buffers....that way your data is always in RAM, rather than maybe
in RAM.
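A sketch of the RAM-disk approach, assuming Linux tmpfs and the tablespace support new in 8.0 (mount point, tablespace name, and size are illustrative). Note the obvious caveat: a RAM disk loses its contents on reboot, which is only acceptable here because the table is reloaded monthly from an external source anyway:

```shell
# Create and mount a tmpfs big enough for the 2 GB table plus headroom
mkdir -p /mnt/pgram
mount -t tmpfs -o size=3g tmpfs /mnt/pgram
chown postgres /mnt/pgram

# Point a tablespace at it and move the big table there
psql -c "CREATE TABLESPACE ramspace LOCATION '/mnt/pgram';"
psql -c "ALTER TABLE aggregate SET TABLESPACE ramspace;"
```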

Best Regards, Simon Riggs
