Hello Simon,
> Sounds like your disks/layout/something is pretty sick. You don't
> mention I/O bandwidth, controller or RAID, so you should look more into
> those topics.
Well seen! (as we say in France).
As I said to Gustavo, your last suspicion led me to a simple disk test: I just copied (with time cp) the "aggregate" table files (4 GB) from one disk to another: it took 22 minutes (~3 MB/s)!
That seems to demonstrate that Postgres is not the cause of this issue.
I've just entrusted the analysis of my disks to my system engineer...
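For comparison with the cp timing above, here is a rough sequential-throughput check with dd (the test path is a placeholder for a file on the suspect disk; `conv=fdatasync` assumes GNU dd, and forces the data to actually hit the platters before dd reports a speed):

```shell
# Write 1 GB and flush it to disk, then read it back; dd prints MB/s.
dd if=/dev/zero of=/path/on/suspect/disk/testfile bs=1M count=1024 conv=fdatasync
dd if=/path/on/suspect/disk/testfile of=/dev/null bs=1M
rm /path/on/suspect/disk/testfile
```

A healthy single disk of that era should report tens of MB/s sequentially, so a 3 MB/s result would indeed point at the hardware or the layout rather than at Postgres.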
> Setting shared_buffers that high will do you no good at all, as Richard
> suggests.
I (and my own tests) agree with you.

> You've got 1.5Gb of shared_buffers and > 2Gb data. In 8.0, the scan will
> hardly use the cache at all, nor will it ever, since the data is bigger
> than the cache. Notably, the scan of B should NOT spoil the cache for A
Are you sure of that? Is Postgres able to tell the OS "don't use the cache for this query"?

> Priming the cache is quite hard...but not impossible.
> What will kill you on a shared_buffers that big is the bgwriter, which
> you should turn off by setting bgwriter_maxpages = 0
Is the bgwriter relevant here, given that my application only runs SELECTs?
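For reference, the setting Simon suggests is just a one-line change in postgresql.conf (an 8.0-era parameter; a sketch, not a recommendation for read-only workloads where the bgwriter should have little to do anyway):

```shell
# postgresql.conf -- disable the background writer, as suggested above
# bgwriter_maxpages = 0
```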

> On the other hand...just go for more RAM, as you suggest...but you
> should create a RAMdisk, rather than use too large
> shared_buffers....that way your data is always in RAM, rather than maybe
> in RAM.
I am not a Linux expert: is it possible (and how) to create a RAMdisk?
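(A common way to do this on Linux is a tmpfs mount; the mount point and size below are placeholders, root is required, and anything stored there is lost on reboot, so it only suits data you can reload:)

```shell
# Create a RAM-backed filesystem capped at 4 GB (contents vanish on reboot)
mkdir -p /mnt/pgram
mount -t tmpfs -o size=4g tmpfs /mnt/pgram
```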

Thanks a lot for your help!
