Andrew Sullivan wrote:
> On Tue, Oct 21, 2003 at 03:11:17PM -0600, scott.marlowe wrote:
> > I think where it makes sense is when you have something like a report 
> > server where the result sets may be huge, but the parallel load is low, 
> > i.e. 5 or 10 users tossing around 100 Meg or more at a time.
> In our case, we were noticing that truss showed an unbelievable
> amount of time spent by the postmaster doing open() calls to the OS
> (this was on Solaris 7).  So we thought, "Let's try a 2G buffer
> size."  2G was more than enough to hold the entire data set under
> question.  Once the buffer started to fill, even plain SELECTs
> started taking a long time.  The buffer algorithm is just not that
> clever, was my conclusion.
> (Standard disclaimer: not a long, controlled test.  It's just a bit
> of gossip.)

I know this is an old email, but have you tested larger shared buffers
in CVS HEAD with Jan's new cache replacement policy?
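
For anyone wanting to repeat Andrew's experiment, a 2G buffer pool would be configured roughly like this in postgresql.conf (the 262144 figure assumes the default 8kB block size; adjust if your build uses a different BLCKSZ):

```
# shared_buffers is specified in 8kB disk blocks on these releases:
# 262144 blocks * 8kB = 2GB of shared buffer space.
shared_buffers = 262144
```

Note that the kernel's shmmax limit must also be raised to allow a shared memory segment that large.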

  Bruce Momjian                        |
  [EMAIL PROTECTED]               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073

---------------------------(end of broadcast)---------------------------
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]