Sorry, I forgot to mention:

When we were building the test case, we ran a lot of experiments with 1 GB of shared buffers and took a clear performance hit whenever buffer usage appeared to hit the 1 GB barrier. Increasing shared_buffers to 2 GB improved performance significantly (since the query's working set now "fits" in shared memory). This seems to contradict the conclusion that the problem is only an artifact of the way top reports memory.
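
For what it's worth, one way to double-check whether the working set actually fits is the contrib module pg_buffercache. A rough sketch (it assumes the module is installed, a server version that supports CREATE EXTENSION, and the default 8 kB block size):

  -- How full are the shared buffers, and which relations in the
  -- current database occupy them?
  CREATE EXTENSION IF NOT EXISTS pg_buffercache;

  SHOW shared_buffers;

  SELECT c.relname,
         count(*)                        AS buffers,
         pg_size_pretty(count(*) * 8192) AS buffered  -- 8 kB blocks assumed
  FROM   pg_buffercache b
  JOIN   pg_class c ON c.relfilenode = b.relfilenode
  WHERE  b.reldatabase = (SELECT oid FROM pg_database
                          WHERE  datname = current_database())
  GROUP  BY c.relname
  ORDER  BY buffers DESC
  LIMIT  10;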

Regards,

Michael A.

Tom Lane wrote:
Okay, I ran this with about 900MB of shared buffers (about as much as I
thought I could make it without descending into swap hell ...) and there
is no memory leak that I can see.  What I *do* see is that the process
size as reported by "top" quickly jumps to 900MB plus and then sits
there.  This is not a memory leak though, it is just a side effect of
the way "top" reports usage of shared memory.  Basically, a shared
buffer starts getting charged against a given process the first time
that process touches that buffer.  Your test case involves reading a lot
of blocks of pg_largeobject and that results in touching a lot of
buffers.
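
You can watch this happening with the contrib module pg_buffercache; a rough sketch, assuming it is installed and that pg_largeobject is the relation being scanned:

  -- Count the shared buffers currently holding pg_largeobject pages ...
  SELECT count(*) AS lo_buffers
  FROM   pg_buffercache
  WHERE  relfilenode = (SELECT relfilenode FROM pg_class
                        WHERE  relname = 'pg_largeobject');

  -- ... touch a lot of pg_largeobject blocks ...
  SELECT count(*) FROM pg_largeobject;

  -- ... and count again.  The extra buffers are shared memory, but "top"
  -- now charges them against this backend's process size.
  SELECT count(*) AS lo_buffers
  FROM   pg_buffercache
  WHERE  relfilenode = (SELECT relfilenode FROM pg_class
                        WHERE  relname = 'pg_largeobject');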

So basically I don't see a problem here.  If you are noticing a
performance issue in this area, it may indicate that you have
shared_buffers set too large, ie, using more RAM than the machine
can really afford to spare.  That leads to swapping which drives
performance down.
