On Mon, Feb 27, 2012 at 4:03 AM, Simon Riggs si...@2ndquadrant.com wrote:
So please use a scale factor that the hardware can cope with.
OK. I tested this out on Nate Boley's 32-core AMD machine, using
scale factor 100 and scale factor 300. I initialized it with Simon's
patch, which should have
On Tue, Feb 28, 2012 at 6:11 PM, Robert Haas robertmh...@gmail.com wrote:
On Mon, Feb 27, 2012 at 4:03 AM, Simon Riggs si...@2ndquadrant.com wrote:
So please use a scale factor that the hardware can cope with.
OK. I tested this out on Nate Boley's 32-core AMD machine, using
scale factor 100
On Sun, Feb 26, 2012 at 10:53 PM, Robert Haas robertmh...@gmail.com wrote:
On Sat, Feb 25, 2012 at 2:16 PM, Simon Riggs si...@2ndquadrant.com wrote:
On Wed, Feb 8, 2012 at 11:26 PM, Robert Haas robertmh...@gmail.com wrote:
Given that, I obviously cannot test this at this point,
Patch with
On Sat, Feb 25, 2012 at 2:16 PM, Simon Riggs si...@2ndquadrant.com wrote:
On Wed, Feb 8, 2012 at 11:26 PM, Robert Haas robertmh...@gmail.com wrote:
Given that, I obviously cannot test this at this point,
Patch with minor corrections attached here for further review.
All right, I will set up
On Wed, Feb 8, 2012 at 11:26 PM, Robert Haas robertmh...@gmail.com wrote:
Given that, I obviously cannot test this at this point,
Patch with minor corrections attached here for further review.
but let me go
ahead and theorize about how well it's likely to work. What Tom
suggested before
On Feb 9, 2012 1:27 AM, Robert Haas robertmh...@gmail.com wrote:
However, there is a potential fly in the ointment: in other cases in
which we've reduced contention at the LWLock layer, we've ended up
with very nasty contention at the spinlock layer that can sometimes
eat more CPU time than the
On Fri, Feb 10, 2012 at 7:01 PM, Ants Aasma ants.aa...@eesti.ee wrote:
On Feb 9, 2012 1:27 AM, Robert Haas robertmh...@gmail.com wrote:
However, there is a potential fly in the ointment: in other cases in
which we've reduced contention at the LWLock layer, we've ended up
with very nasty contention
On Sun, Jan 29, 2012 at 6:04 PM, Simon Riggs si...@2ndquadrant.com wrote:
On Sun, Jan 29, 2012 at 9:41 PM, Jeff Janes jeff.ja...@gmail.com wrote:
If I cast to an int, then I see advancement:
I'll initialise it as 0, rather than -1 and then we don't have a
problem in any circumstance.
I've
On Mon, Jan 30, 2012 at 12:24 PM, Robert Haas robertmh...@gmail.com wrote:
On Fri, Jan 27, 2012 at 8:21 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Fri, Jan 27, 2012 at 3:16 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Fri, Jan 27, 2012 at 4:05 PM, Jeff Janes jeff.ja...@gmail.com wrote:
Also, I think the general approach is wrong. The only reason to have
these
On Sat, Jan 28, 2012 at 1:52 PM, Simon Riggs si...@2ndquadrant.com wrote:
Also, I think the general approach is wrong. The only reason to have
these pages in shared memory is that we can control access to them to
prevent write/write and read/write corruption. Since these pages are
never
On Fri, Jan 27, 2012 at 10:05 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Sat, Jan 21, 2012 at 7:31 AM, Simon Riggs si...@2ndquadrant.com wrote:
Yes, it was. Sorry about that. New version attached, retesting while
you read this.
In my hands I could never get this patch to do anything. The
On Sun, Jan 29, 2012 at 12:18 PM, Simon Riggs si...@2ndquadrant.com wrote:
On Fri, Jan 27, 2012 at 10:05 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Sat, Jan 21, 2012 at 7:31 AM, Simon Riggs si...@2ndquadrant.com wrote:
Yes, it was. Sorry about that. New version attached, retesting while
On Sun, Jan 29, 2012 at 1:41 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Sun, Jan 29, 2012 at 12:18 PM, Simon Riggs si...@2ndquadrant.com wrote:
On Fri, Jan 27, 2012 at 10:05 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Sat, Jan 21, 2012 at 7:31 AM, Simon Riggs si...@2ndquadrant.com wrote:
On Sun, Jan 29, 2012 at 9:41 PM, Jeff Janes jeff.ja...@gmail.com wrote:
If I cast to an int, then I see advancement:
I'll initialise it as 0, rather than -1 and then we don't have a
problem in any circumstance.
I've specifically designed the pgbench changes required to simulate
conditions of
On Fri, Jan 27, 2012 at 10:05 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On Sat, Jan 21, 2012 at 7:31 AM, Simon Riggs si...@2ndquadrant.com wrote:
Yes, it was. Sorry about that. New version attached, retesting while
you read this.
In my hands I could never get this patch to do anything. The
On Thu, Jan 12, 2012 at 4:49 AM, Simon Riggs si...@2ndquadrant.com wrote:
On Thu, Jan 5, 2012 at 6:26 PM, Simon Riggs si...@2ndquadrant.com wrote:
Patch to remove clog contention caused by dirty clog LRU.
v2, minor changes, updated for recent commits
This no longer applies to file
On Sat, Jan 21, 2012 at 7:31 AM, Simon Riggs si...@2ndquadrant.com wrote:
Yes, it was. Sorry about that. New version attached, retesting while
you read this.
In my hands I could never get this patch to do anything. The new
cache was never used.
I think that that was because RecentXminPageno
On Fri, Jan 27, 2012 at 4:05 PM, Jeff Janes jeff.ja...@gmail.com wrote:
Also, I think the general approach is wrong. The only reason to have
these pages in shared memory is that we can control access to them to
prevent write/write and read/write corruption. Since these pages are
never
On Fri, Jan 27, 2012 at 3:16 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Fri, Jan 27, 2012 at 4:05 PM, Jeff Janes jeff.ja...@gmail.com wrote:
Also, I think the general approach is wrong. The only reason to have
these pages in shared memory is that we can control access to them to
prevent
On Fri, Jan 20, 2012 at 6:44 AM, Simon Riggs si...@2ndquadrant.com wrote:
OT: It would save lots of time if we had 2 things for the CF app:
..
2. Something that automatically tests patches. If you submit a patch
we run up a blank VM and run patch applies on all patches. As soon as
we get a
On Fri, Jan 20, 2012 at 10:44 AM, Robert Haas robertmh...@gmail.com wrote:
D'oh. You're right. Looks like I accidentally tried to apply this to
the 9.1 sources. Sigh...
No worries. It's Friday.
Server passed 'make check' with this patch, but when I tried to fire
it up for some test runs,
On Sat, Jan 21, 2012 at 1:57 PM, Robert Haas robertmh...@gmail.com wrote:
On Fri, Jan 20, 2012 at 10:44 AM, Robert Haas robertmh...@gmail.com wrote:
D'oh. You're right. Looks like I accidentally tried to apply this to
the 9.1 sources. Sigh...
No worries. It's Friday.
Server passed 'make
On Sun, Jan 8, 2012 at 9:25 AM, Simon Riggs si...@2ndquadrant.com wrote:
I've taken that idea and used it to build a second Clog cache, known
as ClogHistory which allows access to the read-only tail of pages in
the clog. Once a page has been written to for the last time, it will
be accessed
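The read-only/writable split described here is easy to model: once no transaction on a clog page can change state any more, readers of that page no longer need to serialize against writers. Below is a minimal, self-contained C sketch of that idea; the type names, slot count, and locking scheme are illustrative assumptions, not the patch's actual code.

#include <pthread.h>
#include <stddef.h>
#include <stdint.h>

#define HISTORY_SLOTS 32

typedef struct
{
    int64_t     frozen_before;              /* pages < this are immutable */
    int64_t     history_pageno[HISTORY_SLOTS];
    const char *history_page[HISTORY_SLOTS];
    pthread_mutex_t head_lock;              /* protects writable pages */
} ClogModel;

const char *
clog_read_page(ClogModel *clog, int64_t pageno)
{
    if (pageno < clog->frozen_before)
    {
        /* Read-only tail: no lock needed, contents can never change. */
        for (int i = 0; i < HISTORY_SLOTS; i++)
            if (clog->history_pageno[i] == pageno)
                return clog->history_page[i];
        return NULL;            /* would fall back to reading the file */
    }

    /* Writable head: must serialize against concurrent commits. */
    pthread_mutex_lock(&clog->head_lock);
    /* ... regular locked SLRU lookup would go here ... */
    pthread_mutex_unlock(&clog->head_lock);
    return NULL;
}

Because the history cache is immutable, a miss can simply fall through to reading the clog file, without ever touching the lock that protects the writable head.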
On Fri, Jan 20, 2012 at 1:37 PM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Jan 8, 2012 at 9:25 AM, Simon Riggs si...@2ndquadrant.com wrote:
I've taken that idea and used it to build a second Clog cache, known
as ClogHistory which allows access to the read-only tail of pages in
the clog.
On Fri, Jan 20, 2012 at 9:44 AM, Simon Riggs si...@2ndquadrant.com wrote:
On Fri, Jan 20, 2012 at 1:37 PM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Jan 8, 2012 at 9:25 AM, Simon Riggs si...@2ndquadrant.com wrote:
I've taken that idea and used it to build a second Clog cache, known
as
On Fri, Jan 20, 2012 at 10:16 AM, Simon Riggs si...@2ndquadrant.com wrote:
On Fri, Jan 20, 2012 at 1:37 PM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Jan 8, 2012 at 9:25 AM, Simon Riggs si...@2ndquadrant.com wrote:
I've taken that idea and used it to build a second Clog cache, known
as
On Fri, Jan 20, 2012 at 3:32 PM, Robert Haas robertmh...@gmail.com wrote:
On Fri, Jan 20, 2012 at 10:16 AM, Simon Riggs si...@2ndquadrant.com wrote:
On Fri, Jan 20, 2012 at 1:37 PM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Jan 8, 2012 at 9:25 AM, Simon Riggs si...@2ndquadrant.com wrote:
On Fri, Jan 20, 2012 at 10:38 AM, Simon Riggs si...@2ndquadrant.com wrote:
On Fri, Jan 20, 2012 at 3:32 PM, Robert Haas robertmh...@gmail.com wrote:
On Fri, Jan 20, 2012 at 10:16 AM, Simon Riggs si...@2ndquadrant.com wrote:
On Fri, Jan 20, 2012 at 1:37 PM, Robert Haas robertmh...@gmail.com
On Fri, Jan 20, 2012 at 1:37 PM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Jan 8, 2012 at 9:25 AM, Simon Riggs si...@2ndquadrant.com wrote:
I've taken that idea and used it to build a second Clog cache, known
as ClogHistory which allows access to the read-only tail of pages in
the clog.
On Sun, Jan 8, 2012 at 2:25 PM, Simon Riggs si...@2ndquadrant.com wrote:
I've taken that idea and used it to build a second Clog cache, known
as ClogHistory which allows access to the read-only tail of pages in
the clog. Once a page has been written to for the last time, it will
be accessed
On Thu, Jan 5, 2012 at 6:26 PM, Simon Riggs si...@2ndquadrant.com wrote:
Patch to remove clog contention caused by dirty clog LRU.
v2, minor changes, updated for recent commits
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training
On Thu, Jan 5, 2012 at 10:34 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
On Thu, Jan 5, 2012 at 2:57 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I would be in favor of that, or perhaps some other formula (eg, maybe
the minimum should be less than 8 for when
On Thu, Jan 5, 2012 at 5:34 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
On Thu, Jan 5, 2012 at 2:57 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I would be in favor of that, or perhaps some other formula (eg, maybe
the minimum should be less than 8 for when
Simon Riggs si...@2ndquadrant.com writes:
Please can we make it user configurable?
Weren't you just complaining that *I* was overcomplicating things?
I see no evidence to justify inventing a user-visible GUC here.
We have rough consensus on both the need for and the shape of a formula,
Robert Haas robertmh...@gmail.com writes:
After thinking about this a bit, I think the problem is that the
divisor we picked is still too high. Suppose we set num_clog_buffers
= (shared_buffers / 4MB), with a minimum of 4 and maximum of 32.
Works for me.
regards, tom
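For concreteness, that sizing rule in C looks like this; with 8kB pages, NBuffers / 512 works out to one CLOG buffer per 4MB of shared_buffers, so the clamp reproduces the min-4/max-32 proposal. (Essentially this formula is what was later committed as CLOGShmemBuffers().)

#define Min(x, y)   ((x) < (y) ? (x) : (y))
#define Max(x, y)   ((x) > (y) ? (x) : (y))

/* NBuffers is shared_buffers expressed in 8kB pages, as in the server. */
static int
clog_buffers(int NBuffers)
{
    return Min(32, Max(4, NBuffers / 512));
}

With shared_buffers = 16MB (2048 pages) this bottoms out at the minimum of 4; the cap of 32 is reached at 128MB and above.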
On Fri, Jan 6, 2012 at 3:55 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Simon Riggs si...@2ndquadrant.com writes:
Please can we make it user configurable?
Weren't you just complaining that *I* was overcomplicating things?
I see no evidence to justify inventing a user-visible GUC here.
We
On Fri, Jan 6, 2012 at 11:05 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
After thinking about this a bit, I think the problem is that the
divisor we picked is still too high. Suppose we set num_clog_buffers
= (shared_buffers / 4MB), with a minimum of 4 and
On Tue, Dec 27, 2011 at 5:23 AM, Simon Riggs si...@2ndquadrant.com wrote:
On Sat, Dec 24, 2011 at 9:25 AM, Simon Riggs si...@2ndquadrant.com wrote:
On Thu, Dec 22, 2011 at 4:20 PM, Robert Haas robertmh...@gmail.com wrote:
Also, if it is that, what do we do about it? I don't think any of the
On Thu, Jan 5, 2012 at 4:04 PM, Robert Haas robertmh...@gmail.com wrote:
It appears to me that increasing the number of CLOG buffers reduced
the severity of the latency spikes considerably. In the last 100
seconds, for example, master has several spikes in the 500-700ms
range, but with 32
Simon Riggs si...@2ndquadrant.com wrote:
Robert Haas robertmh...@gmail.com wrote:
So it seems that at least on this machine, increasing the number
of CLOG buffers both improves performance and reduces latency.
I believed before that the increase was worthwhile and now even
more so.
On Thu, Jan 5, 2012 at 4:04 PM, Robert Haas robertmh...@gmail.com wrote:
I hypothesize that there are actually two kinds of latency spikes
here. Just taking a wild guess, I wonder if the *remaining* latency
spikes are caused by the effect that you mentioned before: namely, the
need to write
On Thu, Jan 5, 2012 at 11:10 AM, Simon Riggs si...@2ndquadrant.com wrote:
Let's commit the change to 32.
I would like to do that, but I think we need to at least figure out a
way to provide an escape hatch for people without much shared memory.
We could do that, perhaps, by using a formula like
On Thu, Jan 5, 2012 at 7:12 PM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Jan 5, 2012 at 11:10 AM, Simon Riggs si...@2ndquadrant.com wrote:
Let's commit the change to 32.
I would like to do that, but I think we need to at least figure out a
way to provide an escape hatch for people
On Thu, Jan 5, 2012 at 2:21 PM, Simon Riggs si...@2ndquadrant.com wrote:
On Thu, Jan 5, 2012 at 7:12 PM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Jan 5, 2012 at 11:10 AM, Simon Riggs si...@2ndquadrant.com wrote:
Let's commit the change to 32.
I would like to do that, but I think we
On Thu, Jan 5, 2012 at 1:12 PM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Jan 5, 2012 at 11:10 AM, Simon Riggs si...@2ndquadrant.com wrote:
Let's commit the change to 32.
I would like to do that, but I think we need to at least figure out a
way to provide an escape hatch for people
On Thu, Jan 5, 2012 at 7:26 PM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Jan 5, 2012 at 2:21 PM, Simon Riggs si...@2ndquadrant.com wrote:
On Thu, Jan 5, 2012 at 7:12 PM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Jan 5, 2012 at 11:10 AM, Simon Riggs si...@2ndquadrant.com wrote:
Excerpts from Simon Riggs's message of Thu Jan 05 16:21:31 -0300 2012:
On Thu, Jan 5, 2012 at 7:12 PM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Jan 5, 2012 at 11:10 AM, Simon Riggs si...@2ndquadrant.com wrote:
Let's commit the change to 32.
I would like to do that, but I think we
Robert Haas robertmh...@gmail.com wrote:
Simon Riggs si...@2ndquadrant.com wrote:
Robert Haas robertmh...@gmail.com wrote:
Simon Riggs si...@2ndquadrant.com wrote:
Let's commit the change to 32.
I would like to do that, but I think we need to at least figure
out a way to provide an escape
Robert Haas robertmh...@gmail.com writes:
I would like to do that, but I think we need to at least figure out a
way to provide an escape hatch for people without much shared memory.
We could do that, perhaps, by using a formula like this:
1 CLOG buffer per 128MB of shared_buffers, with a
Simon Riggs si...@2ndquadrant.com writes:
On Thu, Jan 5, 2012 at 7:26 PM, Robert Haas robertmh...@gmail.com wrote:
On the other hand, I think there's a decent argument that he should
change his opinion, because 192kB of memory is not a lot. However,
what I mostly want is something that nobody
On Thu, Jan 5, 2012 at 7:57 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I think that the reason it's historically been a constant is that the
original coding took advantage of having a compile-time-constant number
of buffers --- but since we went over to the common SLRU infrastructure
for several
Simon Riggs si...@2ndquadrant.com writes:
Parameterised slru buffer sizes were proposed for 8.3 and opposed by
you.
I guess we all reserve the right to change our minds...
When presented with new data, sure. Robert's results offer a reason to
worry about this, which we did not have
On Thu, Jan 5, 2012 at 2:44 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
If we go with such a formula, I think 32 MB would be a more
appropriate divisor than 128 MB. Even on very large machines where
32 CLOG buffers would be a clear win, we often can't go above 1 or 2
GB of
On Thu, Jan 5, 2012 at 2:57 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I would be in favor of that, or perhaps some other formula (eg, maybe
the minimum should be less than 8 for when you've got very little shmem).
I have some results that show that, under the right set of
circumstances, 8-32 is a
On Thu, Jan 5, 2012 at 2:25 PM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Jan 5, 2012 at 2:44 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
If we go with such a formula, I think 32 MB would be a more
appropriate divisor than 128 MB. Even on very large machines where
32 CLOG
Robert Haas robertmh...@gmail.com writes:
On Thu, Jan 5, 2012 at 2:57 PM, Tom Lane t...@sss.pgh.pa.us wrote:
I would be in favor of that, or perhaps some other formula (eg, maybe
the minimum should be less than 8 for when you've got very little shmem).
I have some results that show that,
On Dec 20, 2011, at 11:29 PM, Tom Lane wrote:
Robert Haas robertmh...@gmail.com writes:
So, what do we do about this? The obvious answer is increase
NUM_CLOG_BUFFERS, and I'm not sure that's a bad idea.
As you say, that's likely to hurt people running in small shared
memory. I too have
On Sat, Dec 24, 2011 at 9:25 AM, Simon Riggs si...@2ndquadrant.com wrote:
On Thu, Dec 22, 2011 at 4:20 PM, Robert Haas robertmh...@gmail.com wrote:
Also, if it is that, what do we do about it? I don't think any of the
ideas proposed so far are going to help much.
If you don't like guessing,
On Thu, Dec 22, 2011 at 4:20 PM, Robert Haas robertmh...@gmail.com wrote:
You mentioned latency so this morning I ran pgbench with -l and
graphed the output. There are latency spikes every few seconds. I'm
attaching the overall graph as well as the graph of the last 100
seconds, where the
On Thu, Dec 22, 2011 at 1:04 AM, Simon Riggs si...@2ndquadrant.com wrote:
I understand why you say that and take no offence. All I can say is
last time I had access to a good test rig and well-structured
reporting and analysis I was able to see evidence of what I described
to you here.
I no
On Wed, Dec 21, 2011 at 5:33 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
... while the main buffer manager is
content with some loosey-goosey approximation of recency, the SLRU
code makes a fervent attempt at strict LRU (slightly compromised for
the sake
On Wed, Dec 21, 2011 at 12:33 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Oh btw, I haven't looked at that code recently, but I have a nasty
feeling that there are parts of it that assume that the number of
buffers it is managing is fairly small. Cranking up the number
might require more work than
On Wed, Dec 21, 2011 at 5:17 AM, Simon Riggs si...@2ndquadrant.com wrote:
With the increased performance we have now, I don't think increasing
that alone will be that useful since it doesn't solve all of the
problems and (I am told) likely reduces lookup speed.
I have benchmarks showing that
Robert Haas robertmh...@gmail.com wrote:
Any thoughts on what makes most sense here? I find it fairly
tempting to just crank up NUM_CLOG_BUFFERS and call it good,
The only thought I have to add to discussion so far is that the need
to do anything may be reduced significantly by any work to
On Wed, Dec 21, 2011 at 10:51 AM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Robert Haas robertmh...@gmail.com wrote:
Any thoughts on what makes most sense here? I find it fairly
tempting to just crank up NUM_CLOG_BUFFERS and call it good,
The only thought I have to add to discussion
Excerpts from Robert Haas's message of Wed Dec 21 13:18:36 -0300 2011:
There may be workloads where that will help, but it's definitely not
going to cover all cases. Consider my trusty
pgbench-at-scale-factor-100 test case: since the working set fits
inside shared buffers, we're only
Robert Haas robertmh...@gmail.com writes:
I think there probably are some scalability limits to the current
implementation, but also I think we could probably increase the
current value modestly with something less than a total rewrite.
Linearly scanning the slot array won't scale
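The scaling concern is concrete: SLRU replacement picks a victim by walking every slot while holding the control lock, so each page miss costs O(number of buffers). A simplified sketch of that scan (illustrative, not the slru.c source):

static int
select_lru_slot(int num_slots, const int recency[] /* higher = newer */)
{
    int best = 0;

    for (int slot = 1; slot < num_slots; slot++)    /* O(num_slots) */
        if (recency[slot] < recency[best])
            best = slot;
    return best;
}

Every miss pays for a full pass over the slot array under one lock, which is why simply cranking the buffer count up runs into diminishing returns.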
On Wed, Dec 21, 2011 at 3:28 PM, Robert Haas robertmh...@gmail.com wrote:
On Wed, Dec 21, 2011 at 5:17 AM, Simon Riggs si...@2ndquadrant.com wrote:
With the increased performance we have now, I don't think increasing
that alone will be that useful since it doesn't solve all of the
problems and
On Wed, Dec 21, 2011 at 11:48 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Agreed, the question is whether 32 is enough to fix the problem for
anything except this one benchmark.
Right. My thought on that topic is that it depends on what you mean
by fix. It's clearly NOT possible to keep enough
Robert Haas robertmh...@gmail.com writes:
On Wed, Dec 21, 2011 at 11:48 AM, Tom Lane t...@sss.pgh.pa.us wrote:
I'm inclined to think that that specific arrangement wouldn't be good.
The normal access pattern for CLOG is, I believe, an exponentially
decaying probability-of-access for each page
On Wed, Dec 21, 2011 at 1:09 PM, Tom Lane t...@sss.pgh.pa.us wrote:
It strikes me that one simple thing we could do is extend the current
heuristic that says pin the latest page. That is, pin the last K
pages into SLRU, and apply LRU or some other method across the rest.
If K is large enough,
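A sketch of that heuristic: since clog page numbers grow monotonically, the newest K pages are simply those within K of the latest, and only older slots compete under LRU. The names and shape here are assumptions for illustration, not committed code.

#include <stdbool.h>
#include <stdint.h>

static bool
slot_is_evictable(int64_t slot_pageno, int64_t latest_pageno, int K)
{
    if (slot_pageno > latest_pageno - K)
        return false;           /* recent page: keep it resident */
    return true;                /* older pages fall back to plain LRU */
}

If K covers the exponentially decaying bulk of the access distribution, the LRU machinery only has to be right about the long tail.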
On Wed, Dec 21, 2011 at 3:24 PM, Robert Haas robertmh...@gmail.com wrote:
I think there probably are some scalability limits to the current
implementation, but also I think we could probably increase the
current value modestly with something less than a total rewrite.
Linearly scanning the
On Wed, Dec 21, 2011 at 2:05 PM, Simon Riggs si...@2ndquadrant.com wrote:
On Wed, Dec 21, 2011 at 3:24 PM, Robert Haas robertmh...@gmail.com wrote:
I think there probably are some scalability limits to the current
implementation, but also I think we could probably increase the
current value
On Wed, Dec 21, 2011 at 12:48 PM, Robert Haas robertmh...@gmail.com wrote:
On the other hand, if we just want to avoid having more requests
simultaneously in flight than we have buffers, so that backends don't
need to wait for an available buffer before beginning their I/O, then
something on
On Wed, Dec 21, 2011 at 4:17 PM, Simon Riggs si...@2ndquadrant.com wrote:
Partitioning will give us more buffers and more LWlocks, to spread the
contention when we access the buffers. I use that word because its
what we call the technique already used in the buffer manager and lock
manager. If
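A minimal sketch of that partitioning technique, mapping page number modulo partition count to one of several independent locks, in the spirit of the buffer and lock managers' hash partitions (the partition count and names are hypothetical):

#include <pthread.h>
#include <stddef.h>
#include <stdint.h>

#define NUM_CLOG_PARTITIONS 16  /* hypothetical */

static pthread_mutex_t clog_partition_lock[NUM_CLOG_PARTITIONS];

static void
clog_locks_init(void)
{
    for (int i = 0; i < NUM_CLOG_PARTITIONS; i++)
        pthread_mutex_init(&clog_partition_lock[i], NULL);
}

/* Two backends contend only if their pages map to the same partition. */
static pthread_mutex_t *
clog_lock_for_page(int64_t pageno)
{
    return &clog_partition_lock[pageno % NUM_CLOG_PARTITIONS];
}

This spreads a single hot lock across NUM_CLOG_PARTITIONS cooler ones at the cost of slightly more shared memory and bookkeeping.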
On Thu, Dec 22, 2011 at 12:28 AM, Robert Haas robertmh...@gmail.com wrote:
But on the flip side, I feel like your discussion of the problems is a
bit hand-wavy. I think we need some real test cases that we can look
at and measure, not just an informal description of what we think is
Robert Haas robertmh...@gmail.com writes:
So, what do we do about this? The obvious answer is increase
NUM_CLOG_BUFFERS, and I'm not sure that's a bad idea.
As you say, that's likely to hurt people running in small shared
memory. I too have thought about merging the SLRU areas into the main
Robert Haas robertmh...@gmail.com writes:
... while the main buffer manager is
content with some loosey-goosey approximation of recency, the SLRU
code makes a fervent attempt at strict LRU (slightly compromised for
the sake of reduced locking in SimpleLruReadPage_Readonly).
Oh btw, I haven't