On Tue, Feb 28, 2012 at 6:11 PM, Robert Haas wrote:
> On Mon, Feb 27, 2012 at 4:03 AM, Simon Riggs wrote:
>> So please use a scale factor that the hardware can cope with.
>
> OK. I tested this out on Nate Boley's 32-core AMD machine, using
> scale factor 100 and scale factor 300. I initialized i
On Mon, Feb 27, 2012 at 4:03 AM, Simon Riggs wrote:
> So please use a scale factor that the hardware can cope with.
OK. I tested this out on Nate Boley's 32-core AMD machine, using
scale factor 100 and scale factor 300. I initialized it with Simon's
patch, which should have the effect of renderi
On Sun, Feb 26, 2012 at 10:53 PM, Robert Haas wrote:
> On Sat, Feb 25, 2012 at 2:16 PM, Simon Riggs wrote:
>> On Wed, Feb 8, 2012 at 11:26 PM, Robert Haas wrote:
>>> Given that, I obviously cannot test this at this point,
>>
>> Patch with minor corrections attached here for further review.
>
> All right, I will set up some benchmarks with this version, and
On Sat, Feb 25, 2012 at 2:16 PM, Simon Riggs wrote:
> On Wed, Feb 8, 2012 at 11:26 PM, Robert Haas wrote:
>> Given that, I obviously cannot test this at this point,
>
> Patch with minor corrections attached here for further review.
All right, I will set up some benchmarks with this version, and
On Wed, Feb 8, 2012 at 11:26 PM, Robert Haas wrote:
> Given that, I obviously cannot test this at this point,
Patch with minor corrections attached here for further review.
> but let me go
> ahead and theorize about how well it's likely to work. What Tom
> suggested before (and after some refl
On Fri, Feb 10, 2012 at 7:01 PM, Ants Aasma wrote:
>
> On Feb 9, 2012 1:27 AM, "Robert Haas" wrote:
>
>> However, there is a potential fly in the ointment: in other cases in
>> which we've reduced contention at the LWLock layer, we've ended up
>> with very nasty contention at the spinlock layer that can sometimes
>> eat more CPU time than the LWLock contention did
On Feb 9, 2012 1:27 AM, "Robert Haas" wrote:
> However, there is a potential fly in the ointment: in other cases in
> which we've reduced contention at the LWLock layer, we've ended up
> with very nasty contention at the spinlock layer that can sometimes
> eat more CPU time than the LWLock contention did
On Sun, Jan 29, 2012 at 6:04 PM, Simon Riggs wrote:
> On Sun, Jan 29, 2012 at 9:41 PM, Jeff Janes wrote:
>
>> If I cast to an int, then I see advancement:
>
> I'll initialise it as 0, rather than -1 and then we don't have a
> problem in any circumstance.
>
>
>>> I've specifically designed the pgbench changes required to simulate
>>> conditions of clog contention
On Mon, Jan 30, 2012 at 12:24 PM, Robert Haas wrote:
> On Fri, Jan 27, 2012 at 8:21 PM, Jeff Janes wrote:
>> On Fri, Jan 27, 2012 at 3:16 PM, Merlin Moncure wrote:
>>> On Fri, Jan 27, 2012 at 4:05 PM, Jeff Janes wrote:
>>>> Also, I think the general approach is wrong. The only reason to have
>
On Fri, Jan 27, 2012 at 8:21 PM, Jeff Janes wrote:
> On Fri, Jan 27, 2012 at 3:16 PM, Merlin Moncure wrote:
>> On Fri, Jan 27, 2012 at 4:05 PM, Jeff Janes wrote:
>>> Also, I think the general approach is wrong. The only reason to have
>> these pages in shared memory is that we can control access to them to
>> prevent write/write and read/write corruption.
On Sun, Jan 29, 2012 at 9:41 PM, Jeff Janes wrote:
> If I cast to an int, then I see advancement:
I'll initialise it as 0, rather than -1 and then we don't have a
problem in any circumstance.
>> I've specifically designed the pgbench changes required to simulate
>> conditions of clog contention
On Sun, Jan 29, 2012 at 1:41 PM, Jeff Janes wrote:
> On Sun, Jan 29, 2012 at 12:18 PM, Simon Riggs wrote:
>> On Fri, Jan 27, 2012 at 10:05 PM, Jeff Janes wrote:
>>> On Sat, Jan 21, 2012 at 7:31 AM, Simon Riggs wrote:
>>>> Yes, it was. Sorry about that. New version attached, retesting while
>>>> you read this.
On Sun, Jan 29, 2012 at 12:18 PM, Simon Riggs wrote:
> On Fri, Jan 27, 2012 at 10:05 PM, Jeff Janes wrote:
>> On Sat, Jan 21, 2012 at 7:31 AM, Simon Riggs wrote:
>>>
>>> Yes, it was. Sorry about that. New version attached, retesting while
>>> you read this.
>>
>> In my hands I could never get this patch to do anything. The new
>> cache was never used.
On Fri, Jan 27, 2012 at 10:05 PM, Jeff Janes wrote:
> On Sat, Jan 21, 2012 at 7:31 AM, Simon Riggs wrote:
>>
>> Yes, it was. Sorry about that. New version attached, retesting while
>> you read this.
>
> In my hands I could never get this patch to do anything. The new
> cache was never used.
>
>
On Sat, Jan 28, 2012 at 1:52 PM, Simon Riggs wrote:
>> Also, I think the general approach is wrong. The only reason to have
>> these pages in shared memory is that we can control access to them to
>> prevent write/write and read/write corruption. Since these pages are
>> never written, they don
On Thu, Jan 12, 2012 at 4:49 AM, Simon Riggs wrote:
> On Thu, Jan 5, 2012 at 6:26 PM, Simon Riggs wrote:
>
>> Patch to remove clog contention caused by dirty clog LRU.
>
> v2, minor changes, updated for recent commits
This no longer applies to file src/backend/postmaster/bgwriter.c, due
to the l
On Fri, Jan 27, 2012 at 3:16 PM, Merlin Moncure wrote:
> On Fri, Jan 27, 2012 at 4:05 PM, Jeff Janes wrote:
>> Also, I think the general approach is wrong. The only reason to have
>> these pages in shared memory is that we can control access to them to
>> prevent write/write and read/write corruption.
On Fri, Jan 27, 2012 at 4:05 PM, Jeff Janes wrote:
> Also, I think the general approach is wrong. The only reason to have
> these pages in shared memory is that we can control access to them to
> prevent write/write and read/write corruption. Since these pages are
> never written, they don't nee
On Sat, Jan 21, 2012 at 7:31 AM, Simon Riggs wrote:
>
> Yes, it was. Sorry about that. New version attached, retesting while
> you read this.
In my hands I could never get this patch to do anything. The new
cache was never used.
I think that that was because RecentXminPageno never budged from -1.
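For illustration only (the patch's actual declarations are not shown in these excerpts): the "cast to an int" symptom and the later switch from a -1 start value to 0 are consistent with the usual C signed/unsigned comparison trap. A minimal standalone sketch of that trap, with made-up variable names:

#include <stdio.h>

int main(void)
{
    /* Hypothetical illustration; not the patch's actual variables. */
    unsigned int current_pageno = 5;            /* e.g. a freshly computed clog page number */
    unsigned int sentinel = (unsigned int) -1;  /* -1 stored in an unsigned variable: 4294967295 */

    /* The "has the page number advanced past the sentinel?" test never fires,
     * because -1 has wrapped around to the largest possible value. */
    if (current_pageno > sentinel)
        printf("advanced\n");
    else
        printf("never advances: %u > %u is false\n", current_pageno, sentinel);

    /* Jeff's observation: compared as int, the test behaves as intended
     * (on typical platforms, where the conversion gives back -1). */
    if ((int) current_pageno > (int) sentinel)
        printf("advances when compared as int\n");

    /* Simon's alternative: start at 0 instead of -1, so there is no
     * sentinel wrap-around to worry about in the first place. */
    unsigned int start_at_zero = 0;
    if (current_pageno > start_at_zero)
        printf("advances with a start value of 0\n");

    return 0;
}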
On Fri, Jan 20, 2012 at 6:44 AM, Simon Riggs wrote:
>
> OT: It would save lots of time if we had 2 things for the CF app:
>
..
> 2. Something that automatically tests patches. If you submit a patch
> we run up a blank VM and run patch applies on all patches. As soon as
> we get a fail, an email go
On Sat, Jan 21, 2012 at 1:57 PM, Robert Haas wrote:
> On Fri, Jan 20, 2012 at 10:44 AM, Robert Haas wrote:
>>>> D'oh. You're right. Looks like I accidentally tried to apply this to
>>>> the 9.1 sources. Sigh...
>>>
>>> No worries. It's Friday.
>
> Server passed 'make check' with this patch, but when I tried to fire
> it up for some test runs, it fell over
On Fri, Jan 20, 2012 at 10:44 AM, Robert Haas wrote:
>>> D'oh. You're right. Looks like I accidentally tried to apply this to
>>> the 9.1 sources. Sigh...
>>
>> No worries. It's Friday.
Server passed 'make check' with this patch, but when I tried to fire
it up for some test runs, it fell over
On Fri, Jan 20, 2012 at 1:37 PM, Robert Haas wrote:
> On Sun, Jan 8, 2012 at 9:25 AM, Simon Riggs wrote:
>> I've taken that idea and used it to build a second Clog cache, known
>> as ClogHistory which allows access to the read-only tail of pages in
>> the clog. Once a page has been written to for
On Fri, Jan 20, 2012 at 10:38 AM, Simon Riggs wrote:
> On Fri, Jan 20, 2012 at 3:32 PM, Robert Haas wrote:
>> On Fri, Jan 20, 2012 at 10:16 AM, Simon Riggs wrote:
>>> On Fri, Jan 20, 2012 at 1:37 PM, Robert Haas wrote:
>>>> On Sun, Jan 8, 2012 at 9:25 AM, Simon Riggs wrote:
>>>>> I've taken th
On Fri, Jan 20, 2012 at 3:32 PM, Robert Haas wrote:
> On Fri, Jan 20, 2012 at 10:16 AM, Simon Riggs wrote:
>> On Fri, Jan 20, 2012 at 1:37 PM, Robert Haas wrote:
>>> On Sun, Jan 8, 2012 at 9:25 AM, Simon Riggs wrote:
>>>> I've taken that idea and used it to build a second Clog cache, known
On Fri, Jan 20, 2012 at 10:16 AM, Simon Riggs wrote:
> On Fri, Jan 20, 2012 at 1:37 PM, Robert Haas wrote:
>> On Sun, Jan 8, 2012 at 9:25 AM, Simon Riggs wrote:
>>> I've taken that idea and used it to build a second Clog cache, known
>>> as ClogHistory which allows access to the read-only tail o
On Fri, Jan 20, 2012 at 9:44 AM, Simon Riggs wrote:
> On Fri, Jan 20, 2012 at 1:37 PM, Robert Haas wrote:
>> On Sun, Jan 8, 2012 at 9:25 AM, Simon Riggs wrote:
>>> I've taken that idea and used it to build a second Clog cache, known
>>> as ClogHistory which allows access to the read-only tail of
On Fri, Jan 20, 2012 at 1:37 PM, Robert Haas wrote:
> On Sun, Jan 8, 2012 at 9:25 AM, Simon Riggs wrote:
>> I've taken that idea and used it to build a second Clog cache, known
>> as ClogHistory which allows access to the read-only tail of pages in
>> the clog. Once a page has been written to for
On Sun, Jan 8, 2012 at 9:25 AM, Simon Riggs wrote:
> I've taken that idea and used it to build a second Clog cache, known
> as ClogHistory which allows access to the read-only tail of pages in
> the clog. Once a page has been written to for the last time, it will
> be accessed via the ClogHistory
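To make the ClogHistory idea concrete, here is a much-simplified standalone sketch of the dispatch it implies: recent, still-writable pages go through the normal clog SLRU, while pages behind the "last write" boundary are served from a read-only cache. The function names, the boundary variable and its value are invented for illustration and are not taken from the patch.

#include <stdint.h>
#include <stdio.h>

typedef uint32_t TransactionId;

/* 32K transaction status entries per clog page: 8192-byte pages, 4 xacts per byte */
#define CLOG_XACTS_PER_PAGE (8192 * 4)
#define TransactionIdToPage(xid) ((xid) / CLOG_XACTS_PER_PAGE)

/* Invented boundary: oldest xid whose clog page could still receive writes
 * (something derived from the oldest running transaction).  Pages entirely
 * before it are read-only history. */
static TransactionId history_boundary_xid = 3000000;

static int
clog_read_status(TransactionId xid)          /* normal, writable SLRU path */
{
    printf("xid %u -> page %u via Clog (page may still be written)\n",
           (unsigned) xid, (unsigned) TransactionIdToPage(xid));
    return 0;
}

static int
clog_history_read_status(TransactionId xid)  /* read-only tail, cheaper shared access */
{
    printf("xid %u -> page %u via ClogHistory (read-only tail)\n",
           (unsigned) xid, (unsigned) TransactionIdToPage(xid));
    return 0;
}

/* Dispatch: once a page can no longer receive writes, serve it from the
 * read-only cache and keep the hot, writable SLRU for recent pages only. */
static int
transaction_status(TransactionId xid)
{
    if (TransactionIdToPage(xid) < TransactionIdToPage(history_boundary_xid))
        return clog_history_read_status(xid);
    return clog_read_status(xid);
}

int main(void)
{
    transaction_status(100000);     /* old xid: page 3, read-only */
    transaction_status(3100000);    /* recent xid: page 94, still writable */
    return 0;
}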
On Thu, Jan 5, 2012 at 6:26 PM, Simon Riggs wrote:
> Patch to remove clog contention caused by dirty clog LRU.
v2, minor changes, updated for recent commits
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
diff --git a/src
On Sun, Jan 8, 2012 at 2:25 PM, Simon Riggs wrote:
> I've taken that idea and used it to build a second Clog cache, known
> as ClogHistory which allows access to the read-only tail of pages in
> the clog. Once a page has been written to for the last time, it will
> be accessed via the ClogHistory
On Fri, Jan 6, 2012 at 11:05 AM, Tom Lane wrote:
> Robert Haas writes:
>> After thinking about this a bit, I think the problem is that the
>> divisor we picked is still too high. Suppose we set num_clog_buffers
>> = (shared_buffers / 4MB), with a minimum of 4 and maximum of 32.
>
> Works for me.
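For concreteness, here is the formula as worded above, expressed over NBuffers (shared_buffers counted in 8kB pages, so 4MB corresponds to 512 of them, assuming the usual block size). This is a sketch of the proposal in this thread, not necessarily the function name or exact code that ends up committed.

#include <stdio.h>

/* shared_buffers is tracked internally as NBuffers, a count of 8kB buffers,
 * so 4MB of shared_buffers corresponds to 512 of them. */
#define BUFFERS_PER_4MB ((4 * 1024 * 1024) / 8192)   /* = 512 */

static int
clog_buffers_for(int nbuffers)
{
    int n = nbuffers / BUFFERS_PER_4MB;   /* one clog buffer per 4MB of shared_buffers */

    if (n < 4)
        n = 4;                            /* floor for very small shared memory */
    if (n > 32)
        n = 32;                           /* cap: 32 x 8kB = 256kB spent on clog buffers */
    return n;
}

int main(void)
{
    int settings_mb[] = {16, 32, 64, 128, 512};

    for (int i = 0; i < (int) (sizeof(settings_mb) / sizeof(settings_mb[0])); i++)
    {
        int nbuffers = settings_mb[i] * 1024 / 8;    /* MB -> number of 8kB buffers */

        printf("shared_buffers = %4d MB -> %2d clog buffers\n",
               settings_mb[i], clog_buffers_for(nbuffers));
    }
    return 0;
}

With a 4MB divisor the 32-buffer cap is already reached at 128MB of shared_buffers.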
On Fri, Jan 6, 2012 at 3:55 PM, Tom Lane wrote:
> Simon Riggs writes:
>> Please can we either make it user configurable?
>
> Weren't you just complaining that *I* was overcomplicating things?
> I see no evidence to justify inventing a user-visible GUC here.
> We have rough consensus on both the need for and the shape of a formula,
Robert Haas writes:
> After thinking about this a bit, I think the problem is that the
> divisor we picked is still too high. Suppose we set num_clog_buffers
> = (shared_buffers / 4MB), with a minimum of 4 and maximum of 32.
Works for me.
regards, tom lane
Simon Riggs writes:
> Please can we either make it user configurable?
Weren't you just complaining that *I* was overcomplicating things?
I see no evidence to justify inventing a user-visible GUC here.
We have rough consensus on both the need for and the shape of a formula,
with just minor discuss
On Thu, Jan 5, 2012 at 5:34 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Thu, Jan 5, 2012 at 2:57 PM, Tom Lane wrote:
>>> I would be in favor of that, or perhaps some other formula (eg, maybe
>>> the minimum should be less than 8 for when you've got very little shmem).
>
>> I have some result
On Thu, Jan 5, 2012 at 10:34 PM, Tom Lane wrote:
> Robert Haas writes:
>> On Thu, Jan 5, 2012 at 2:57 PM, Tom Lane wrote:
>>> I would be in favor of that, or perhaps some other formula (eg, maybe
>>> the minimum should be less than 8 for when you've got very little shmem).
>
>> I have some resul
Robert Haas writes:
> On Thu, Jan 5, 2012 at 2:57 PM, Tom Lane wrote:
>> I would be in favor of that, or perhaps some other formula (eg, maybe
>> the minimum should be less than 8 for when you've got very little shmem).
> I have some results that show that, under the right set of
> circumstances
On Thu, Jan 5, 2012 at 2:25 PM, Robert Haas wrote:
> On Thu, Jan 5, 2012 at 2:44 PM, Kevin Grittner wrote:
>> If we go with such a formula, I think 32 MB would be a more
>> appropriate divisor than 128 MB. Even on very large machines where
>> 32 CLOG buffers would be a clear win, we often can'
On Thu, Jan 5, 2012 at 2:57 PM, Tom Lane wrote:
> I would be in favor of that, or perhaps some other formula (eg, maybe
> the minimum should be less than 8 for when you've got very little shmem).
I have some results that show that, under the right set of
circumstances, 8->32 is a win, and I can q
On Thu, Jan 5, 2012 at 2:44 PM, Kevin Grittner wrote:
> If we go with such a formula, I think 32 MB would be a more
> appropriate divisor than 128 MB. Even on very large machines where
> 32 CLOG buffers would be a clear win, we often can't go above 1 or 2
> GB of shared_buffers without hitting la
Simon Riggs writes:
> Parameterised slru buffer sizes were proposed for 8.3 and opposed by
> you.
> I guess we all reserve the right to change our minds...
When presented with new data, sure. Robert's results offer a reason to
worry about this, which we did not have before now.
On Thu, Jan 5, 2012 at 7:57 PM, Tom Lane wrote:
> I think that the reason it's historically been a constant is that the
> original coding took advantage of having a compile-time-constant number
> of buffers --- but since we went over to the common SLRU infrastructure
> for several different logs,
Simon Riggs writes:
> On Thu, Jan 5, 2012 at 7:26 PM, Robert Haas wrote:
>> On the other hand, I think there's a decent argument that he should
>> change his opinion, because 192kB of memory is not a lot. However,
>> what I mostly want is something that nobody hates, so we can get it
>> committed.
Robert Haas writes:
> I would like to do that, but I think we need to at least figure out a
> way to provide an escape hatch for people without much shared memory.
> We could do that, perhaps, by using a formula like this:
> 1 CLOG buffer per 128MB of shared_buffers, with a minimum of 8 and a
> m
Robert Haas wrote:
> Simon Riggs wrote:
>> Robert Haas wrote:
>>> Simon Riggs wrote:
>>>> Let's commit the change to 32.
>>>
>>> I would like to do that, but I think we need to at least figure
>>> out a way to provide an escape hatch for people without much
>>> shared memory. We could do that,
Excerpts from Simon Riggs's message of Thu Jan 05 16:21:31 -0300 2012:
> On Thu, Jan 5, 2012 at 7:12 PM, Robert Haas wrote:
> > On Thu, Jan 5, 2012 at 11:10 AM, Simon Riggs wrote:
> >> Let's commit the change to 32.
> >
> > I would like to do that, but I think we need to at least figure out a
>
On Thu, Jan 5, 2012 at 7:26 PM, Robert Haas wrote:
> On Thu, Jan 5, 2012 at 2:21 PM, Simon Riggs wrote:
>> On Thu, Jan 5, 2012 at 7:12 PM, Robert Haas wrote:
>>> On Thu, Jan 5, 2012 at 11:10 AM, Simon Riggs wrote:
>>>> Let's commit the change to 32.
>>>
>>> I would like to do that, but I think
On Thu, Jan 5, 2012 at 1:12 PM, Robert Haas wrote:
> On Thu, Jan 5, 2012 at 11:10 AM, Simon Riggs wrote:
>> Let's commit the change to 32.
>
> I would like to do that, but I think we need to at least figure out a
> way to provide an escape hatch for people without much shared memory.
> We could d
On Thu, Jan 5, 2012 at 2:21 PM, Simon Riggs wrote:
> On Thu, Jan 5, 2012 at 7:12 PM, Robert Haas wrote:
>> On Thu, Jan 5, 2012 at 11:10 AM, Simon Riggs wrote:
>>> Let's commit the change to 32.
>>
>> I would like to do that, but I think we need to at least figure out a
>> way to provide an escap
On Thu, Jan 5, 2012 at 7:12 PM, Robert Haas wrote:
> On Thu, Jan 5, 2012 at 11:10 AM, Simon Riggs wrote:
>> Let's commit the change to 32.
>
> I would like to do that, but I think we need to at least figure out a
> way to provide an escape hatch for people without much shared memory.
> We could d
On Thu, Jan 5, 2012 at 11:10 AM, Simon Riggs wrote:
> Let's commit the change to 32.
I would like to do that, but I think we need to at least figure out a
way to provide an escape hatch for people without much shared memory.
We could do that, perhaps, by using a formula like this:
1 CLOG buffer per 128MB of shared_buffers, with a minimum of 8
On Thu, Jan 5, 2012 at 4:04 PM, Robert Haas wrote:
> I hypothesize that there are actually two kinds of latency spikes
> here. Just taking a wild guess, I wonder if the *remaining* latency
> spikes are caused by the effect that you mentioned before: namely, the
> need to write an old CLOG page e
Simon Riggs wrote:
> Robert Haas wrote:
>> So it seems that at least on this machine, increasing the number
>> of CLOG buffers both improves performance and reduces latency.
>
> I believed before that the increase was worthwhile and now even
> more so.
>
> Let's commit the change to 32.
+1
On Thu, Jan 5, 2012 at 4:04 PM, Robert Haas wrote:
> It appears to me that increasing the number of CLOG buffers reduced
> the severity of the latency spikes considerably. In the last 100
> seconds, for example, master has several spikes in the 500-700ms
> range, but with 32 CLOG buffers it neve
On Tue, Dec 27, 2011 at 5:23 AM, Simon Riggs wrote:
> On Sat, Dec 24, 2011 at 9:25 AM, Simon Riggs wrote:
>> On Thu, Dec 22, 2011 at 4:20 PM, Robert Haas wrote:
>
>>> Also, if it is that, what do we do about it? I don't think any of the
>>> ideas proposed so far are going to help much.
>>
>> If
On Dec 20, 2011, at 11:29 PM, Tom Lane wrote:
> Robert Haas writes:
>> So, what do we do about this? The obvious answer is "increase
>> NUM_CLOG_BUFFERS", and I'm not sure that's a bad idea.
>
> As you say, that's likely to hurt people running in small shared
> memory. I too have thought about merging the SLRU areas into the main
> shared buffer arena
On Sat, Dec 24, 2011 at 9:25 AM, Simon Riggs wrote:
> On Thu, Dec 22, 2011 at 4:20 PM, Robert Haas wrote:
>> Also, if it is that, what do we do about it? I don't think any of the
>> ideas proposed so far are going to help much.
>
> If you don't like guessing, don't guess, don't think. Just measure.
On Thu, Dec 22, 2011 at 4:20 PM, Robert Haas wrote:
> You mentioned "latency" so this morning I ran pgbench with -l and
> graphed the output. There are latency spikes every few seconds. I'm
> attaching the overall graph as well as the graph of the last 100
> seconds, where the spikes are easier
On Thu, Dec 22, 2011 at 1:04 AM, Simon Riggs wrote:
> I understand why you say that and take no offence. All I can say is
> last time I had access to a good test rig and well structured
> reporting and analysis I was able to see evidence of what I described
> to you here.
>
> I no longer have that
On Thu, Dec 22, 2011 at 12:28 AM, Robert Haas wrote:
> But on the flip side, I feel like your discussion of the problems is a
> bit hand-wavy. I think we need some real test cases that we can look
> at and measure, not just an informal description of what we think is
> happening.
I understand w
On Wed, Dec 21, 2011 at 4:17 PM, Simon Riggs wrote:
> Partitioning will give us more buffers and more LWlocks, to spread the
> contention when we access the buffers. I use that word because it's
> what we call the technique already used in the buffer manager and lock
> manager. If you wish to call
On Wed, Dec 21, 2011 at 12:48 PM, Robert Haas wrote:
> On the other hand, if we just want to avoid having more requests
> simultaneously in flight than we have buffers, so that backends don't
> need to wait for an available buffer before beginning their I/O, then
> something on the order of the nu
On Wed, Dec 21, 2011 at 2:05 PM, Simon Riggs wrote:
> On Wed, Dec 21, 2011 at 3:24 PM, Robert Haas wrote:
>> I think there probably are some scalability limits to the current
>> implementation, but also I think we could probably increase the
>> current value modestly with something less than a to
On Wed, Dec 21, 2011 at 3:24 PM, Robert Haas wrote:
> I think there probably are some scalability limits to the current
> implementation, but also I think we could probably increase the
> current value modestly with something less than a total rewrite.
> Linearly scanning the slot array won't scale indefinitely
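The linear scan being referred to: an SLRU page lookup walks the slot array to find its page, so the per-access cost grows with the number of buffers. A stripped-down standalone sketch of that access pattern (loosely modelled on the slot search, not the real slru.c code):

#include <stdio.h>

#define NUM_SLOTS 32                 /* e.g. the clog buffer count under discussion */

static int slot_pageno[NUM_SLOTS];   /* which page each buffer slot currently holds */

/* Find the slot holding 'pageno', or -1 on a miss: an O(NUM_SLOTS) walk on
 * every access, cheap at 8 or 32 slots but not if the array were made huge. */
static int
find_slot(int pageno)
{
    for (int slot = 0; slot < NUM_SLOTS; slot++)
    {
        if (slot_pageno[slot] == pageno)
            return slot;
    }
    return -1;
}

int main(void)
{
    for (int slot = 0; slot < NUM_SLOTS; slot++)
        slot_pageno[slot] = 100 + slot;       /* pretend pages 100..131 are cached */

    printf("page 117 -> slot %d\n", find_slot(117));
    printf("page 999 -> slot %d (miss)\n", find_slot(999));
    return 0;
}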
On Wed, Dec 21, 2011 at 1:09 PM, Tom Lane wrote:
> It strikes me that one simple thing we could do is extend the current
> heuristic that says "pin the latest page". That is, pin the last K
> pages into SLRU, and apply LRU or some other method across the rest.
> If K is large enough, that should
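A standalone sketch of what such a victim-selection rule could look like: protect the K most recent pages and evict the least recently used slot among the rest. K, the bookkeeping and the names here are illustrative only, not the real slru.c structures.

#include <stdio.h>

#define NUM_SLOTS 16
#define PROTECT_LAST_K 4    /* never evict the K newest clog pages; K is made up here */

static int slot_pageno[NUM_SLOTS];      /* page held by each slot */
static int slot_lru_count[NUM_SLOTS];   /* smaller = less recently used */

/* Pick an eviction victim: skip any slot holding one of the K most recent
 * pages, then take the least recently used slot among the rest. */
static int
select_victim(int latest_pageno)
{
    int best_slot = -1;

    for (int slot = 0; slot < NUM_SLOTS; slot++)
    {
        if (slot_pageno[slot] > latest_pageno - PROTECT_LAST_K)
            continue;       /* part of the protected recent tail: never a victim */
        if (best_slot == -1 || slot_lru_count[slot] < slot_lru_count[best_slot])
            best_slot = slot;
    }
    return best_slot;       /* -1 would mean every slot holds a protected page */
}

int main(void)
{
    for (int slot = 0; slot < NUM_SLOTS; slot++)
    {
        slot_pageno[slot] = 200 + slot;          /* pages 200..215; 215 is the newest */
        slot_lru_count[slot] = (slot * 7) % 13;  /* arbitrary recency values */
    }

    int victim = select_victim(215);
    printf("victim: slot %d, page %d\n", victim, slot_pageno[victim]);
    return 0;
}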
Robert Haas writes:
> On Wed, Dec 21, 2011 at 11:48 AM, Tom Lane wrote:
>> I'm inclined to think that that specific arrangement wouldn't be good.
>> The normal access pattern for CLOG is, I believe, an exponentially
>> decaying probability-of-access for each page as you go further back from
>> cu
On Wed, Dec 21, 2011 at 11:48 AM, Tom Lane wrote:
> Agreed, the question is whether 32 is enough to fix the problem for
> anything except this one benchmark.
Right. My thought on that topic is that it depends on what you mean
by "fix". It's clearly NOT possible to keep enough CLOG buffers
around
On Wed, Dec 21, 2011 at 3:28 PM, Robert Haas wrote:
> On Wed, Dec 21, 2011 at 5:17 AM, Simon Riggs wrote:
>> With the increased performance we have now, I don't think increasing
>> that alone will be that useful since it doesn't solve all of the
>> problems and (I am told) likely increases lookup
Robert Haas writes:
> I think there probably are some scalability limits to the current
> implementation, but also I think we could probably increase the
> current value modestly with something less than a total rewrite.
> Linearly scanning the slot array won't scale indefinitely, but I think
> it
Excerpts from Robert Haas's message of Wed Dec 21 13:18:36 -0300 2011:
> There may be workloads where that will help, but it's definitely not
> going to cover all cases. Consider my trusty
> pgbench-at-scale-factor-100 test case: since the working set fits
> inside shared buffers, we're only wri
On Wed, Dec 21, 2011 at 10:51 AM, Kevin Grittner wrote:
> Robert Haas wrote:
>> Any thoughts on what makes most sense here? I find it fairly
>> tempting to just crank up NUM_CLOG_BUFFERS and call it good,
>
> The only thought I have to add to discussion so far is that the need
> to do anything m
Robert Haas wrote:
> Any thoughts on what makes most sense here? I find it fairly
> tempting to just crank up NUM_CLOG_BUFFERS and call it good,
The only thought I have to add to discussion so far is that the need
to do anything may be reduced significantly by any work to write
hint bits more
On Wed, Dec 21, 2011 at 5:17 AM, Simon Riggs wrote:
> With the increased performance we have now, I don't think increasing
> that alone will be that useful since it doesn't solve all of the
> problems and (I am told) likely increases lookup speed.
I have benchmarks showing that it works, for what
On Wed, Dec 21, 2011 at 12:33 AM, Tom Lane wrote:
> Oh btw, I haven't looked at that code recently, but I have a nasty
> feeling that there are parts of it that assume that the number of
> buffers it is managing is fairly small. Cranking up the number
> might require more work than just changing
On Wed, Dec 21, 2011 at 5:33 AM, Tom Lane wrote:
> Robert Haas writes:
>> ... while the main buffer manager is
>> content with some loosey-goosey approximation of recency, the SLRU
>> code makes a fervent attempt at strict LRU (slightly compromised for
>> the sake of reduced locking in SimpleLruReadPage_Readonly).
Robert Haas writes:
> ... while the main buffer manager is
> content with some loosey-goosey approximation of recency, the SLRU
> code makes a fervent attempt at strict LRU (slightly compromised for
> the sake of reduced locking in SimpleLruReadPage_Readonly).
Oh btw, I haven't looked at that cod
Robert Haas writes:
> So, what do we do about this? The obvious answer is "increase
> NUM_CLOG_BUFFERS", and I'm not sure that's a bad idea.
As you say, that's likely to hurt people running in small shared
memory. I too have thought about merging the SLRU areas into the main
shared buffer arena