On Thu, May 15, 2014 at 8:06 AM, Bruce Momjian wrote:
> On Tue, May 6, 2014 at 11:15:17PM +0100, Simon Riggs wrote:
>> > Well, for what it's worth, I've encountered systems where setting
>> > effective_cache_size too low resulted in bad query plans, but I've
>> > never encountered the reverse situation.
On Thu, May 15, 2014 at 11:36:51PM +0900, Amit Langote wrote:
> > No, all memory allocation is per-process, except for shared memory. We
> > probably need a way to record our large local memory allocations in
> > PGPROC that other backends can see; same for effective cache size
> > assumptions we make.
On Thu, May 15, 2014 at 11:24 PM, Bruce Momjian wrote:
> On Thu, May 15, 2014 at 10:23:19PM +0900, Amit Langote wrote:
>> On Thu, May 15, 2014 at 9:06 PM, Bruce Momjian wrote:
>> >
>> > This is the same problem we had with auto-tuning work_mem, in that we
>> > didn't know what other concurrent activity was happening. Seems we need
>> > concurrent activity detection before auto-tuning work_mem and
>> > effective_cache_size.
On Thu, May 15, 2014 at 10:23:19PM +0900, Amit Langote wrote:
> On Thu, May 15, 2014 at 9:06 PM, Bruce Momjian wrote:
> >
> > This is the same problem we had with auto-tuning work_mem, in that we
> > didn't know what other concurrent activity was happening. Seems we need
> > concurrent activity detection before auto-tuning work_mem and
> > effective_cache_size.
On Thu, May 15, 2014 at 9:06 PM, Bruce Momjian wrote:
>
> This is the same problem we had with auto-tuning work_mem, in that we
> didn't know what other concurrent activity was happening. Seems we need
> concurrent activity detection before auto-tuning work_mem and
> effective_cache_size.
>
Perh
On Tue, May 6, 2014 at 11:15:17PM +0100, Simon Riggs wrote:
> > Well, for what it's worth, I've encountered systems where setting
> > effective_cache_size too low resulted in bad query plans, but I've
> > never encountered the reverse situation.
>
> I agree with that.
>
> Though that misses my p
On Wed, May 7, 2014 at 12:06 PM, Josh Berkus wrote:
> For that matter, our advice on shared_buffers ... and our design for it
> ... is going to need to change radically soon, since Linux is getting an
> ARC with a frequency cache as well as a recency cache, and FreeBSD and
> OpenSolaris already ha
On 2014-05-07 16:24:53 -0500, Merlin Moncure wrote:
> On Wed, May 7, 2014 at 4:15 PM, Andres Freund wrote:
> > On 2014-05-07 13:51:57 -0700, Jeff Janes wrote:
> >> On Wed, May 7, 2014 at 11:38 AM, Andres Freund
> >> wrote:
> >>
> >> > On 2014-05-07 13:32:41 -0500, Merlin Moncure wrote:
> >> > >
>
On Wed, May 7, 2014 at 2:24 PM, Merlin Moncure wrote:
> right. This is, IMNSHO, exactly the sort of language that belongs in the
> docs.
+1
--
Peter Geoghegan
On Wed, May 7, 2014 at 4:15 PM, Andres Freund wrote:
> On 2014-05-07 13:51:57 -0700, Jeff Janes wrote:
>> On Wed, May 7, 2014 at 11:38 AM, Andres Freund wrote:
>>
>> > On 2014-05-07 13:32:41 -0500, Merlin Moncure wrote:
>> > >
>> > > *) raising shared buffers does not 'give more memory to postgres
On 2014-05-07 13:51:57 -0700, Jeff Janes wrote:
> On Wed, May 7, 2014 at 11:38 AM, Andres Freund wrote:
>
> > On 2014-05-07 13:32:41 -0500, Merlin Moncure wrote:
> > >
> > > *) raising shared buffers does not 'give more memory to postgres for
> > > caching' -- it can only reduce it via double paging
On Wed, May 7, 2014 at 11:38 AM, Andres Freund wrote:
> On 2014-05-07 13:32:41 -0500, Merlin Moncure wrote:
> >
> > *) raising shared buffers does not 'give more memory to postgres for
> > caching' -- it can only reduce it via double paging
>
> That's absolutely not a necessary consequence. If pag
On 05/07/2014 01:36 PM, Jeff Janes wrote:
> On Wed, May 7, 2014 at 11:04 AM, Josh Berkus wrote:
>> Unfortunately nobody has the time/resources to do the kind of testing
>> required for a new recommendation for shared_buffers.
> I think it is worse than that. I don't think we know what such test
On Wed, May 7, 2014 at 11:04 AM, Josh Berkus wrote:
> On 05/06/2014 10:35 PM, Peter Geoghegan wrote:
> > +1. In my view, we probably should have set it to a much higher
> > absolute default value. The main problem with setting it to any
> > multiple of shared_buffers that I can see is that shared
On Wed, May 7, 2014 at 2:58 PM, Peter Geoghegan wrote:
> On Wed, May 7, 2014 at 11:50 AM, Robert Haas wrote:
>> But that does not mean, as the phrase "folk
>> wisdom" might be taken to imply, that we don't know anything at all
>> about what actually works well in practice.
>
> Folk wisdom doesn't
On Wed, May 7, 2014 at 11:50 AM, Robert Haas wrote:
>> Doesn't match my experience. Even with the current buffer manager
>> there's usually enough locality to keep important pages in s_b for a
>> meaningful time. I *have* seen workloads that should have fit into
>> memory not fit because of double
On 05/07/2014 11:52 AM, Peter Geoghegan wrote:
> On Wed, May 7, 2014 at 11:40 AM, Josh Berkus wrote:
>> So, as one of several people who put literally hundreds of hours into
>> the original benchmarking which established the sizing recommendations
>> for shared_buffers (and other settings), I find
On Wed, May 7, 2014 at 11:50 AM, Robert Haas wrote:
> But that does not mean, as the phrase "folk
> wisdom" might be taken to imply, that we don't know anything at all
> about what actually works well in practice.
Folk wisdom doesn't imply that. It implies that we think this works,
and we may wel
On Wed, May 7, 2014 at 11:40 AM, Josh Berkus wrote:
> So, as one of several people who put literally hundreds of hours into
> the original benchmarking which established the sizing recommendations
> for shared_buffers (and other settings), I find the phrase "folk wisdom"
> personally offensive. S
On 2014-05-07 11:45:04 -0700, Peter Geoghegan wrote:
> On Wed, May 7, 2014 at 11:38 AM, Andres Freund wrote:
> >> *) raising shared buffers does not 'give more memory to postgres for
> >> caching' -- it can only reduce it via double paging
> >
> > That's absolutely not a necessary consequence. If
On Wed, May 7, 2014 at 2:49 PM, Andres Freund wrote:
> On 2014-05-07 11:45:04 -0700, Peter Geoghegan wrote:
>> On Wed, May 7, 2014 at 11:38 AM, Andres Freund
>> wrote:
>> >> *) raising shared buffers does not 'give more memory to postgres for
>> >> caching' -- it can only reduce it via double paging
On Wed, May 7, 2014 at 2:40 PM, Josh Berkus wrote:
> On 05/07/2014 11:13 AM, Peter Geoghegan wrote:
>> We ought to be realistic about the fact that the current
>> recommendations around sizing shared_buffers are nothing more than
>> folk wisdom. That's the best we have right now, but that seems quite
>> unsatisfactory to me.
On Wed, May 7, 2014 at 11:38 AM, Andres Freund wrote:
>> *) raising shared buffers does not 'give more memory to postgres for
>> caching' -- it can only reduce it via double paging
>
> That's absolutely not a necessary consequence. If pages are in s_b for a
> while the OS will be perfectly happy t
On 05/07/2014 11:13 AM, Peter Geoghegan wrote:
> We ought to be realistic about the fact that the current
> recommendations around sizing shared_buffers are nothing more than
> folk wisdom. That's the best we have right now, but that seems quite
> unsatisfactory to me.
So, as one of several people
On 2014-05-07 13:32:41 -0500, Merlin Moncure wrote:
> On Wed, May 7, 2014 at 1:13 PM, Peter Geoghegan wrote:
> > On Wed, May 7, 2014 at 11:04 AM, Josh Berkus wrote:
> >> Unfortunately nobody has the time/resources to do the kind of testing
> >> required for a new recommendation for shared_buffers
On Wed, May 7, 2014 at 1:13 PM, Peter Geoghegan wrote:
> On Wed, May 7, 2014 at 11:04 AM, Josh Berkus wrote:
>> Unfortunately nobody has the time/resources to do the kind of testing
>> required for a new recommendation for shared_buffers.
>
> I meant to suggest that the buffer manager could be improved to the
> point that the old advice becomes obsolete.
On Tue, May 6, 2014 at 9:55 AM, Andres Freund wrote:
> On 2014-05-06 17:43:45 +0100, Simon Riggs wrote:
>
> > All this changes is the cost of
> > IndexScans that would use more than 25% of shared_buffers worth of
> > data. Hopefully not many of those in your workload. Changing the cost
> > doesn't necessarily prevent index scans either.
On Wed, May 7, 2014 at 11:04 AM, Josh Berkus wrote:
> Unfortunately nobody has the time/resources to do the kind of testing
> required for a new recommendation for shared_buffers.
I meant to suggest that the buffer manager could be improved to the
point that the old advice becomes obsolete. Right
On 05/06/2014 10:35 PM, Peter Geoghegan wrote:
> +1. In my view, we probably should have set it to a much higher
> absolute default value. The main problem with setting it to any
> multiple of shared_buffers that I can see is that shared_buffers is a
> very poor proxy for what effective_cache_size
On 05/07/2014 07:31 AM, Andrew Dunstan wrote:
> +1. If we ever want to implement an auto-tuning heuristic it seems we're
> going to need some hard empirical evidence to support it, and that
> doesn't seem likely to appear any time soon.
4GB default it is, then.
--
Josh Berkus
PostgreSQL Experts
On 7 May 2014 15:10, Merlin Moncure wrote:
> The core issues are:
> 1) There is no place to enter total system memory available to the
> database in postgresql.conf
> 2) Memory settings (except for the above) are given as absolute
> amounts, not percentages.
Those sound useful starting points.
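[A percentage-based scheme like the one Merlin describes could be sketched as follows. This is purely hypothetical: postgresql.conf takes absolute sizes, `derive_settings` and its fractions are invented for illustration, and nothing here reflects actual server behavior.]

```python
# Hypothetical sketch: derive absolute memory settings from one
# "total system memory" input, as proposed in the thread. The 25%/75%
# fractions are illustrative defaults, not PostgreSQL's.

GB = 1024 * 1024 * 1024

def derive_settings(total_mem_bytes: int) -> dict:
    """Turn a single total-memory figure into absolute settings."""
    return {
        "shared_buffers": int(total_mem_bytes * 0.25),
        "effective_cache_size": int(total_mem_bytes * 0.75),
    }

settings = derive_settings(16 * GB)
print(settings["shared_buffers"] // GB)        # 4
print(settings["effective_cache_size"] // GB)  # 12
```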
On 7 May 2014 15:07, Tom Lane wrote:
> Simon Riggs writes:
>> I think I'm arguing myself towards using a BufferAccessStrategy of
>> BAS_BULKREAD for large IndexScans, BitMapIndexScans and
>> BitMapHeapScans.
>
> As soon as you've got some hard evidence to present in favor of such
> changes, we can discuss it.
On 05/07/2014 10:12 AM, Andres Freund wrote:
> On 2014-05-07 10:07:07 -0400, Tom Lane wrote:
>> In the meantime, it seems like there is an emerging consensus that nobody
>> much likes the existing auto-tuning behavior for effective_cache_size,
>> and that we should revert that in favor of just increasing the fixed
>> default value significantly.
Robert Haas writes:
> On Wed, May 7, 2014 at 3:18 AM, Simon Riggs wrote:
>> If we believe that 25% of shared_buffers worth of heap blocks would
>> flush the cache doing a SeqScan, why should we allow 400% of
>> shared_buffers worth of index blocks?
> I think you're comparing apples and oranges.
On Wed, May 7, 2014 at 4:12 PM, Andres Freund wrote:
> On 2014-05-07 10:07:07 -0400, Tom Lane wrote:
> > In the meantime, it seems like there is an emerging consensus that nobody
> > much likes the existing auto-tuning behavior for effective_cache_size,
> > and that we should revert that in favor of just increasing the fixed
> > default value significantly.
On Wed, May 7, 2014 at 10:12 AM, Andres Freund wrote:
> On 2014-05-07 10:07:07 -0400, Tom Lane wrote:
>> In the meantime, it seems like there is an emerging consensus that nobody
>> much likes the existing auto-tuning behavior for effective_cache_size,
>> and that we should revert that in favor of just increasing the fixed
>> default value significantly.
On 2014-05-07 10:07:07 -0400, Tom Lane wrote:
> In the meantime, it seems like there is an emerging consensus that nobody
> much likes the existing auto-tuning behavior for effective_cache_size,
> and that we should revert that in favor of just increasing the fixed
> default value significantly. I
On Wed, May 7, 2014 at 9:07 AM, Tom Lane wrote:
> Simon Riggs writes:
>> I think I'm arguing myself towards using a BufferAccessStrategy of
>> BAS_BULKREAD for large IndexScans, BitMapIndexScans and
>> BitMapHeapScans.
>
> As soon as you've got some hard evidence to present in favor of such
> changes, we can discuss it.
Simon Riggs writes:
> I think I'm arguing myself towards using a BufferAccessStrategy of
> BAS_BULKREAD for large IndexScans, BitMapIndexScans and
> BitMapHeapScans.
As soon as you've got some hard evidence to present in favor of such
changes, we can discuss it. I've got other things to do besid
On 7 May 2014 13:31, Robert Haas wrote:
> On Wed, May 7, 2014 at 3:18 AM, Simon Riggs wrote:
>> If we believe that 25% of shared_buffers worth of heap blocks would
>> flush the cache doing a SeqScan, why should we allow 400% of
>> shared_buffers worth of index blocks?
>
> I think you're comparing apples and oranges.
On Wed, May 7, 2014 at 3:18 AM, Simon Riggs wrote:
> If we believe that 25% of shared_buffers worth of heap blocks would
> flush the cache doing a SeqScan, why should we allow 400% of
> shared_buffers worth of index blocks?
I think you're comparing apples and oranges. The 25% threshold is
answer
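[The 25% threshold Simon and Robert are debating can be sketched as a simple predicate. This is an illustration of the heuristic described in the thread, not the actual server code: for sequential scans of tables larger than a quarter of shared_buffers, PostgreSQL switches to a small ring buffer (bulk-read strategy) so one big scan cannot evict the whole cache.]

```python
# Sketch of the SeqScan ring-buffer heuristic discussed above.
# Sizes are in 8kB blocks; function name is invented for illustration.

def use_bulkread_strategy(table_blocks: int, shared_buffer_blocks: int) -> bool:
    """True when a sequential scan of this table would use the
    bulk-read ring buffer instead of normal shared-buffer caching."""
    return table_blocks > shared_buffer_blocks // 4

print(use_bulkread_strategy(1000, 16384))   # False: small table, cached normally
print(use_bulkread_strategy(10000, 16384))  # True: ring buffer protects the cache
```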
On 6 May 2014 17:55, Andres Freund wrote:
>> All this changes is the cost of
>> IndexScans that would use more than 25% of shared_buffers worth of
>> data. Hopefully not many of those in your workload. Changing the cost
>> doesn't necessarily prevent index scans either. And if there are many
>> o
On 07/05/14 17:35, Peter Geoghegan wrote:
> On Tue, May 6, 2014 at 10:20 PM, Simon Riggs wrote:
>> On 6 May 2014 23:47, Josh Berkus wrote:
>>> If you're going to make
>>> an argument in favor of different tuning advice, then do it based on
>>> something in which you actually believe, based on hard evidence.
On Tue, May 6, 2014 at 10:20 PM, Simon Riggs wrote:
> On 6 May 2014 23:47, Josh Berkus wrote:
>
>> If you're going to make
>> an argument in favor of different tuning advice, then do it based on
>> something in which you actually believe, based on hard evidence.
>
> The proposed default setting o
On 6 May 2014 23:28, Tom Lane wrote:
> Robert Haas writes:
>> I basically think the auto-tuning we've installed for
>> effective_cache_size is stupid. Most people are going to run with
>> only a few GB of shared_buffers, so setting effective_cache_size to a
>> small multiple of that isn't going
On 6 May 2014 23:47, Josh Berkus wrote:
> If you're going to make
> an argument in favor of different tuning advice, then do it based on
> something in which you actually believe, based on hard evidence.
The proposed default setting of 4x shared_buffers is unprincipled
*and* lacks hard evidence
Robert, Tom:
On 05/06/2014 03:28 PM, Tom Lane wrote:
> Robert Haas writes:
>> I basically think the auto-tuning we've installed for
>> effective_cache_size is stupid. Most people are going to run with
>> only a few GB of shared_buffers, so setting effective_cache_size to a
>> small multiple of t
On 05/06/2014 01:38 PM, Simon Riggs wrote:
>> Most of them? Really?
>
> I didn't use the word "most" anywhere. So not really clear what you are
> saying.
Sorry, those were supposed to be periods, not question marks. As in
"Most of them. Really."
>> I have to tell you, your post sounds like y
Robert Haas writes:
> I basically think the auto-tuning we've installed for
> effective_cache_size is stupid. Most people are going to run with
> only a few GB of shared_buffers, so setting effective_cache_size to a
> small multiple of that isn't going to make many more people happy than
> just r
On 6 May 2014 22:54, Robert Haas wrote:
> On Tue, May 6, 2014 at 4:38 PM, Simon Riggs wrote:
>> I read the code, think what to say and then say what I think, not
>> rely on dogma.
>>
>> I tried to help years ago by changing the docs on e_c_s, but that's
>> been mostly ignored down the years, as
On 05/06/2014 05:54 PM, Robert Haas wrote:
> On Tue, May 6, 2014 at 4:38 PM, Simon Riggs wrote:
>> I read the code, think what to say and then say what I think, not
>> rely on dogma.
>>
>> I tried to help years ago by changing the docs on e_c_s, but that's
>> been mostly ignored down the years, as it is again here.
On Tue, May 6, 2014 at 4:38 PM, Simon Riggs wrote:
> I read the code, think what to say and then say what I think, not
> rely on dogma.
>
> I tried to help years ago by changing the docs on e_c_s, but that's
> been mostly ignored down the years, as it is again here.
Well, for what it's worth, I've encountered systems where setting
effective_cache_size too low resulted in bad query plans, but I've
never encountered the reverse situation.
On 6 May 2014 20:41, Jeff Janes wrote:
> The e_c_s is assumed to be usable for each backend trying to run queries
> sensitive to it. If you have dozens of such queries running simultaneously
> (not something I personally witness, but also not insane) and each of these
> queries has its own pecul
On 6 May 2014 18:08, Josh Berkus wrote:
> On 05/06/2014 08:41 AM, Simon Riggs wrote:
>> On 6 May 2014 15:18, Tom Lane wrote:
>>> Simon Riggs writes:
>>>> Lets fix e_c_s at 25% of shared_buffers and remove the parameter
>>>> completely, just as we do with so many other performance parameters.
>>>
On Tue, May 6, 2014 at 7:18 AM, Tom Lane wrote:
> Simon Riggs writes:
> > Lets fix e_c_s at 25% of shared_buffers and remove the parameter
> > completely, just as we do with so many other performance parameters.
>
> Apparently, you don't even understand what this parameter is for.
> Setting it smaller than shared_buffers is insane.
On 05/06/2014 08:41 AM, Simon Riggs wrote:
> On 6 May 2014 15:18, Tom Lane wrote:
>> Simon Riggs writes:
>>> Lets fix e_c_s at 25% of shared_buffers and remove the parameter
>>> completely, just as we do with so many other performance parameters.
>>
>> Apparently, you don't even understand what this parameter is for.
On 2014-05-06 17:43:45 +0100, Simon Riggs wrote:
> On 6 May 2014 15:17, Andres Freund wrote:
>
> >> Lets fix e_c_s at 25% of shared_buffers and remove the parameter
> >> completely, just as we do with so many other performance parameters.
> >
> > That'd cause *massive* regression for many install
Simon Riggs writes:
> On 6 May 2014 15:18, Tom Lane wrote:
>> Simon Riggs writes:
>>> Lets fix e_c_s at 25% of shared_buffers and remove the parameter
>>> completely, just as we do with so many other performance parameters.
>> Apparently, you don't even understand what this parameter is for.
>>
On 6 May 2014 15:17, Andres Freund wrote:
>> Lets fix e_c_s at 25% of shared_buffers and remove the parameter
>> completely, just as we do with so many other performance parameters.
>
> That'd cause *massive* regression for many installations. Without
> significantly overhauling costsize.c that's
On 6 May 2014 15:18, Tom Lane wrote:
> Simon Riggs writes:
>> Lets fix e_c_s at 25% of shared_buffers and remove the parameter
>> completely, just as we do with so many other performance parameters.
>
> Apparently, you don't even understand what this parameter is for.
> Setting it smaller than shared_buffers is insane.
Simon Riggs writes:
> Lets fix e_c_s at 25% of shared_buffers and remove the parameter
> completely, just as we do with so many other performance parameters.
Apparently, you don't even understand what this parameter is for.
Setting it smaller than shared_buffers is insane.
On 2014-05-06 15:09:15 +0100, Simon Riggs wrote:
> On 8 October 2013 17:13, Bruce Momjian wrote:
>
> > Patch applied with a default of 4x shared buffers. I have added a 9.4
> > TODO that we might want to revisit this.
>
> I certainly want to revisit this patch and this setting.
>
> How can we
On 8 October 2013 17:13, Bruce Momjian wrote:
> Patch applied with a default of 4x shared buffers. I have added a 9.4
> TODO that we might want to revisit this.
I certainly want to revisit this patch and this setting.
How can we possibly justify a default setting that could be more than
physic
On Tue, Oct 8, 2013 at 01:04:18PM -0600, Kevin Hale Boyes wrote:
> The patch contains a small typo in config.sgml. Probably just drop the "is"
> from "is can".
>
> +results if this database cluster is can utilize most of the memory
>
> Kevin.
Thank you, fixed.
--
Bruce Momjian
The patch contains a small typo in config.sgml. Probably just drop the
"is" from "is can".
+results if this database cluster is can utilize most of the memory
Kevin.
On 8 October 2013 10:13, Bruce Momjian wrote:
> On Thu, Sep 5, 2013 at 05:14:37PM -0400, Bruce Momjian wrote:
> > On
On Thu, Sep 5, 2013 at 05:14:37PM -0400, Bruce Momjian wrote:
> On Thu, Sep 5, 2013 at 06:14:33PM +0200, Magnus Hagander wrote:
> > > I have developed the attached patch which implements an auto-tuned
> > > effective_cache_size which is 4x the size of shared buffers. I had to
> > > set effective
On 2013-09-13 14:04:55 -0700, Kevin Grittner wrote:
> Andres Freund wrote:
>
> > Absolutely not claiming the contrary. I think it sucks that we
> > couldn't fully figure out what's happening in detail. I'd love to
> > get my hand on a setup where it can be reliably reproduced.
>
> I have seen tw
On Fri, Sep 13, 2013 at 4:04 PM, Kevin Grittner wrote:
> Andres Freund wrote:
>
>> Absolutely not claiming the contrary. I think it sucks that we
>> couldn't fully figure out what's happening in detail. I'd love to
>> get my hand on a setup where it can be reliably reproduced.
>
> I have seen two
Andres Freund wrote:
> Absolutely not claiming the contrary. I think it sucks that we
> couldn't fully figure out what's happening in detail. I'd love to
> get my hand on a setup where it can be reliably reproduced.
I have seen two completely different causes for symptoms like this,
and I suspec
On 2013-09-13 11:27:03 -0500, Merlin Moncure wrote:
> On Fri, Sep 13, 2013 at 11:07 AM, Andres Freund
> wrote:
> > On 2013-09-13 10:50:06 -0500, Merlin Moncure wrote:
> >> The stock documentation advice I probably needs to be revised to so
> >> that's the lesser of 2GB and 25%.
> >
> > I think th
On 2013-09-13 10:50:06 -0500, Merlin Moncure wrote:
> The stock documentation advice I probably needs to be revised to so
> that's the lesser of 2GB and 25%.
I think that would be a pretty bad idea. There are lots of workloads
where people have postgres happily chugging along with s_b lots bigger
On Fri, Sep 13, 2013 at 11:07 AM, Andres Freund wrote:
> On 2013-09-13 10:50:06 -0500, Merlin Moncure wrote:
>> The stock documentation advice I probably needs to be revised to so
>> that's the lesser of 2GB and 25%.
>
> I think that would be a pretty bad idea. There are lots of workloads
> where
On Fri, Sep 13, 2013 at 10:08 AM, Robert Haas wrote:
> On Wed, Sep 11, 2013 at 3:40 PM, Josh Berkus wrote:
>>> I think that most of the arguments in this thread drastically
>>> overestimate the precision and the effect of effective_cache_size. The
>>> planner logic behind it basically only uses it to calculate things
>>> within a single index scan.
On Wed, Sep 11, 2013 at 3:40 PM, Josh Berkus wrote:
>> I think that most of the arguments in this thread drastically
>> overestimate the precision and the effect of effective_cache_size. The
>> planner logic behind it basically only uses it to calculate things
>> within a single index scan. That a
> I think that most of the arguments in this thread drastically
> overestimate the precision and the effect of effective_cache_size. The
> planner logic behind it basically only uses it to calculate things
> within a single index scan. That alone shows that any precise
> calculation cannot be very
On 2013-09-11 12:53:29 -0400, Bruce Momjian wrote:
> On Wed, Sep 11, 2013 at 12:43:07PM -0300, Alvaro Herrera wrote:
> > Bruce Momjian escribió:
> >
> > > > So, are you saying you like 4x now?
> > >
> > > Here is an argument for 3x. First, using the documented 25% of RAM, 3x
> > > puts our effective_cache_size as 75% of RAM, giving us no room for
> > > kernel, backend memory, and work_mem usage.
On Wed, Sep 11, 2013 at 12:27 PM, Bruce Momjian wrote:
>> > Another argument in favor: this is a default setting, and by default,
>> > shared_buffers won't be 25% of RAM.
>>
>> So, are you saying you like 4x now?
>
> Here is an argument for 3x. First, using the documented 25% of RAM, 3x
> puts our effective_cache_size as 75% of RAM, giving us no room for
> kernel, backend memory, and work_mem usage.
On 09/11/2013 08:27 AM, Bruce Momjian wrote:
> On Wed, Sep 11, 2013 at 09:18:30AM -0400, Bruce Momjian wrote:
>> On Tue, Sep 10, 2013 at 03:08:24PM -0700, Josh Berkus wrote:
>>> Another argument in favor: this is a default setting, and by default,
>>> shared_buffers won't be 25% of RAM.
>>
>> So, are you saying you like 4x now?
On Wed, Sep 11, 2013 at 12:43:07PM -0300, Alvaro Herrera wrote:
> Bruce Momjian escribió:
>
> > > So, are you saying you like 4x now?
> >
> > Here is an argument for 3x. First, using the documented 25% of RAM, 3x
> > puts our effective_cache_size as 75% of RAM, giving us no room for
> > kernel, backend memory, and work_mem usage. If anything it should be
Bruce Momjian escribió:
> > So, are you saying you like 4x now?
>
> Here is an argument for 3x. First, using the documented 25% of RAM, 3x
> puts our effective_cache_size as 75% of RAM, giving us no room for
> kernel, backend memory, and work_mem usage. If anything it should be
> lower than 3x,
On Wed, Sep 11, 2013 at 09:18:30AM -0400, Bruce Momjian wrote:
> On Tue, Sep 10, 2013 at 03:08:24PM -0700, Josh Berkus wrote:
> > Merlin,
> >
> > > I vote 4x on the basis that for this setting (unlike almost all the
> > > other memory settings) the ramifications for setting it too high
> > > generally aren't too bad. Also, the o/s and temporary memory usage as
> > > a share of total physical memory has been declining over time
On Tue, Sep 10, 2013 at 03:08:24PM -0700, Josh Berkus wrote:
> Merlin,
>
> > I vote 4x on the basis that for this setting (unlike almost all the
> > other memory settings) the ramifications for setting it too high
> > generally aren't too bad. Also, the o/s and temporary memory usage as
> > a share of total physical memory has been declining over time
On Tue, Sep 10, 2013 at 5:08 PM, Josh Berkus wrote:
> Merlin,
>
>> I vote 4x on the basis that for this setting (unlike almost all the
>> other memory settings) the ramifications for setting it too high
>> generally aren't too bad. Also, the o/s and temporary memory usage as
>> a share of total physical memory has been declining over time
Merlin,
> I vote 4x on the basis that for this setting (unlike almost all the
> other memory settings) the ramifications for setting it too high
> generally aren't too bad. Also, the o/s and temporary memory usage as
> a share of total physical memory has been declining over time
If we're doing
On Tue, Sep 10, 2013 at 11:39 AM, Jeff Janes wrote:
> On Mon, Sep 9, 2013 at 6:29 PM, Bruce Momjian wrote:
>> On Thu, Sep 5, 2013 at 09:02:27PM -0700, Josh Berkus wrote:
>>> On 09/05/2013 03:30 PM, Merlin Moncure wrote:
>>>
>>> >> Standard advice we've given in the past is 25% shared buffers, 75%
>>> >> effective_cache_size. Which would make EFS *3X* shared_buffers, not 4X.
On Mon, Sep 9, 2013 at 6:29 PM, Bruce Momjian wrote:
> On Thu, Sep 5, 2013 at 09:02:27PM -0700, Josh Berkus wrote:
>> On 09/05/2013 03:30 PM, Merlin Moncure wrote:
>>
>> >> Standard advice we've given in the past is 25% shared buffers, 75%
>> >> effective_cache_size. Which would make EFS *3X* shared_buffers, not 4X.
On Thu, Sep 5, 2013 at 09:02:27PM -0700, Josh Berkus wrote:
> On 09/05/2013 03:30 PM, Merlin Moncure wrote:
>
> >> Standard advice we've given in the past is 25% shared buffers, 75%
> >> effective_cache_size. Which would make EFS *3X* shared_buffers, not 4X.
> >> Maybe we're changing the conventional calculation, but I thought I'd
> >> point that out.
Le jeudi 5 septembre 2013 17:14:37 Bruce Momjian a écrit :
> On Thu, Sep 5, 2013 at 06:14:33PM +0200, Magnus Hagander wrote:
> > > I have developed the attached patch which implements an auto-tuned
> > > effective_cache_size which is 4x the size of shared buffers. I had to
> > > set effective_cac
On 09/05/2013 03:30 PM, Merlin Moncure wrote:
>> Standard advice we've given in the past is 25% shared buffers, 75%
>> effective_cache_size. Which would make EFS *3X* shared_buffers, not 4X.
>> Maybe we're changing the conventional calculation, but I thought I'd
>> point that out.
>
> This was
On Thu, Sep 5, 2013 at 03:11:53PM -0700, Josh Berkus wrote:
> On 09/05/2013 02:16 PM, Bruce Momjian wrote:
> >> Well, the real problem with this patch is that it documents what the
> >> auto-tuning algorithm is; without that commitment, just saying "-1 means
> >> autotune" might be fine.
> >
> >
On Thu, Sep 5, 2013 at 5:11 PM, Josh Berkus wrote:
> On 09/05/2013 02:16 PM, Bruce Momjian wrote:
>>> Well, the real problem with this patch is that it documents what the
>>> auto-tuning algorithm is; without that commitment, just saying "-1 means
>>> autotune" might be fine.
>>
>> OK, but I did t
On 09/05/2013 02:16 PM, Bruce Momjian wrote:
>> Well, the real problem with this patch is that it documents what the
>> auto-tuning algorithm is; without that commitment, just saying "-1 means
>> autotune" might be fine.
>
> OK, but I did this based on wal_buffers, which has a -1 default, calls
>
On Thu, Sep 5, 2013 at 12:48:54PM -0400, Tom Lane wrote:
> Magnus Hagander writes:
> > On Thu, Sep 5, 2013 at 3:01 AM, Bruce Momjian wrote:
> >> I have developed the attached patch which implements an auto-tuned
> >> effective_cache_size which is 4x the size of shared buffers. I had to
> >> set
On Thu, Sep 5, 2013 at 06:14:33PM +0200, Magnus Hagander wrote:
> > I have developed the attached patch which implements an auto-tuned
> > effective_cache_size which is 4x the size of shared buffers. I had to
> > set effective_cache_size to its old 128MB default so the EXPLAIN
> > regression test
On Thu, Sep 5, 2013 at 3:01 AM, Bruce Momjian wrote:
> On Tue, Jan 8, 2013 at 08:40:44PM -0500, Andrew Dunstan wrote:
>>
>> On 01/08/2013 08:08 PM, Tom Lane wrote:
>> >Robert Haas writes:
>> >>On Tue, Jan 8, 2013 at 7:17 PM, Tom Lane wrote:
>> >>>... And I don't especially like the idea of try
Magnus Hagander writes:
> On Thu, Sep 5, 2013 at 3:01 AM, Bruce Momjian wrote:
>> I have developed the attached patch which implements an auto-tuned
>> effective_cache_size which is 4x the size of shared buffers. I had to
>> set effective_cache_size to its old 128MB default so the EXPLAIN
>> reg
On Tue, Jan 8, 2013 at 08:40:44PM -0500, Andrew Dunstan wrote:
>
> On 01/08/2013 08:08 PM, Tom Lane wrote:
> >Robert Haas writes:
> >>On Tue, Jan 8, 2013 at 7:17 PM, Tom Lane wrote:
> >>>... And I don't especially like the idea of trying to
> >>>make it depend directly on the box's physical RA
On Wed, Jan 9, 2013 at 12:38 AM, Benedikt Grundmann
wrote:
> For what it is worth even if it is a dedicated database box 75% might be way
> too high. I remember investigating bad performance on our biggest database
> server, that in the end turned out to be a too high setting of
> effective_cache
Josh Berkus wrote:
> The, shared_buffers, wal_buffers, and effective_cache_size (and possible
> other future settings) can be set to -1. If they are set to -1, then we
> use the figure:
>
> shared_buffers = available_ram * 0.25
> (with a ceiling of 8GB)
> wal_buffers = available_ram * 0.05
> (wit
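[Josh's "-1 means autotune" proposal can be sketched as follows. Only the shared_buffers rule is complete in the quoted text (25% of RAM, 8GB ceiling); the wal_buffers ceiling is cut off in the quote, so it is deliberately omitted here, and the function name is invented.]

```python
# Sketch of the quoted autotune heuristic: shared_buffers = 25% of
# available RAM, capped at 8GB. Not actual PostgreSQL behavior.

GB = 1024 * 1024 * 1024

def autotune_shared_buffers(available_ram: int) -> int:
    """shared_buffers under the proposed -1 (autotune) setting."""
    return min(int(available_ram * 0.25), 8 * GB)

print(autotune_shared_buffers(16 * GB) // GB)  # 4
print(autotune_shared_buffers(64 * GB) // GB)  # 8 (ceiling applied)
```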
Claudio,
> Not really. I'm convinced, and not only for e_c_s, that
> autoconfiguration is within the realm of possibility.
Hey, if you can do it, my hat's off to you.
> In any case, as eavesdroppers can infer a cryptographic key by timing
> operations or measuring power consumption, I'm pretty s