On Tue, May 6, 2014 at 11:15:17PM +0100, Simon Riggs wrote:
Well, for what it's worth, I've encountered systems where setting
effective_cache_size too low resulted in bad query plans, but I've
never encountered the reverse situation.
I agree with that.
Though that misses my point,
On Thu, May 15, 2014 at 9:06 PM, Bruce Momjian br...@momjian.us wrote:
This is the same problem we had with auto-tuning work_mem, in that we
didn't know what other concurrent activity was happening. Seems we need
concurrent activity detection before auto-tuning work_mem and
On Thu, May 15, 2014 at 10:23:19PM +0900, Amit Langote wrote:
On Thu, May 15, 2014 at 9:06 PM, Bruce Momjian br...@momjian.us wrote:
This is the same problem we had with auto-tuning work_mem, in that we
didn't know what other concurrent activity was happening. Seems we need
concurrent
On Thu, May 15, 2014 at 11:24 PM, Bruce Momjian br...@momjian.us wrote:
On Thu, May 15, 2014 at 10:23:19PM +0900, Amit Langote wrote:
On Thu, May 15, 2014 at 9:06 PM, Bruce Momjian br...@momjian.us wrote:
This is the same problem we had with auto-tuning work_mem, in that we
didn't know
On Thu, May 15, 2014 at 11:36:51PM +0900, Amit Langote wrote:
No, all memory allocation is per-process, except for shared memory. We
probably need a way to record our large local memory allocations in
PGPROC that other backends can see; same for effective cache size
assumptions we make.
On Thu, May 15, 2014 at 8:06 AM, Bruce Momjian br...@momjian.us wrote:
On Tue, May 6, 2014 at 11:15:17PM +0100, Simon Riggs wrote:
Well, for what it's worth, I've encountered systems where setting
effective_cache_size too low resulted in bad query plans, but I've
never encountered the
On 6 May 2014 17:55, Andres Freund and...@2ndquadrant.com wrote:
All this changes is the cost of
IndexScans that would use more than 25% of shared_buffers worth of
data. Hopefully not many of those in your workload. Changing the cost
doesn't necessarily prevent index scans either. And if
On Wed, May 7, 2014 at 3:18 AM, Simon Riggs si...@2ndquadrant.com wrote:
If we believe that 25% of shared_buffers worth of heap blocks would
flush the cache doing a SeqScan, why should we allow 400% of
shared_buffers worth of index blocks?
I think you're comparing apples and oranges. The 25%
On 7 May 2014 13:31, Robert Haas robertmh...@gmail.com wrote:
On Wed, May 7, 2014 at 3:18 AM, Simon Riggs si...@2ndquadrant.com wrote:
If we believe that 25% of shared_buffers worth of heap blocks would
flush the cache doing a SeqScan, why should we allow 400% of
shared_buffers worth of index
Simon Riggs si...@2ndquadrant.com writes:
I think I'm arguing myself towards using a BufferAccessStrategy of
BAS_BULKREAD for large IndexScans, BitMapIndexScans and
BitMapHeapScans.
As soon as you've got some hard evidence to present in favor of such
changes, we can discuss it. I've got other
On Wed, May 7, 2014 at 9:07 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Simon Riggs si...@2ndquadrant.com writes:
I think I'm arguing myself towards using a BufferAccessStrategy of
BAS_BULKREAD for large IndexScans, BitMapIndexScans and
BitMapHeapScans.
As soon as you've got some hard evidence to
On 2014-05-07 10:07:07 -0400, Tom Lane wrote:
In the meantime, it seems like there is an emerging consensus that nobody
much likes the existing auto-tuning behavior for effective_cache_size,
and that we should revert that in favor of just increasing the fixed
default value significantly. I
On Wed, May 7, 2014 at 10:12 AM, Andres Freund and...@2ndquadrant.com wrote:
On 2014-05-07 10:07:07 -0400, Tom Lane wrote:
In the meantime, it seems like there is an emerging consensus that nobody
much likes the existing auto-tuning behavior for effective_cache_size,
and that we should revert
On Wed, May 7, 2014 at 4:12 PM, Andres Freund and...@2ndquadrant.com wrote:
On 2014-05-07 10:07:07 -0400, Tom Lane wrote:
In the meantime, it seems like there is an emerging consensus that nobody
much likes the existing auto-tuning behavior for effective_cache_size,
and that we should
Robert Haas robertmh...@gmail.com writes:
On Wed, May 7, 2014 at 3:18 AM, Simon Riggs si...@2ndquadrant.com wrote:
If we believe that 25% of shared_buffers worth of heap blocks would
flush the cache doing a SeqScan, why should we allow 400% of
shared_buffers worth of index blocks?
I think
On 05/07/2014 10:12 AM, Andres Freund wrote:
On 2014-05-07 10:07:07 -0400, Tom Lane wrote:
In the meantime, it seems like there is an emerging consensus that nobody
much likes the existing auto-tuning behavior for effective_cache_size,
and that we should revert that in favor of just increasing
On 7 May 2014 15:07, Tom Lane t...@sss.pgh.pa.us wrote:
Simon Riggs si...@2ndquadrant.com writes:
I think I'm arguing myself towards using a BufferAccessStrategy of
BAS_BULKREAD for large IndexScans, BitMapIndexScans and
BitMapHeapScans.
As soon as you've got some hard evidence to present in
On 7 May 2014 15:10, Merlin Moncure mmonc...@gmail.com wrote:
The core issues are:
1) There is no place to enter total system memory available to the
database in postgresql.conf
2) Memory settings (except for the above) are given as absolute
amounts, not percentages.
Those sound useful
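Purely as an illustration of Merlin's two points, here is a hypothetical postgresql.conf fragment; total_system_memory is not a real GUC, and percentage-based memory settings do not exist — this is only a sketch of what is being asked for:

```
# Hypothetical, NOT valid postgresql.conf: sketches point 1 (a place to
# declare total system memory) and point 2 (percentages, not absolutes).
total_system_memory = 64GB
shared_buffers = 25%            # of total_system_memory (invented syntax)
effective_cache_size = 75%      # of total_system_memory (invented syntax)
```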
On 05/07/2014 07:31 AM, Andrew Dunstan wrote:
+1. If we ever want to implement an auto-tuning heuristic it seems we're
going to need some hard empirical evidence to support it, and that
doesn't seem likely to appear any time soon.
4GB default it is, then.
--
Josh Berkus
PostgreSQL Experts
On 05/06/2014 10:35 PM, Peter Geoghegan wrote:
+1. In my view, we probably should have set it to a much higher
absolute default value. The main problem with setting it to any
multiple of shared_buffers that I can see is that shared_buffers is a
very poor proxy for what effective_cache_size is
On Wed, May 7, 2014 at 11:04 AM, Josh Berkus j...@agliodbs.com wrote:
Unfortunately nobody has the time/resources to do the kind of testing
required for a new recommendation for shared_buffers.
I meant to suggest that the buffer manager could be improved to the
point that the old advice becomes
On Tue, May 6, 2014 at 9:55 AM, Andres Freund and...@2ndquadrant.com wrote:
On 2014-05-06 17:43:45 +0100, Simon Riggs wrote:
All this changes is the cost of
IndexScans that would use more than 25% of shared_buffers worth of
data. Hopefully not many of those in your workload. Changing the
On Wed, May 7, 2014 at 1:13 PM, Peter Geoghegan p...@heroku.com wrote:
On Wed, May 7, 2014 at 11:04 AM, Josh Berkus j...@agliodbs.com wrote:
Unfortunately nobody has the time/resources to do the kind of testing
required for a new recommendation for shared_buffers.
I meant to suggest that the
On 2014-05-07 13:32:41 -0500, Merlin Moncure wrote:
On Wed, May 7, 2014 at 1:13 PM, Peter Geoghegan p...@heroku.com wrote:
On Wed, May 7, 2014 at 11:04 AM, Josh Berkus j...@agliodbs.com wrote:
Unfortunately nobody has the time/resources to do the kind of testing
required for a new
On 05/07/2014 11:13 AM, Peter Geoghegan wrote:
We ought to be realistic about the fact that the current
recommendations around sizing shared_buffers are nothing more than
folk wisdom. That's the best we have right now, but that seems quite
unsatisfactory to me.
So, as one of several people
On Wed, May 7, 2014 at 11:38 AM, Andres Freund and...@2ndquadrant.com wrote:
*) raising shared buffers does not 'give more memory to postgres for
caching' -- it can only reduce it via double paging
That's absolutely not a necessary consequence. If pages are in s_b for a
while the OS will be
On Wed, May 7, 2014 at 2:40 PM, Josh Berkus j...@agliodbs.com wrote:
On 05/07/2014 11:13 AM, Peter Geoghegan wrote:
We ought to be realistic about the fact that the current
recommendations around sizing shared_buffers are nothing more than
folk wisdom. That's the best we have right now, but
On 2014-05-07 11:45:04 -0700, Peter Geoghegan wrote:
On Wed, May 7, 2014 at 11:38 AM, Andres Freund and...@2ndquadrant.com wrote:
*) raising shared buffers does not 'give more memory to postgres for
caching' -- it can only reduce it via double paging
That's absolutely not a necessary
On Wed, May 7, 2014 at 2:49 PM, Andres Freund and...@2ndquadrant.com wrote:
On 2014-05-07 11:45:04 -0700, Peter Geoghegan wrote:
On Wed, May 7, 2014 at 11:38 AM, Andres Freund and...@2ndquadrant.com
wrote:
*) raising shared buffers does not 'give more memory to postgres for
caching' -- it
On Wed, May 7, 2014 at 11:40 AM, Josh Berkus j...@agliodbs.com wrote:
So, as one of several people who put literally hundreds of hours into
the original benchmarking which established the sizing recommendations
for shared_buffers (and other settings), I find the phrase "folk wisdom"
personally
On Wed, May 7, 2014 at 11:50 AM, Robert Haas robertmh...@gmail.com wrote:
But that does not mean, as the phrase "folk
wisdom" might be taken to imply, that we don't know anything at all
about what actually works well in practice.
"Folk wisdom" doesn't imply that. It implies that we think this
On 05/07/2014 11:52 AM, Peter Geoghegan wrote:
On Wed, May 7, 2014 at 11:40 AM, Josh Berkus j...@agliodbs.com wrote:
So, as one of several people who put literally hundreds of hours into
the original benchmarking which established the sizing recommendations
for shared_buffers (and other
On Wed, May 7, 2014 at 11:50 AM, Robert Haas robertmh...@gmail.com wrote:
Doesn't match my experience. Even with the current buffer manager
there's usually enough locality to keep important pages in s_b for a
meaningful time. I *have* seen workloads that should have fit into
memory not fit
On Wed, May 7, 2014 at 2:58 PM, Peter Geoghegan p...@heroku.com wrote:
On Wed, May 7, 2014 at 11:50 AM, Robert Haas robertmh...@gmail.com wrote:
But that does not mean, as the phrase "folk
wisdom" might be taken to imply, that we don't know anything at all
about what actually works well in
On Wed, May 7, 2014 at 11:04 AM, Josh Berkus j...@agliodbs.com wrote:
On 05/06/2014 10:35 PM, Peter Geoghegan wrote:
+1. In my view, we probably should have set it to a much higher
absolute default value. The main problem with setting it to any
multiple of shared_buffers that I can see is
On 05/07/2014 01:36 PM, Jeff Janes wrote:
On Wed, May 7, 2014 at 11:04 AM, Josh Berkus j...@agliodbs.com wrote:
Unfortunately nobody has the time/resources to do the kind of testing
required for a new recommendation for shared_buffers.
I think it is worse than that. I don't think we know
On Wed, May 7, 2014 at 11:38 AM, Andres Freund and...@2ndquadrant.com wrote:
On 2014-05-07 13:32:41 -0500, Merlin Moncure wrote:
*) raising shared buffers does not 'give more memory to postgres for
caching' -- it can only reduce it via double paging
That's absolutely not a necessary
On 2014-05-07 13:51:57 -0700, Jeff Janes wrote:
On Wed, May 7, 2014 at 11:38 AM, Andres Freund and...@2ndquadrant.com wrote:
On 2014-05-07 13:32:41 -0500, Merlin Moncure wrote:
*) raising shared buffers does not 'give more memory to postgres for
caching' -- it can only reduce it via
On Wed, May 7, 2014 at 4:15 PM, Andres Freund and...@2ndquadrant.com wrote:
On 2014-05-07 13:51:57 -0700, Jeff Janes wrote:
On Wed, May 7, 2014 at 11:38 AM, Andres Freund and...@2ndquadrant.com wrote:
On 2014-05-07 13:32:41 -0500, Merlin Moncure wrote:
*) raising shared buffers does not
On Wed, May 7, 2014 at 2:24 PM, Merlin Moncure mmonc...@gmail.com wrote:
right. This is, IMNSHO, exactly the sort of language that belongs in the
docs.
+1
--
Peter Geoghegan
On 2014-05-07 16:24:53 -0500, Merlin Moncure wrote:
On Wed, May 7, 2014 at 4:15 PM, Andres Freund and...@2ndquadrant.com wrote:
On 2014-05-07 13:51:57 -0700, Jeff Janes wrote:
On Wed, May 7, 2014 at 11:38 AM, Andres Freund
and...@2ndquadrant.com wrote:
On 2014-05-07 13:32:41 -0500,
On Wed, May 7, 2014 at 12:06 PM, Josh Berkus j...@agliodbs.com wrote:
For that matter, our advice on shared_buffers ... and our design for it
... is going to need to change radically soon, since Linux is getting an
ARC with a frequency cache as well as a recency cache, and FreeBSD and
On 8 October 2013 17:13, Bruce Momjian br...@momjian.us wrote:
Patch applied with a default of 4x shared buffers. I have added a 9.4
TODO that we might want to revisit this.
I certainly want to revisit this patch and this setting.
How can we possibly justify a default setting that could be
On 2014-05-06 15:09:15 +0100, Simon Riggs wrote:
On 8 October 2013 17:13, Bruce Momjian br...@momjian.us wrote:
Patch applied with a default of 4x shared buffers. I have added a 9.4
TODO that we might want to revisit this.
I certainly want to revisit this patch and this setting.
How
Simon Riggs si...@2ndquadrant.com writes:
Lets fix e_c_s at 25% of shared_buffers and remove the parameter
completely, just as we do with so many other performance parameters.
Apparently, you don't even understand what this parameter is for.
Setting it smaller than shared_buffers is insane.
On 6 May 2014 15:18, Tom Lane t...@sss.pgh.pa.us wrote:
Simon Riggs si...@2ndquadrant.com writes:
Lets fix e_c_s at 25% of shared_buffers and remove the parameter
completely, just as we do with so many other performance parameters.
Apparently, you don't even understand what this parameter is
On 6 May 2014 15:17, Andres Freund and...@2ndquadrant.com wrote:
Lets fix e_c_s at 25% of shared_buffers and remove the parameter
completely, just as we do with so many other performance parameters.
That'd cause *massive* regression for many installations. Without
significantly overhauling
Simon Riggs si...@2ndquadrant.com writes:
On 6 May 2014 15:18, Tom Lane t...@sss.pgh.pa.us wrote:
Simon Riggs si...@2ndquadrant.com writes:
Lets fix e_c_s at 25% of shared_buffers and remove the parameter
completely, just as we do with so many other performance parameters.
Apparently, you
On 2014-05-06 17:43:45 +0100, Simon Riggs wrote:
On 6 May 2014 15:17, Andres Freund and...@2ndquadrant.com wrote:
Lets fix e_c_s at 25% of shared_buffers and remove the parameter
completely, just as we do with so many other performance parameters.
That'd cause *massive* regression for
On 05/06/2014 08:41 AM, Simon Riggs wrote:
On 6 May 2014 15:18, Tom Lane t...@sss.pgh.pa.us wrote:
Simon Riggs si...@2ndquadrant.com writes:
Lets fix e_c_s at 25% of shared_buffers and remove the parameter
completely, just as we do with so many other performance parameters.
Apparently, you
On Tue, May 6, 2014 at 7:18 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Simon Riggs si...@2ndquadrant.com writes:
Lets fix e_c_s at 25% of shared_buffers and remove the parameter
completely, just as we do with so many other performance parameters.
Apparently, you don't even understand what this
On 6 May 2014 18:08, Josh Berkus j...@agliodbs.com wrote:
On 05/06/2014 08:41 AM, Simon Riggs wrote:
On 6 May 2014 15:18, Tom Lane t...@sss.pgh.pa.us wrote:
Simon Riggs si...@2ndquadrant.com writes:
Lets fix e_c_s at 25% of shared_buffers and remove the parameter
completely, just as we do
On 6 May 2014 20:41, Jeff Janes jeff.ja...@gmail.com wrote:
The e_c_s is assumed to be usable for each backend trying to run queries
sensitive to it. If you have dozens of such queries running simultaneously
(not something I personally witness, but also not insane) and each of these
queries
On Tue, May 6, 2014 at 4:38 PM, Simon Riggs si...@2ndquadrant.com wrote:
I read the code, think what to say and then say what I think, not
rely on dogma.
I tried to help years ago by changing the docs on e_c_s, but that's
been mostly ignored down the years, as it is again here.
Well, for
On 05/06/2014 05:54 PM, Robert Haas wrote:
On Tue, May 6, 2014 at 4:38 PM, Simon Riggs si...@2ndquadrant.com wrote:
I read the code, think what to say and then say what I think, not
rely on dogma.
I tried to help years ago by changing the docs on e_c_s, but that's
been mostly ignored down
On 6 May 2014 22:54, Robert Haas robertmh...@gmail.com wrote:
On Tue, May 6, 2014 at 4:38 PM, Simon Riggs si...@2ndquadrant.com wrote:
I read the code, think what to say and then say what I think, not
rely on dogma.
I tried to help years ago by changing the docs on e_c_s, but that's
been
Robert Haas robertmh...@gmail.com writes:
I basically think the auto-tuning we've installed for
effective_cache_size is stupid. Most people are going to run with
only a few GB of shared_buffers, so setting effective_cache_size to a
small multiple of that isn't going to make many more people
On 05/06/2014 01:38 PM, Simon Riggs wrote:
Most of them? Really?
I didn't use the word "most" anywhere. So not really clear what you are
saying.
Sorry, those were supposed to be periods, not question marks. As in
"Most of them. Really."
I have to tell you, your post sounds like you've
Robert, Tom:
On 05/06/2014 03:28 PM, Tom Lane wrote:
Robert Haas robertmh...@gmail.com writes:
I basically think the auto-tuning we've installed for
effective_cache_size is stupid. Most people are going to run with
only a few GB of shared_buffers, so setting effective_cache_size to a
small
On 6 May 2014 23:47, Josh Berkus j...@agliodbs.com wrote:
If you're going to make
an argument in favor of different tuning advice, then do it based on
something in which you actually believe, based on hard evidence.
The proposed default setting of 4x shared_buffers is unprincipled
*and* lacks
On 6 May 2014 23:28, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
I basically think the auto-tuning we've installed for
effective_cache_size is stupid. Most people are going to run with
only a few GB of shared_buffers, so setting effective_cache_size to a
On Tue, May 6, 2014 at 10:20 PM, Simon Riggs si...@2ndquadrant.com wrote:
On 6 May 2014 23:47, Josh Berkus j...@agliodbs.com wrote:
If you're going to make
an argument in favor of different tuning advice, then do it based on
something in which you actually believe, based on hard evidence.
On 07/05/14 17:35, Peter Geoghegan wrote:
On Tue, May 6, 2014 at 10:20 PM, Simon Riggs si...@2ndquadrant.com wrote:
On 6 May 2014 23:47, Josh Berkus j...@agliodbs.com wrote:
If you're going to make
an argument in favor of different tuning advice, then do it based on
something in which you
On Thu, Sep 5, 2013 at 05:14:37PM -0400, Bruce Momjian wrote:
On Thu, Sep 5, 2013 at 06:14:33PM +0200, Magnus Hagander wrote:
I have developed the attached patch which implements an auto-tuned
effective_cache_size which is 4x the size of shared buffers. I had to
set
The patch contains a small typo in config.sgml. Probably just drop the
"is" from "is can".
+results if this database cluster is can utilize most of the memory
Kevin.
On 8 October 2013 10:13, Bruce Momjian br...@momjian.us wrote:
On Thu, Sep 5, 2013 at 05:14:37PM -0400, Bruce Momjian
On Tue, Oct 8, 2013 at 01:04:18PM -0600, Kevin Hale Boyes wrote:
The patch contains a small typo in config.sgml. Probably just drop the
"is" from "is can".
+results if this database cluster is can utilize most of the memory
Kevin.
Thank you, fixed.
--
Bruce Momjian
On Wed, Sep 11, 2013 at 3:40 PM, Josh Berkus j...@agliodbs.com wrote:
I think that most of the arguments in this thread drastically
overestimate the precision and the effect of effective_cache_size. The
planner logic behind it basically only uses it to calculate things
within a single index
On Fri, Sep 13, 2013 at 10:08 AM, Robert Haas robertmh...@gmail.com wrote:
On Wed, Sep 11, 2013 at 3:40 PM, Josh Berkus j...@agliodbs.com wrote:
I think that most of the arguments in this thread drastically
overestimate the precision and the effect of effective_cache_size. The
planner logic
On Fri, Sep 13, 2013 at 11:07 AM, Andres Freund and...@2ndquadrant.com wrote:
On 2013-09-13 10:50:06 -0500, Merlin Moncure wrote:
The stock documentation advice probably needs to be revised so that
it's the lesser of 2GB and 25%.
I think that would be a pretty bad idea. There are lots of
On 2013-09-13 10:50:06 -0500, Merlin Moncure wrote:
The stock documentation advice probably needs to be revised so that
it's the lesser of 2GB and 25%.
I think that would be a pretty bad idea. There are lots of workloads
where people have postgres happily chugging along with s_b lots bigger
On 2013-09-13 11:27:03 -0500, Merlin Moncure wrote:
On Fri, Sep 13, 2013 at 11:07 AM, Andres Freund and...@2ndquadrant.com
wrote:
On 2013-09-13 10:50:06 -0500, Merlin Moncure wrote:
The stock documentation advice probably needs to be revised so that
it's the lesser of 2GB and 25%.
I
Andres Freund and...@2ndquadrant.com wrote:
Absolutely not claiming the contrary. I think it sucks that we
couldn't fully figure out what's happening in detail. I'd love to
get my hand on a setup where it can be reliably reproduced.
I have seen two completely different causes for symptoms
On Fri, Sep 13, 2013 at 4:04 PM, Kevin Grittner kgri...@ymail.com wrote:
Andres Freund and...@2ndquadrant.com wrote:
Absolutely not claiming the contrary. I think it sucks that we
couldn't fully figure out what's happening in detail. I'd love to
get my hand on a setup where it can be reliably
On 2013-09-13 14:04:55 -0700, Kevin Grittner wrote:
Andres Freund and...@2ndquadrant.com wrote:
Absolutely not claiming the contrary. I think it sucks that we
couldn't fully figure out what's happening in detail. I'd love to
get my hand on a setup where it can be reliably reproduced.
I
On Tue, Sep 10, 2013 at 03:08:24PM -0700, Josh Berkus wrote:
Merlin,
I vote 4x on the basis that for this setting (unlike almost all the
other memory settings) the ramifications for setting it too high
generally aren't too bad. Also, the o/s and temporary memory usage as
a share of
On Wed, Sep 11, 2013 at 09:18:30AM -0400, Bruce Momjian wrote:
On Tue, Sep 10, 2013 at 03:08:24PM -0700, Josh Berkus wrote:
Merlin,
I vote 4x on the basis that for this setting (unlike almost all the
other memory settings) the ramifications for setting it too high
generally aren't
Bruce Momjian wrote:
So, are you saying you like 4x now?
Here is an argument for 3x. First, using the documented 25% of RAM, 3x
puts our effective_cache_size as 75% of RAM, giving us no room for
kernel, backend memory, and work_mem usage. If anything it should be
lower than 3x, not
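The multiplier arithmetic behind this 3x-vs-4x argument can be sketched briefly (Python; fractions of total RAM, using the documented 25%-of-RAM shared_buffers guidance from the message above):

```python
# Sketch of the argument above: with shared_buffers at the documented
# 25% of RAM, an auto-tuned effective_cache_size of N x shared_buffers
# assumes N * 25% of RAM is available as cache.
SHARED_BUFFERS_FRACTION = 0.25  # documented 25%-of-RAM guidance

def ecs_fraction_of_ram(multiplier):
    """Fraction of total RAM that effective_cache_size would claim."""
    return multiplier * SHARED_BUFFERS_FRACTION

three_x = ecs_fraction_of_ram(3)  # 75%: leaves 25% for kernel, backends, work_mem
four_x = ecs_fraction_of_ram(4)   # 100%: leaves nothing for anything else
```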
On Wed, Sep 11, 2013 at 12:43:07PM -0300, Alvaro Herrera wrote:
Bruce Momjian escribió:
So, are you saying you like 4x now?
Here is an argument for 3x. First, using the documented 25% of RAM, 3x
puts our effective_cache_size as 75% of RAM, giving us no room for
kernel, backend
On 09/11/2013 08:27 AM, Bruce Momjian wrote:
On Wed, Sep 11, 2013 at 09:18:30AM -0400, Bruce Momjian wrote:
On Tue, Sep 10, 2013 at 03:08:24PM -0700, Josh Berkus wrote:
Another argument in favor: this is a default setting, and by default,
shared_buffers won't be 25% of RAM.
So, are you
On Wed, Sep 11, 2013 at 12:27 PM, Bruce Momjian br...@momjian.us wrote:
Another argument in favor: this is a default setting, and by default,
shared_buffers won't be 25% of RAM.
So, are you saying you like 4x now?
Here is an argument for 3x. First, using the documented 25% of RAM, 3x
On 2013-09-11 12:53:29 -0400, Bruce Momjian wrote:
On Wed, Sep 11, 2013 at 12:43:07PM -0300, Alvaro Herrera wrote:
Bruce Momjian wrote:
So, are you saying you like 4x now?
Here is an argument for 3x. First, using the documented 25% of RAM, 3x
puts our effective_cache_size
I think that most of the arguments in this thread drastically
overestimate the precision and the effect of effective_cache_size. The
planner logic behind it basically only uses it to calculate things
within a single index scan. That alone shows that any precise
calculation cannot be very
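The "single index scan" use mentioned here is the Mackert-Lohman fetch estimate in the planner's cost model (index_pages_fetched() in costsize.c). A simplified Python sketch of that formula, treating effective_cache_size as a count of table pages and ignoring the planner's scaling for index pages and repeated scans:

```python
def index_pages_fetched(tuples_fetched, table_pages, cache_pages):
    """Mackert-Lohman approximation of distinct heap pages fetched by an
    index scan; cache_pages plays the role of effective_cache_size.
    Simplified from the formula in PostgreSQL's costsize.c comments."""
    T = float(table_pages)     # pages in the table
    Ns = float(tuples_fetched) # total tuples fetched
    b = float(cache_pages)     # assumed cache size, in pages
    if T <= b:
        # Table fits in cache: never fetch a page more than once.
        return min(2 * T * Ns / (2 * T + Ns), T)
    lim = 2 * T * b / (2 * T - b)
    if Ns <= lim:
        return 2 * T * Ns / (2 * T + Ns)
    # Cache overflows: extra fetches re-read already-evicted pages.
    return b + (Ns - lim) * (T - b) / T

# A larger assumed cache can only lower the estimated page fetches,
# which is how effective_cache_size nudges the planner toward index scans.
big_cache = index_pages_fetched(10_000, 1_000, 2_000)
small_cache = index_pages_fetched(10_000, 1_000, 100)
```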
On Mon, Sep 9, 2013 at 6:29 PM, Bruce Momjian br...@momjian.us wrote:
On Thu, Sep 5, 2013 at 09:02:27PM -0700, Josh Berkus wrote:
On 09/05/2013 03:30 PM, Merlin Moncure wrote:
Standard advice we've given in the past is 25% shared buffers, 75%
effective_cache_size. Which would make EFS
On Tue, Sep 10, 2013 at 11:39 AM, Jeff Janes jeff.ja...@gmail.com wrote:
On Mon, Sep 9, 2013 at 6:29 PM, Bruce Momjian br...@momjian.us wrote:
On Thu, Sep 5, 2013 at 09:02:27PM -0700, Josh Berkus wrote:
On 09/05/2013 03:30 PM, Merlin Moncure wrote:
Standard advice we've given in the past is
Merlin,
I vote 4x on the basis that for this setting (unlike almost all the
other memory settings) the ramifications for setting it too high
generally aren't too bad. Also, the o/s and temporary memory usage as
a share of total physical memory has been declining over time
If we're doing
On Tue, Sep 10, 2013 at 5:08 PM, Josh Berkus j...@agliodbs.com wrote:
Merlin,
I vote 4x on the basis that for this setting (unlike almost all the
other memory settings) the ramifications for setting it too high
generally aren't too bad. Also, the o/s and temporary memory usage as
a share of
On Thu, Sep 5, 2013 at 09:02:27PM -0700, Josh Berkus wrote:
On 09/05/2013 03:30 PM, Merlin Moncure wrote:
Standard advice we've given in the past is 25% shared buffers, 75%
effective_cache_size. Which would make EFS *3X* shared_buffers, not 4X.
Maybe we're changing the conventional
On Thursday, 5 September 2013 17:14:37, Bruce Momjian wrote:
On Thu, Sep 5, 2013 at 06:14:33PM +0200, Magnus Hagander wrote:
I have developed the attached patch which implements an auto-tuned
effective_cache_size which is 4x the size of shared buffers. I had to
set effective_cache_size
Magnus Hagander mag...@hagander.net writes:
On Thu, Sep 5, 2013 at 3:01 AM, Bruce Momjian br...@momjian.us wrote:
I have developed the attached patch which implements an auto-tuned
effective_cache_size which is 4x the size of shared buffers. I had to
set effective_cache_size to its old 128MB
On Thu, Sep 5, 2013 at 3:01 AM, Bruce Momjian br...@momjian.us wrote:
On Tue, Jan 8, 2013 at 08:40:44PM -0500, Andrew Dunstan wrote:
On 01/08/2013 08:08 PM, Tom Lane wrote:
Robert Haas robertmh...@gmail.com writes:
On Tue, Jan 8, 2013 at 7:17 PM, Tom Lane t...@sss.pgh.pa.us wrote:
... And
On Thu, Sep 5, 2013 at 06:14:33PM +0200, Magnus Hagander wrote:
I have developed the attached patch which implements an auto-tuned
effective_cache_size which is 4x the size of shared buffers. I had to
set effective_cache_size to its old 128MB default so the EXPLAIN
regression tests would
On Thu, Sep 5, 2013 at 12:48:54PM -0400, Tom Lane wrote:
Magnus Hagander mag...@hagander.net writes:
On Thu, Sep 5, 2013 at 3:01 AM, Bruce Momjian br...@momjian.us wrote:
I have developed the attached patch which implements an auto-tuned
effective_cache_size which is 4x the size of shared
On 09/05/2013 02:16 PM, Bruce Momjian wrote:
Well, the real problem with this patch is that it documents what the
auto-tuning algorithm is; without that commitment, just saying -1 means
autotune might be fine.
OK, but I did this based on wal_buffers, which has a -1 default, calls
it
On Thu, Sep 5, 2013 at 5:11 PM, Josh Berkus j...@agliodbs.com wrote:
On 09/05/2013 02:16 PM, Bruce Momjian wrote:
Well, the real problem with this patch is that it documents what the
auto-tuning algorithm is; without that commitment, just saying -1 means
autotune might be fine.
OK, but I did
On Thu, Sep 5, 2013 at 03:11:53PM -0700, Josh Berkus wrote:
On 09/05/2013 02:16 PM, Bruce Momjian wrote:
Well, the real problem with this patch is that it documents what the
auto-tuning algorithm is; without that commitment, just saying -1 means
autotune might be fine.
OK, but I did
On 09/05/2013 03:30 PM, Merlin Moncure wrote:
Standard advice we've given in the past is 25% shared buffers, 75%
effective_cache_size. Which would make EFS *3X* shared_buffers, not 4X.
Maybe we're changing the conventional calculation, but I thought I'd
point that out.
This was debated
On Tue, Jan 8, 2013 at 08:40:44PM -0500, Andrew Dunstan wrote:
On 01/08/2013 08:08 PM, Tom Lane wrote:
Robert Haas robertmh...@gmail.com writes:
On Tue, Jan 8, 2013 at 7:17 PM, Tom Lane t...@sss.pgh.pa.us wrote:
... And I don't especially like the idea of trying to
make it depend directly
Josh Berkus wrote:
The, shared_buffers, wal_buffers, and effective_cache_size (and possible
other future settings) can be set to -1. If they are set to -1, then we
use the figure:
shared_buffers = available_ram * 0.25
(with a ceiling of 8GB)
wal_buffers = available_ram * 0.05
(with a
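The -1 auto-tuning rules quoted in that proposal can be sketched as follows (Python; the wal_buffers ceiling is cut off in the quote, so the 16MB cap used here is an assumption based on that GUC's usual maximum):

```python
GB = 1024 ** 3
MB = 1024 ** 2

def autotune(available_ram):
    """Sketch of the proposed -1 auto-tuning rules. The 8GB
    shared_buffers ceiling is from the quoted proposal; the 16MB
    wal_buffers ceiling is an assumption (the quote is truncated)."""
    shared_buffers = min(int(available_ram * 0.25), 8 * GB)
    wal_buffers = min(int(available_ram * 0.05), 16 * MB)
    return shared_buffers, wal_buffers

# On a 16GB box: shared_buffers lands at 4GB, wal_buffers hits its cap.
sb, wb = autotune(16 * GB)
```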
On Wed, Jan 9, 2013 at 12:38 AM, Benedikt Grundmann
bgrundm...@janestreet.com wrote:
For what it is worth even if it is a dedicated database box 75% might be way
too high. I remember investigating bad performance on our biggest database
server, that in the end turned out to be a too high
On Wed, Jan 9, 2013 at 2:01 AM, Josh Berkus j...@agliodbs.com wrote:
All,
Well, the problem of finding out the box's physical RAM is doubtless
solvable if we're willing to put enough sweat and tears into it, but
I'm dubious that it's worth the trouble. The harder part is how to know
if