On Mon, Nov  5, 2018 at 12:50:01PM +0100, Peter Eisentraut wrote:
> On 16/10/2018 17:38, Bruce Momjian wrote:
> > diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
> > index 2317e8b..e471d7f 100644
> > --- a/src/backend/utils/misc/guc.c
> > +++ b/src/backend/utils/misc/guc.c
> > @@ -2987,10 +2987,9 @@ static struct config_int ConfigureNamesInt[] =
> >  
> >     {
> >             {"effective_cache_size", PGC_USERSET, QUERY_TUNING_COST,
> > -                   gettext_noop("Sets the planner's assumption about the 
> > size of the disk cache."),
> > -                   gettext_noop("That is, the portion of the kernel's disk 
> > cache that "
> > -                                            "will be used for PostgreSQL 
> > data files. This is measured in disk "
> > -                                            "pages, which are normally 8 
> > kB each."),
> > +                   gettext_noop("Sets the planner's assumption about the 
> > size of the data cache."),
> > +                   gettext_noop("That is, the size of the cache used for 
> > PostgreSQL data files. "
> > +                                            "This is measured in disk 
> > pages, which are normally 8 kB each."),
> >                     GUC_UNIT_BLOCKS,
> >             },
> >             &effective_cache_size,
> 
> This change completely loses the context that this is the kernel's/host
> system's memory size.  What is "data cache"?  I think this is a bad
> change.  I know it's confusing, but the old description at least had
> some basis in terms that are known to the user.

Well, the premise of the change, as outlined earlier in the thread, is that
effective_cache_size covers the combination of shared_buffers and the kernel
cache, which I think the docs now make clear.  Do you have better wording for
the GUC?

-- 
  Bruce Momjian  <br...@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +
