Hi,
to my understanding it used to be "common sense" to limit shared_buffers,
maybe to around 32 GB, due to the resource consumption of managing said cache.
I would like to know if, on a v15 server with 256 GB of RAM, setting shared_buffers
to, say, 96 GB would benefit large BI queries?
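For reference, checking and changing the setting looks like this; the 96 GB figure is just the value from the question, and the change only takes effect after a server restart:

```sql
-- Show the current setting
SHOW shared_buffers;

-- Raise it to the value under discussion (requires a restart)
ALTER SYSTEM SET shared_buffers = '96GB';
```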
and when we ran EXPLAIN (ANALYZE, BUFFERS), we noticed
that there is a huge difference in shared buffers hit for this query on the
replica vs. the primary; please find the details below:
on primary:
https://explain.depesz.com/s/TuMD
on replica:
https://explain.depesz.com/s/auJp
Note the Buffers lines in each plan.
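For anyone wanting to reproduce this kind of comparison, the plans above come from running something like the following on both the primary and the replica (the query shown here is a hypothetical placeholder, not the one from the thread):

```sql
-- Run on primary and replica, then compare the Buffers: shared hit/read lines
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM bookings;
```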
On 07/18/2018 10:43 AM, Andreas Kretschmer wrote:
>
>
> Am 18.07.2018 um 10:26 schrieb Hans Schou:
>> Am I doing something wrong or should some history be cleared?
>
> Reset the stats for that database. You can check the date of last reset
> with:
>
> select stats_reset from pg_stat_database
On Wed, Jul 18, 2018 at 10:44 AM Andreas Kretschmer wrote:
>
> pg_stat_reset()
>
Thanks, I guess we can see the result in a few days.
BTW, strange command: it only resets the current database and can't take a
database name as a parameter.
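Putting the two replies together, the check-then-reset sequence looks like this; as noted above, pg_stat_reset() acts only on the database you are currently connected to:

```sql
-- When were the statistics last reset, per database?
SELECT datname, stats_reset FROM pg_stat_database;

-- Reset statistics for the *current* database only
SELECT pg_stat_reset();
```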
Am 18.07.2018 um 10:26 schrieb Hans Schou:
Am I doing something wrong or should some history be cleared?
Reset the stats for that database. You can check the date of last reset
with:
select stats_reset from pg_stat_database where datname = 'database_name';
and reset it with:
select pg_stat_reset();
Hi
I have this system with some databases and I have run the
cache_hit_ratio.sql script on it. It showed that the db acme777booking had
a ratio of 85%. I then changed shared_buffers from 0.5GB to 4GB, as the
server has 16GB of physical RAM. After 6 days of running I checked the
ratio again and
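The cache_hit_ratio.sql script itself is not shown in the thread; a typical version computes the ratio from the pg_stat_database counters like this (a sketch of the usual approach, not necessarily the exact script used):

```sql
-- Cache hit ratio per database, from cumulative buffer statistics
SELECT datname,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2)
         AS cache_hit_pct
FROM pg_stat_database
WHERE datname = 'acme777booking';
```

Because these counters are cumulative since the last stats reset, resetting them (as discussed above) is what makes a before/after comparison of a shared_buffers change meaningful.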