Try running postgresqltuner.pl as suggested on
https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server, and also
look at the other information there.

After running for a few days with live data, run cache_hit_ratio.sql by
Melvin Davidson:
SELECT pg_stat_database.datname,
       pg_stat_database.blks_read,
       pg_stat_database.blks_hit,
       round((pg_stat_database.blks_hit::double precision
              / (pg_stat_database.blks_read
                 + pg_stat_database.blks_hit
                 + 1)::double precision * 100::double precision)::numeric,
             2) AS cachehitratio
  FROM pg_stat_database
 WHERE pg_stat_database.datname !~ '^(template(0|1)|postgres)$'::text
 ORDER BY round((pg_stat_database.blks_hit::double precision
                 / (pg_stat_database.blks_read
                    + pg_stat_database.blks_hit
                    + 1)::double precision * 100::double precision)::numeric,
                2) DESC;
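
If the database-wide ratio looks low, a per-table variant against
pg_statio_user_tables can show which tables account for the physical
reads. This is a rough sketch along the same lines, not part of Melvin's
script, so adjust it to taste:

-- worst (lowest) heap hit ratios first; NULL means the table was never read
SELECT relname,
       heap_blks_read,
       heap_blks_hit,
       round(heap_blks_hit::numeric * 100
             / nullif(heap_blks_hit + heap_blks_read, 0), 2) AS heaphitratio
  FROM pg_statio_user_tables
 ORDER BY heaphitratio NULLS LAST;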

The real question is: Is your system slow?
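
If it is, and you can afford a restart to add pg_stat_statements to
shared_preload_libraries (plus CREATE EXTENSION pg_stat_statements in the
database), something like the query below shows where the time actually
goes. The column names are the PostgreSQL 11 ones; in 13 and later they
were renamed to total_exec_time / mean_exec_time:

-- top 10 statements by cumulative execution time (milliseconds)
SELECT query,
       calls,
       round(total_time::numeric, 1) AS total_ms,
       round(mean_time::numeric, 1)  AS mean_ms,
       shared_blks_hit,
       shared_blks_read
  FROM pg_stat_statements
 ORDER BY total_time DESC
 LIMIT 10;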


On Sun, Jun 30, 2019 at 5:14 AM Daulat Ram <daulat....@exponential.com>
wrote:

> Hi team,
>
> Can you please suggest suitable memory settings for PostgreSQL 11 if we
> have 80GB RAM, 16 CPUs and Linux as the OS?
>
> If we set 25% of total RAM, then the shared_buffers value will be 20GB.
> Will that be useful, or can we set it to some other value like 8GB or 12GB?
>
> According to https://pgtune.leopard.in.ua/#/, below are the suggested
> memory values for 80GB RAM and 16 CPUs. I assume the values it prefers
> for effective_cache_size = 60GB and shared_buffers = 20GB are too large.
>
> max_connections = 500
> shared_buffers = 20GB
> effective_cache_size = 60GB
> maintenance_work_mem = 2GB
> checkpoint_completion_target = 0.7
> wal_buffers = 16MB
> default_statistics_target = 100
> random_page_cost = 1.1
> effective_io_concurrency = 300
> work_mem = 6553kB
> min_wal_size = 1GB
> max_wal_size = 2GB
> max_worker_processes = 16
> max_parallel_workers_per_gather = 8
> max_parallel_workers = 16
>
> Please give your suggestions.
>
> Regards,
> Daulat
