Re: slow query

2020-04-03 Thread dangal
Justin, thank you very much for your answer. As you can also see, the estimated and actual number of rows differ a lot.
I am attaching more of the explain below, but not the complete plan, because it is very large.

"HashAggregate  (cost=12640757.46..12713163.46 rows=385 width=720) (actual
time=1971962.023..1971962.155 rows=306 loops=1)"
"  Output: bi.bus_ent_inst_name_num, bi.att_value_num_7, bi.att_value_10,
((SubPlan 1)), ((SubPlan 2)), a2.ent_inst_att_str_value, ba.att_value_1,
depto2.att_value_1, loc2.att_value_1, att_call.ent_inst_att_str_value,
att_nro.ent_inst_att_str_value, att_bis (...)"
"  Group Key: bi.bus_ent_inst_name_num, bi.att_value_num_7, bi.att_value_10,
(SubPlan 1), (SubPlan 2), a2.ent_inst_att_str_value, ba.att_value_1,
depto2.att_value_1, loc2.att_value_1, att_call.ent_inst_att_str_value,
att_nro.ent_inst_att_str_value, att_bis. (...)"
"  Buffers: shared hit=5817744 read=1034292 dirtied=790, local hit=2"
"  ->  Nested Loop  (cost=4347.52..12640740.13 rows=385 width=720)
(actual time=1906401.083..1971959.176 rows=306 loops=1)"
"Output: bi.bus_ent_inst_name_num, bi.att_value_num_7,
bi.att_value_10, (SubPlan 1), (SubPlan 2), a2.ent_inst_att_str_value,
ba.att_value_1, depto2.att_value_1, loc2.att_value_1,
att_call.ent_inst_att_str_value, att_nro.ent_inst_att_str_value, att_b
(...)"
"Buffers: shared hit=5817744 read=1034292 dirtied=790, local hit=2"
"->  Hash Join  (cost=4346.94..12228344.41 rows=1427 width=704)
(actual time=1906372.468..1964409.907 rows=306 loops=1)"
"  Output: bi.bus_ent_inst_name_num, bi.att_value_num_7,
bi.att_value_10, ba.bus_ent_inst_id_auto, ba.att_value_1,
att_call.ent_inst_att_str_value, att_nro.ent_inst_att_str_value,
att_bis.ent_inst_att_str_value, att_pad.ent_inst_att_str_value, a (...)"
"  Hash Cond: (ba.att_value_num_1 =
(bi.bus_ent_inst_name_num)::numeric)"
"  Buffers: shared hit=5814458 read=1033324 dirtied=790, local
hit=2"
"  ->  Hash Right Join  (cost=4339.65..12172907.42
rows=886647 width=158) (actual time=1906344.617..1963668.889 rows=3362294
loops=1)"
"Output: ba.bus_ent_inst_id_auto, ba.att_value_1,
ba.att_value_num_1, att_call.ent_inst_att_str_value,
att_nro.ent_inst_att_str_value, att_bis.ent_inst_att_str_value,
att_pad.ent_inst_att_str_value, att_manz.ent_inst_att_str_value, att_a
(...)"
"Hash Cond: ((att_barr.env_id = ba.env_id) AND
(att_barr.bus_ent_inst_id = ba.bus_ent_inst_id_auto))"
"Buffers: shared hit=5814458 read=1033324 dirtied=790"
"->  Index Only Scan using ix_bus_ent_inst_attr_03 on
public.bus_ent_inst_attribute att_barr  (cost=0.83..1024093.06 rows=4508264
width=24) (actual time=10.435..52888.091 rows=4244011 loops=1)"
"  Output: att_barr.att_id,
att_barr.ent_inst_att_str_value, att_barr.env_id, att_barr.bus_ent_inst_id,
att_barr.reg_status"
"  Index Cond: (att_barr.att_id = 1115)"
"  Heap Fetches: 120577"
"  Buffers: shared hit=503194 read=31197 dirtied=5"
"->  Hash  (cost=11101039.12..11101039.12 rows=886647
width=146) (actual time=1906329.888..1906329.888 rows=3362294 loops=1)"
"  Output: ba.bus_ent_inst_id_auto, ba.att_value_1,
ba.env_id, ba.att_value_num_1, att_call.ent_inst_att_str_value,
att_nro.ent_inst_att_str_value, att_bis.ent_inst_att_str_value,
att_pad.ent_inst_att_str_value, att_manz.ent_inst_att (...)"
"  Buckets: 4194304 (originally 1048576)  Batches: 1
(originally 1)  Memory Usage: 396824kB"
"  Buffers: shared hit=5311264 read=1002127
dirtied=785"
"  ->  Hash Right Join 
(cost=10328938.09..11101039.12 rows=886647 width=146) (actual
time=1867557.718..1904218.946 rows=3362294 loops=1)"
"Output: ba.bus_ent_inst_id_auto,
ba.att_value_1, ba.env_id, ba.att_value_num_1,
att_call.ent_inst_att_str_value, att_nro.ent_inst_att_str_value,
att_bis.ent_inst_att_str_value, att_pad.ent_inst_att_str_value,
att_manz.ent_in (...)"
"Hash Cond: ((att_apt.env_id = ba.env_id)
AND (att_apt.bus_ent_inst_id = ba.bus_ent_inst_id_auto))"
"Buffers: shared hit=5311264 read=1002127
dirtied=785"
"->  Index Only Scan using
ix_bus_ent_inst_attr_03 on public.bus_ent_inst_attribute att_apt 
(cost=0.83..746958.06 rows=3287982 width=24) (actual time=0.091..32788.731
rows=3491599 loops=1)"
"  Output: att_apt.att_id,
att_apt.ent_inst_att_str_value, att_apt.env_id, att_apt.bus_ent_inst_id,
att_apt.reg_status"
"  Index Cond: (att_apt.att_id = 1113)"
"  Heap Fetches: 88910"
"  Buffers: shared hit=178090 read=25341
dirtied=5"
"->  Hash  

slow query

2020-04-03 Thread dangal
Dear all, I have a question for you.
I am having a problem with a slow query, and in the explain I can see that the estimated and actual figures differ by about a factor of 4.
PostgreSQL version is 9.5.16 on CentOS 7.6.
To try to solve this I ran the statistics on the table, but the same problem remains.
It is a very big table, about 2 billion tuples.
Do you have any idea what I can do to improve this?
Thank you very much for your time.
I can provide any data you need.

I share part of the explain below:

Hash Right Join  (cost=4339.65..12172907.42 rows=886647 width=158) (actual time=1906344.617..1963668.889 rows=3362294 loops=1)
  Output: ba.bus_ent_inst_id_auto, ba.att_value_1, ba.att_value_num_1, att_call.ent_inst_att_str_value, att_nro.ent_inst_att_str_value, att_bis.ent_inst_att_str_value, att_pad.ent_inst_att_str_value, att_manz.ent_inst_att_str_value, att_a (...)
  Hash Cond: ((att_barr.env_id = ba.env_id) AND (att_barr.bus_ent_inst_id = ba.bus_ent_inst_id_auto))
  Buffers: shared hit=5814458 read=1033324 dirtied=790









Re: pg_stat_bgwriter

2019-10-17 Thread dangal
Thank you very much, Justin. I am looking into installing the product you recommended!







Re: pg_stat_bgwriter

2019-10-17 Thread dangal
Thank you very much, Jeff. I'll check with the team that manages the operating system to see if they can help me with the data you have given me.







Re: pg_stat_bgwriter

2019-10-16 Thread dangal
Excuse me, can you tell me how I can achieve this?

"The question is how that compared to database size, and size of the
active set (fraction of the database accessed by the application /
queries)."








Re: pg_stat_bgwriter

2019-10-16 Thread dangal
Hi Tomas, I reset the statistics and took samples over 24 hours, to see if you can help me.
The server has 24 GB of memory and 6 GB of shared_buffers.
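
For context, pg_stat_bgwriter_snapshot is not a system view, it is just a local table with a timestamp column plus the pg_stat_bgwriter columns; it is filled by a periodic job roughly like the following (a sketch of the assumed setup, the real job may differ):

-- Sketch (assumed setup): periodic job that snapshots pg_stat_bgwriter.
INSERT INTO pg_stat_bgwriter_snapshot
SELECT now() AS now, bgw.*
  FROM pg_stat_bgwriter AS bgw;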

select now,
       pg_size_pretty(buffers_checkpoint*8192) AS buffers_checkpoint,
       pg_size_pretty(buffers_clean*8192) AS buffers_clean,
       pg_size_pretty(buffers_backend*8192) AS buffers_backend,
       (buffers_checkpoint*100)/(buffers_checkpoint+buffers_clean+buffers_backend) AS buffers_checkpoint_pct,
       (buffers_clean*100)/(buffers_checkpoint+buffers_clean+buffers_backend) AS buffers_clean_pct,
       (buffers_backend*100)/(buffers_checkpoint+buffers_clean+buffers_backend) AS buffers_backend_pct,
       pg_size_pretty(buffers_checkpoint * 8192 / (checkpoints_timed + checkpoints_req)) AS avg_checkpoint_write,
       pg_size_pretty(8192 * (buffers_checkpoint + buffers_clean + buffers_backend)) AS total_write
  from pg_stat_bgwriter_snapshot;
now | buffers_checkpoint | buffers_clean | buffers_backend | buffers_checkpoint_pct | buffers_clean_pct | buffers_backend_pct | avg_checkpoint_write | total_write
--------------------------------------------------------------------------------------------------------------------------------------------------------------
2019-10-15 15:00:02.070105-03 | 33 MB   | 1190 MB  | 144 MB  | 2 | 87 | 10 | 33 MB | 1367 MB
2019-10-15 16:00:01.477785-03 | 109 MB  | 3543 MB  | 393 MB  | 2 | 87 |  9 | 36 MB | 4045 MB
2019-10-15 17:00:01.960162-03 | 179 MB  | 6031 MB  | 703 MB  | 2 | 87 | 10 | 36 MB | 6913 MB
2019-10-15 18:00:01.558404-03 | 252 MB  | 8363 MB  | 1000 MB | 2 | 86 | 10 | 36 MB | 9615 MB
2019-10-15 19:00:01.170866-03 | 327 MB  | 10019 MB | 1232 MB | 2 | 86 | 10 | 36 MB | 11 GB
2019-10-15 20:00:01.397473-03 | 417 MB  | 11 GB    | 1407 MB | 3 | 85 | 10 | 38 MB | 13 GB
2019-10-15 21:00:01.211047-03 | 522 MB  | 12 GB    | 1528 MB | 3 | 85 | 11 | 40 MB | 14 GB
2019-10-15 22:00:01.164853-03 | 658 MB  | 12 GB    | 1691 MB | 4 | 83 | 11 | 44 MB | 14 GB
2019-10-15 23:00:01.116564-03 | 782 MB  | 13 GB    | 1797 MB | 5 | 83 | 11 | 46 MB | 15 GB
2019-10-16 00:00:01.19203-03  | 887 MB  | 13 GB    | 2016 MB | 5 | 82 | 12 | 47 MB | 16 GB
2019-10-16 01:00:01.329851-03 | 1003 MB | 14 GB    | 2104 MB | 5 | 81 | 12 | 48 MB | 17 GB
2019-10-16 02:00:01.518606-03 | 1114 MB | 14 GB    |  MB     | 6 | 81 | 12 | 48 MB | 17 GB
2019-10-16 03:00:01.673498-03 | 1227 MB | 14 GB    | 2314 MB | 6 | 80 | 12 | 49 MB | 18 GB
2019-10-16 04:00:01.936604-03 | 1354 MB | 15 GB    | 2468 MB | 7 | 79 | 12 | 50 MB | 19 GB
2019-10-16 05:00:01.854888-03 | 1465 MB | 15 GB    | 2518 MB | 7 | 79 | 13 | 51 MB | 19 GB
2019-10-16 06:00:01.804182-03 | 1585 MB | 15 GB    | 2581 MB | 8 | 78 | 13 | 51 MB | 19 GB
2019-10-16 07:00:01.889345-03 | 1677 MB | 15 GB    | 2649 MB | 8 | 78 | 13 | 51 MB | 20 GB
2019-10-16 08:00:01.248247-03 | 1756 MB | 16 GB    | 2707 MB | 8 | 78 | 13 | 50 MB | 20 GB
2019-10-16 09:00:01.258408-03 | 1826 MB | 16 GB    | 2763 MB | 8 | 78 | 13 | 49 MB | 21 GB
2019-10-16 10:00:01.418323-03 | 1881 MB | 17 GB    | 2872 MB | 8 | 78 | 13 | 48 MB | 21 GB
2019-10-16 11:00:02.077084-03 | 1951 MB | 18 GB

Re: pg_stat_bgwriter

2019-10-14 Thread dangal
Thanks a lot, you are always helpful.
I attached the snapshots of pg_stat_bgwriter that I take every 12 hours.

select now, buffers_checkpoint, buffers_clean, buffers_backend
  from pg_stat_bgwriter_snapshot;

now | buffers_checkpoint | buffers_clean | buffers_backend
----------------------------------------------------------
2019-10-07 12:00:01.312067-03 |  288343 |  1182944 |  520101
2019-10-08 00:00:02.034129-03 |  475323 |  3890772 |  975454
2019-10-08 12:00:01.500756-03 |  616154 |  4774924 | 1205261
2019-10-09 00:00:01.520329-03 |  784840 |      731 | 1601278
2019-10-09 12:00:01.388113-03 | 1149560 |  8395288 | 2456249
2019-10-10 00:00:01.841054-03 | 1335747 | 11023014 | 2824740
2019-10-10 12:00:01.354555-03 | 1486963 | 11919462 | 2995211
2019-10-11 00:00:01.519538-03 | 1649066 | 14400593 | 3360700
2019-10-11 12:00:01.468203-03 | 1979781 | 15332086 | 4167663
2019-10-12 00:00:01.343714-03 | 2161116 | 17791871 | 4525957
2019-10-12 12:00:01.991429-03 | 2323194 | 18324723 | 5139418
2019-10-13 00:00:01.251191-03 | 2453939 | 19059149 | 5306894
2019-10-13 12:00:01.677379-03 | 2782606 | 19391676 | 5878981
2019-10-14 00:00:01.824249-03 | 2966021 | 19915346 | 6040316
2019-10-14 12:00:01.869126-03 | 3117659 | 20675018 | 6184214
 
For context, we have a server with 24 GB of RAM and 6 GB of shared_buffers.
Regarding your comment that shared_buffers may be too low for us, the query I run to see what is happening is the following (the first 10 entries are inserts, updates and an autovacuum):

select calls, shared_blks_hit, shared_blks_read, shared_blks_dirtied
  from pg_stat_statements
 where shared_blks_dirtied > 0
 order by shared_blks_dirtied desc
 limit 10;
 

  calls    | shared_blks_hit | shared_blks_read | shared_blks_dirtied
-----------+-----------------+------------------+---------------------
  41526844 |      1524091324 |         74477743 |            40568348
  22707516 |      1317743612 |         33153916 |            28106071
    517309 |       539285911 |         24583841 |            24408950
        23 |        23135504 |        187638126 |            15301103
  11287105 |       383864219 |         18369813 |            13879956
   2247661 |       275357344 |          9252598 |             6084363
  13070036 |       244904154 |          5557321 |             5871613
  54158879 |       324425993 |          5054200 |             4676472
  24955177 |       125421833 |          5775788 |             4517367
 142807488 |     14401507751 |         81965894 |             2661358
(10 rows)
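
For reference, the way to see which statements these rows correspond to is to also select the query text, roughly like this (a sketch, not the exact query used above):

-- Sketch: same ordering as above, plus the start of each statement's text.
select left(query, 60) AS query_start,
       calls, shared_blks_hit, shared_blks_read, shared_blks_dirtied
  from pg_stat_statements
 where shared_blks_dirtied > 0
 order by shared_blks_dirtied desc
 limit 10;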

Another query

SELECT pg_size_pretty(count(*) * 8192) as buffered,
   round(100.0 * count(*) /
 (SELECT setting FROM pg_settings WHERE name = 'shared_buffers')
 ::integer,
 1) AS buffers_percent,
   round(100.0 * count(*) * 8192 / pg_table_size(c.oid), 1) AS
percent_of_relation
  FROM pg_class c
 INNER JOIN pg_buffercache b
ON b.relfilenode = c.relfilenode
 INNER JOIN pg_database d
ON (b.reldatabase = d.oid AND d.datname = current_database())
 GROUP BY c.oid, c.relname
 ORDER BY 3 DESC LIMIT 10;

 buffered | buffers_percent | percent_of_relation
----------+-----------------+---------------------
 3938 MB  | 64.1            | 53.2
 479 MB   | 7.8             | 21.3
 261 MB   | 4.3             | 99.3
 163 MB   | 2.6             | 0.1
 153 MB   | 2.5             | 6.7
 87 MB    | 1.4             | 1.2
 82 MB    | 1.3             | 81.6
 65 MB    | 1.1             | 100.0
 64 MB    | 1.0             | 0.1
 53 MB    | 0.9             | 73.5
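
The same query can also show which relation each row corresponds to by adding c.relname to the select list; a sketch (this is not what produced the output above):

select c.relname,
       pg_size_pretty(count(*) * 8192) AS buffered,
       round(100.0 * count(*) /
             (SELECT setting FROM pg_settings WHERE name = 'shared_buffers')::integer, 1) AS buffers_percent,
       round(100.0 * count(*) * 8192 / pg_table_size(c.oid), 1) AS percent_of_relation
  FROM pg_class c
 INNER JOIN pg_buffercache b ON b.relfilenode = c.relfilenode
 INNER JOIN pg_database d ON (b.reldatabase = d.oid AND d.datname = current_database())
 GROUP BY c.oid, c.relname
 ORDER BY buffers_percent DESC
 LIMIT 10;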







pg_stat_bgwriter

2019-10-13 Thread dangal
Dear all, I would like to share the pg_stat_bgwriter statistics with you to see what you think.

postgres=# select * from pg_stat_bgwriter;
-[ RECORD 1 ]---------+-------------------------------
checkpoints_timed     | 338
checkpoints_req       | 6
checkpoint_write_time | 247061792
checkpoint_sync_time  | 89418
buffers_checkpoint    | 2939561
buffers_clean         | 19872289
maxwritten_clean      | 54876
buffers_backend       | 6015787
buffers_backend_fsync | 0
buffers_alloc         | 710682240
stats_reset           | 2019-10-06 19:25:30.688186-03

postgres=# show bgwriter_delay;
 bgwriter_delay
----------------
 200ms
(1 row)

postgres=# show bgwriter_lru_maxpages;
 bgwriter_lru_maxpages
-----------------------
 100
(1 row)

postgres=# show bgwriter_lru_multiplier;
 bgwriter_lru_multiplier
-------------------------
 2
(1 row)


Do you think I should increase bgwriter_lru_maxpages because of the value of maxwritten_clean?
Do you think I should increase bgwriter_lru_maxpages and bgwriter_lru_multiplier, and decrease bgwriter_delay, given the value of buffers_backend compared to buffers_alloc?
Do you think a modification is necessary?
What values would you recommend?
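
For example, the kind of change I have in mind is roughly the following (hypothetical values only, I am not sure they are appropriate):

-- Hypothetical values, for discussion only; these settings only need a reload.
ALTER SYSTEM SET bgwriter_lru_maxpages = 400;    -- up from the default 100
ALTER SYSTEM SET bgwriter_lru_multiplier = 3.0;  -- up from 2
ALTER SYSTEM SET bgwriter_delay = '100ms';       -- down from 200ms
SELECT pg_reload_conf();
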
thank you







Re: autovacuum big table taking hours and sometimes seconds

2019-02-06 Thread dangal
Would it be a good idea to start changing those values, which come set so low in the default postgresql.conf?
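
For reference, the kind of defaults I have in mind are along these lines (stock 9.x values, shown only as a sketch):

-- Stock autovacuum-related defaults on 9.x:
SHOW autovacuum_vacuum_scale_factor;  -- 0.2 (vacuum after ~20% of a table changes)
SHOW autovacuum_vacuum_cost_delay;    -- 20ms
SHOW autovacuum_vacuum_cost_limit;    -- -1, i.e. falls back to vacuum_cost_limit = 200
SHOW autovacuum_max_workers;          -- 3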






Re: Memory size

2018-03-12 Thread dangal

Following your recommendations I looked into this and found the following; let me know what you think.

cache hit rate: 0.99637443599712620769

We have the default checkpoint_timeout of 5 minutes.

total checkpoints | minutes between checkpoints
26927             | 0.358545045634493

temp_files | temp_size (in 10 days)
16870      | 171 GB
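
These figures come from queries roughly like the following (a sketch, not the exact ones I ran):

-- Cache hit rate across all databases
SELECT sum(blks_hit)::numeric / nullif(sum(blks_hit) + sum(blks_read), 0) AS cache_hit_rate
  FROM pg_stat_database;

-- Total checkpoints and average minutes between them since stats_reset
SELECT checkpoints_timed + checkpoints_req AS total_checkpoints,
       extract(epoch FROM (now() - stats_reset)) / 60
         / nullif(checkpoints_timed + checkpoints_req, 0) AS minutes_between_checkpoints
  FROM pg_stat_bgwriter;

-- Temporary files created and their total size
SELECT sum(temp_files) AS temp_files,
       pg_size_pretty(sum(temp_bytes)::bigint) AS temp_size
  FROM pg_stat_database;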

Do you believe that the problem may come from here?



