On Wed, Aug 11, 2021 at 06:52:15PM +0530, Vijaykumar Jain wrote:
>  Just taking a shot, as I have seen this in some previous issues. Ignore if
> not relevant.
> 
> Can you run vacuum on pg_class and check the query again, or do you see
> pg_class bloated?

pg_class is large, but vacuuming it didn't improve the query time on
pg_stat_database.
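
For the record, this is more or less how I'd check its size and dead-tuple
estimate (just a sketch; n_dead_tup in pg_stat_sys_tables is only an
estimate):
#v+
=# select pg_size_pretty(pg_total_relation_size('pg_catalog.pg_class'));
=# select n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
     from pg_stat_sys_tables
    where relname = 'pg_class';
#v-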

vacuum output:
#v+
=# vacuum verbose analyze pg_class ;
INFO:  vacuuming "pg_catalog.pg_class"
INFO:  scanned index "pg_class_oid_index" to remove 3632 row versions
DETAIL:  CPU: user: 0.06 s, system: 0.00 s, elapsed: 0.06 s
INFO:  scanned index "pg_class_relname_nsp_index" to remove 3632 row versions
DETAIL:  CPU: user: 0.16 s, system: 0.17 s, elapsed: 0.46 s
INFO:  scanned index "pg_class_tblspc_relfilenode_index" to remove 3632 row versions
DETAIL:  CPU: user: 0.08 s, system: 0.01 s, elapsed: 0.10 s
INFO:  "pg_class": removed 3632 row versions in 662 pages
DETAIL:  CPU: user: 0.09 s, system: 0.00 s, elapsed: 0.09 s
INFO:  index "pg_class_oid_index" now contains 1596845 row versions in 11879 pages
DETAIL:  3632 index row versions were removed.
852 index pages have been deleted, 835 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  index "pg_class_relname_nsp_index" now contains 1596845 row versions in 64591 pages
DETAIL:  3627 index row versions were removed.
588 index pages have been deleted, 574 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  index "pg_class_tblspc_relfilenode_index" now contains 1596845 row versions in 12389 pages
DETAIL:  3632 index row versions were removed.
941 index pages have been deleted, 918 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  "pg_class": found 1226 removable, 59179 nonremovable row versions in 1731 out of 56171 pages
DETAIL:  0 dead row versions cannot be removed yet, oldest xmin: 1556677295
There were 42246 unused item identifiers.
Skipped 0 pages due to buffer pins, 13921 frozen pages.
0 pages are entirely empty.
CPU: user: 0.62 s, system: 0.19 s, elapsed: 0.94 s.
INFO:  analyzing "pg_catalog.pg_class"
INFO:  "pg_class": scanned 30000 of 56171 pages, containing 853331 live rows and 0 dead rows; 30000 rows in sample, 1597749 estimated total rows
VACUUM
Time: 2687.170 ms (00:02.687)
#v-
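
With the default 8 kB block size, the page counts above work out to roughly
440 MB for the table itself and about 500 MB for pg_class_relname_nsp_index
alone. That can be confirmed directly (a sketch):
#v+
=# select pg_size_pretty(pg_relation_size('pg_catalog.pg_class'));
=# select pg_size_pretty(pg_relation_size('pg_catalog.pg_class_relname_nsp_index'));
#v-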

> The other option would be a gdb backtrace; I think that would help.

backtrace of what? It doesn't *break*, it just takes a strangely long time.

I could envision attaching gdb to the pg backend process and getting a
backtrace, but when? Before running the query? After?
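
If it's meant to be while the query is running, I guess it would look
something like this (a sketch; <pid> being the backend pid of the session
running the slow query):
#v+
-- in the psql session that will run the slow query:
=# select pg_backend_pid();
-- then start the query, and while it's running, on the server:
$ gdb -p <pid> --batch -ex 'bt'
#v-
Repeating the attach a few times while the query runs would at least show
where the backend is spending its time.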

depesz

