OOM Killer kills PostgreSQL

2020-05-20 Thread Piotr Włodarczyk
Hi folks,

We experienced an unexpected PostgreSQL shutdown. After a little investigation
we discovered that the problem is the OOM killer, which kills our PostgreSQL.
Unfortunately we can't find the query on the DB causing this problem. The log
is below:

May 05 09:05:33 HOST kernel: postgres invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=-1000
May 05 09:05:34 HOST kernel: postgres cpuset=/ mems_allowed=0
May 05 09:05:34 HOST kernel: CPU: 0 PID: 28286 Comm: postgres Not tainted 3.10.0-1127.el7.x86_64 #1
May 05 09:05:34 HOST kernel: Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
May 05 09:05:34 HOST kernel: Call Trace:
May 05 09:05:34 HOST kernel:  [] dump_stack+0x19/0x1b
May 05 09:05:34 HOST kernel:  [] dump_header+0x90/0x229
May 05 09:05:34 HOST kernel:  [] ? cred_has_capability+0x6b/0x120
May 05 09:05:34 HOST kernel:  [] oom_kill_process+0x25e/0x3f0
May 05 09:05:35 HOST kernel:  [] ? cpuset_mems_allowed_intersects+0x21/0x30
May 05 09:05:40 HOST kernel:  [] ? oom_unkillable_task+0xcd/0x120
May 05 09:05:42 HOST kernel:  [] ? find_lock_task_mm+0x56/0xc0
May 05 09:05:42 HOST kernel:  [] out_of_memory+0x4b6/0x4f0
May 05 09:05:42 HOST kernel:  [] __alloc_pages_slowpath+0x5db/0x729
May 05 09:05:42 HOST kernel:  [] __alloc_pages_nodemask+0x436/0x450
May 05 09:05:42 HOST kernel:  [] alloc_pages_current+0x98/0x110
May 05 09:05:42 HOST kernel:  [] __page_cache_alloc+0x97/0xb0
May 05 09:05:42 HOST kernel:  [] filemap_fault+0x270/0x420
May 05 09:05:42 HOST kernel:  [] ext4_filemap_fault+0x36/0x50 [ext4]
May 05 09:05:42 HOST kernel:  [] __do_fault.isra.61+0x8a/0x100
May 05 09:05:42 HOST kernel:  [] do_read_fault.isra.63+0x4c/0x1b0
May 05 09:05:42 HOST kernel:  [] handle_mm_fault+0xa20/0xfb0
May 05 09:05:42 HOST kernel:  [] __do_page_fault+0x213/0x500
May 05 09:05:42 HOST kernel:  [] trace_do_page_fault+0x56/0x150
May 05 09:05:42 HOST kernel:  [] do_async_page_fault+0x22/0xf0
May 05 09:05:42 HOST kernel:  [] async_page_fault+0x28/0x30
May 05 09:05:42 HOST kernel: Mem-Info:
May 05 09:05:42 HOST kernel: active_anon:5382083 inactive_anon:514069 isolated_anon:0
 active_file:653 inactive_file:412 isolated_file:75
 unevictable:0 dirty:0 writeback:0 unstable:0
 slab_reclaimable:120624 slab_unreclaimable:14538
 mapped:814755 shmem:816586 pagetables:60496 bounce:0
 free:30218 free_pcp:562 free_cma:0

Can you tell me how to find the problematic query? Or how to tune the
configuration so the DB stays alive and lets us find the problematic query?

-- 

Regards,
Piotr Włodarczyk


Re: OOM Killer kills PostgreSQL

2020-05-20 Thread Laurenz Albe
On Wed, 2020-05-20 at 09:30 +0200, Piotr Włodarczyk wrote:
> We experienced an unexpected PostgreSQL shutdown. After a little investigation
> we discovered that the problem is the OOM killer, which kills our PostgreSQL.
> Unfortunately we can't find the query on the DB causing this problem. The log is below:

Is there nothing in the PostgreSQL log?

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com


Re: OOM Killer kills PostgreSQL

2020-05-20 Thread Piotr Włodarczyk
Nothing special. I'll check it again after the next crash.

On Wed, May 20, 2020 at 10:22 AM Laurenz Albe wrote:

> On Wed, 2020-05-20 at 09:30 +0200, Piotr Włodarczyk wrote:
> > We experienced an unexpected PostgreSQL shutdown. After a little investigation
> > we discovered that the problem is the OOM killer, which kills our PostgreSQL.
> > Unfortunately we can't find the query on the DB causing this problem. The log
> > is below:
>
> Is there nothing in the PostgreSQL log?
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>
>

-- 

Regards,
Piotr Włodarczyk


Re: OOM Killer kills PostgreSQL

2020-05-20 Thread Fabio Pardi
Maybe your configured memory budget doesn't fit in the RAM on the machine?

The problem is not in the query you are looking for, but in the settings you 
are using for Postgres.
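
As a rough sketch (the real footprint also depends on temp buffers,
connection overhead, parallel workers, and so on), the budget to compare
against RAM is roughly shared_buffers + max_connections * work_mem +
maintenance_work_mem. You can pull the relevant settings with:

  SELECT name, setting, unit
    FROM pg_settings
   WHERE name IN ('shared_buffers', 'work_mem',
                  'maintenance_work_mem', 'max_connections');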

regards,

fabio pardi



On 20/05/2020 09:30, Piotr Włodarczyk wrote:
> Hi folks,
>
> We experienced an unexpected PostgreSQL shutdown. After a little investigation
> we discovered that the problem is the OOM killer, which kills our PostgreSQL.
> Unfortunately we can't find the query on the DB causing this problem. The log is below:
>
> [kernel OOM-killer log snipped; quoted in full in the original message above]
>
> Can you tell me how to find the problematic query? Or how to tune the
> configuration so the DB stays alive and lets us find the problematic query?
>
> -- 
>
> Regards,
> Piotr Włodarczyk



Re: OOM Killer kills PostgreSQL

2020-05-20 Thread Justin Pryzby
What Postgres version? What environment (RAM) and config?
https://wiki.postgresql.org/wiki/Server_Configuration

I think you can probably find more info in dmesg/syslog; probably a
line saying "OOM killed ..." showing which PID and its VSZ.
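
For example (assuming a standard syslog setup; log paths vary by distro):

  dmesg -T | grep -i -e oom -e 'killed process'
  grep -i 'killed process' /var/log/messages

The kernel's "Killed process ..." line shows the victim's PID, command
name, and its memory counters (total-vm, anon-rss, file-rss).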

Are you able to see some particular process continuously growing (like
in top or ps)?

Do you have full query logs enabled to help determine which PID/query
was involved? e.g.:
log_statement=all log_min_messages=info log_checkpoints=on
log_lock_waits=on log_temp_files=0
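
For example, in postgresql.conf (note that log_statement=all can be very
verbose on a busy server, so treat it as a temporary diagnostic setting):

  log_statement = 'all'
  log_min_messages = info
  log_checkpoints = on
  log_lock_waits = on
  log_temp_files = 0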



On Wed, May 20, 2020 at 2:31 AM Piotr Włodarczyk wrote:
>
> Hi folks,
>
> We experienced an unexpected PostgreSQL shutdown. After a little investigation
> we discovered that the problem is the OOM killer, which kills our PostgreSQL.
> Unfortunately we can't find the query on the DB causing this problem. The log is below:
>
> [kernel OOM-killer log snipped; quoted in full in the original message above]
>
> Can you tell me how to find the problematic query? Or how to tune the
> configuration so the DB stays alive and lets us find the problematic query?
>
> --
>
> Regards,
> Piotr Włodarczyk


Re: OOM Killer kills PostgreSQL

2020-05-20 Thread Stephen Frost
Greetings,

* Piotr Włodarczyk (piotrwlodarczy...@gmail.com) wrote:
> We experienced an unexpected PostgreSQL shutdown. After a little investigation
> we discovered that the problem is the OOM killer, which kills our PostgreSQL.

You need to configure your system to not overcommit.

Read up on overcommit_ratio and overcommit_memory Linux settings.
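
For example (a sketch; 80 is just an example value, pick the ratio to fit
your RAM/swap mix per the kernel's overcommit-accounting documentation):

  # /etc/sysctl.conf
  vm.overcommit_memory = 2
  vm.overcommit_ratio = 80

  # apply without a reboot
  sysctl -p

With overcommit_memory=2 the kernel refuses allocations up front instead
of OOM-killing later, so an oversized query fails with a normal "out of
memory" error that lands in the PostgreSQL log together with the statement.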

Thanks,

Stephen


Suggestion to improve query performance.

2020-05-20 Thread postgann2020 s
Hi Team,

Thanks for your support.

We are using the environment below:

Application:
Programming language: Java
GeoServer

Database stack:
PostgreSQL: 9.5.15
PostGIS

We have 3 GeoServer queries and are seeing some performance issues after
changing them. I have posted the queries and explain plans for both the
old and new queries.

The same type of issue was found for all 3 queries:
1. The plan changed from an index scan to a bitmap scan.
2. All new queries check the condition again.



Old Query:

>>

Query No: 1

1. No issue while executing the query.
2. It fetches only 38 rows.

===

EXPLAIN ANALYZE
SELECT "underground_route_id", "ug_route_sub_type", "sw_uid22",
       encode(ST_AsBinary(ST_Simplify(ST_Force2D("the_geom"), 1.506687768824122E-5, true)), 'base64') AS "the_geom"
FROM "schema"."underground_route"
WHERE ("the_geom" && ST_GeomFromText('POLYGON ((77.20637798309326 28.627887618687176, 77.20637798309326 28.632784466413323, 77.21195697784424 28.632784466413323, 77.21195697784424 28.627887618687176, 77.20637798309326 28.627887618687176))', 4326)
  AND (("ug_route_sub_type" = 'IP1-IRU-Intercity' AND "ug_route_sub_type" IS NOT NULL)
    OR ("ug_route_sub_type" = 'IP1-IRU-Intracity' AND "ug_route_sub_type" IS NOT NULL)
    OR ("ug_route_sub_type" = 'IRU-Intracity' AND "ug_route_sub_type" IS NOT NULL)
    OR ("ug_route_sub_type" = 'IRU-Intercity' AND "ug_route_sub_type" IS NOT NULL)
    OR ("ug_route_sub_type" = 'IP1-Own-Intercity' AND "ug_route_sub_type" IS NOT NULL)
    OR ("ug_route_sub_type" = 'IP1-Own-Intracity' AND "ug_route_sub_type" IS NOT NULL)
    OR ("ug_route_sub_type" = 'Own-Intracity' AND "ug_route_sub_type" IS NOT NULL)
    OR ("ug_route_sub_type" = 'Own-Intercity' AND "ug_route_sub_type" IS NOT NULL)
    OR ("ug_route_sub_type" = 'Own-Intercity-Patch-replacement' AND "ug_route_sub_type" IS NOT NULL)
    OR ("ug_route_sub_type" = 'Own-Intracity-Patch-replacement' AND "ug_route_sub_type" IS NOT NULL)
    OR ("ug_route_sub_type" = 'Clamping' AND "ug_route_sub_type" IS NOT NULL)
    OR ("ug_route_sub_type" = 'None' AND "ug_route_sub_type" IS NOT NULL)
    OR ("ug_route_sub_type" = 'On kerb' AND "ug_route_sub_type" IS NOT NULL)
    OR ("ug_route_sub_type" = 'Other' AND "ug_route_sub_type" IS NOT NULL)
    OR ("ug_route_sub_type" = 'Suspend' AND "ug_route_sub_type" IS NOT NULL)
    OR ("ug_route_sub_type" = 'In Duct Chamber' AND "ug_route_sub_type" IS NOT NULL)
    OR ("ug_route_sub_type" = '' AND "ug_route_sub_type" IS NOT NULL)
    OR "ug_route_sub_type" IS NULL
    OR ("sw_uid22" = 'Overhead' AND "sw_uid22" IS NOT NULL AND "ug_route_sub_type" = 'Own-Intercity' AND "ug_route_sub_type" IS NOT NULL)
    OR ("sw_uid22" = 'Overhead' AND "sw_uid22" IS NOT NULL AND "ug_route_sub_type" = 'Own-Intracity' AND "ug_route_sub_type" IS NOT NULL)
    OR ("sw_uid22" = 'Overhead' AND "sw_uid22" IS NOT NULL AND "ug_route_sub_type" = 'Own-Intercity-Patch-replacement' AND "ug_route_sub_type" IS NOT NULL)
    OR ("sw_uid22" = 'Overhead' AND "sw_uid22" IS NOT NULL AND "ug_route_sub_type" = 'Own-Intracity-Patch-replacement' AND "ug_route_sub_type" IS NOT NULL)));

 Explain Plan:

 Index Scan using underground_route_the_geom_geo_idx on underground_route
   (cost=0.41..41.20 rows=7 width=157) (actual time=0.158..1.010 rows=38 loops=1)
   Index Cond: (the_geom && '010320E6100100054C354D534022DEBDA03C40004C354D53407BA9AC29FEA13C4000B4904D53407BA9AC29FEA13C4000B4904D534022DEBDA03C40004C354D534022DEBDA03C40'::geometry)
   Filter: ((((ug_route_sub_type)::text = 'IP1-IRU-Intercity'::text) AND (ug_route_sub_type IS NOT NULL)) OR
           (((ug_route_sub_type)::text = 'IP1-IRU-Intracity'::text) AND (ug_route_sub_type IS NOT NULL)) OR
           (((ug_route_sub_type)::text = 'IRU-Intracity'::text) AND (ug_route_sub_type IS NOT NULL)) OR
           (((ug_route_sub_type)::text = 'IRU-Intercity'::text) AND (ug_route_sub_type IS NOT NULL)) OR
           (((ug_route_sub_type)::text = 'IP1-Own-Intercity'::text) AND (ug_route_sub_type IS NOT NULL)) OR
           (((ug_route_sub_type)::text = 'IP1-Own-Intracity'::text) AND (ug_route_sub_type IS NOT NULL)) OR
           (((ug_route_sub_type)::text = 'Own-Intracity'::text) AND (ug_route_sub_type IS NOT NULL)) OR
           (((ug_route_sub_type)::text = 'Own-Intercity'::text) AND (ug_route_sub_type IS NOT NULL)) OR
           (((ug_route_sub_type)::text = 'Own-Intercity-Patch-replacement'::text) AND (ug_route_sub_type IS NOT NULL)) OR
           (((ug_route_sub_type)::text = 'Own-Intracity-Patch-replacement'::text) AND (ug_route_sub_type IS NOT NULL)) OR
           (((ug_route_sub_type)::text = 'Clamping'::text) AND (ug_route_sub_type IS NOT NULL)) OR
           (((ug_route_sub_type)::text = 'None'::text) AND (ug_route_sub_type IS NOT NULL)) OR
           (((ug_route_sub_type)::text =