Re: Very slow Query compared to Oracle / SQL - Server

2021-05-06 Thread luis . roberto


- Original message -
> From: "Semen Yefimenko" 
> To: "pgsql-performance" 
> Sent: Thursday, May 6, 2021 11:38:39
> Subject: Very slow Query compared to Oracle / SQL - Server


> SELECT column1,..., column54 where ((entrytype = 4000 or entrytype = 4001 or
> entrytype = 4002) and (archivestatus <= 1)) order by timestampcol desc;
 

The first thing I would try is rewriting the query to:

SELECT column1,..., column54 
  FROM logtable
 WHERE entrytype IN (4000, 4001, 4002)
   AND archivestatus <= 1
 ORDER BY timestampcol DESC;

Check if that makes a difference...
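If rewriting alone doesn't help, a partial index matching the filter might be worth a try (a sketch only; the index name is made up, and I'm assuming the filter is selective enough to justify it):

```sql
-- Hypothetical index: matches the WHERE clause exactly and stores rows
-- in timestampcol order, so the planner can skip the sort entirely.
CREATE INDEX idx_logtable_type_ts
    ON logtable (timestampcol DESC)
 WHERE entrytype IN (4000, 4001, 4002) AND archivestatus <= 1;
```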

Luis R. Weck 




Order of execution

2021-04-27 Thread luis . roberto
Hi!

My question is: is it possible to optimize the execution order of function quals?

Here's explanation:

I have a bunch of queries that have volatile quals, some more than one. For 
example:

SELECT *
  FROM clients
 WHERE some_func(client_id)
   AND some_other_func(client_id) 

Now, I know that having volatile function quals is not a good practice, but 
alas, it is what it is.

In this contrived example, some_func filters about 50% of clients, whereas 
some_other_func around 5%.

If PostgreSQL executed them serially, some_other_func would only run for the 
50% of clients that pass some_func, cutting execution time. What I've seen 
instead is that the volatile functions are always executed.

Is the reason this happens that a volatile function could modify the results 
seen by the outer query?
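For reference, the workaround I have been experimenting with (a sketch; the COST values are made up, the signatures are hypothetical, and I'm assuming the planner orders quals from cheapest to most expensive):

```sql
-- Assumption: the planner evaluates cheaper quals first, so giving
-- some_func a lower estimated per-call cost should make it run before
-- some_other_func, which then only runs for rows that passed some_func.
ALTER FUNCTION some_func(bigint) COST 100;
ALTER FUNCTION some_other_func(bigint) COST 10000;
```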

Luis R. Weck 




Re: Performance issue when we use policies for Row Level Security along with functions

2020-09-16 Thread luis . roberto



From: "Gopisetty, Ramesh"  
To: "pgsql-performance"  
Sent: Wednesday, September 16, 2020 0:39:08 
Subject: Performance issue when we use policies for Row Level Security along 
with functions 






Hi, 

I'm seeing a strange behavior when we implement policies (for RLS - Row level 
security) using functions. 

table test consists of columns testkey,oid,category,type,description... 

Policy 

create policy policy_sel on test FOR SELECT to ram1 USING ( testkey in 
(f_sel_policy_test(testkey)) ); 

It does a sequential scan instead of an index scan; hence, a performance issue. 

pgwfc01q=> explain analyze select * from test; 
                                             QUERY PLAN 
---------------------------------------------------------------------------------------------------------
 Seq Scan on test  (cost=0.00..25713.12 rows=445 width=712) (actual time=1849.592..1849.592 rows=0 loops=1) 
   Filter: ((testkey)::text = (f_sel_policy_test(testkey))::text) 
   Rows Removed by Filter: 88930 
 Planning Time: 0.414 ms 
 Execution Time: 1849.614 ms 
(5 rows) 


The function is 

CREATE OR REPLACE FUNCTION vpd_sec_usr.f_sel_policy_test(testkey character varying) 
 RETURNS character varying 
 LANGUAGE plpgsql 
AS $function$ 
declare 
  v_status character varying; 
begin 
  if vpd_key = 'COMMON' then 
    return '''COMMON'''; 
  elsif vpd_key = ('COMMON_' || SYS_CONTEXT('ctx_ng', 'ctx_prod_locale')) then 
    return '''COMMON_' || SYS_CONTEXT('ctx_ng', 'ctx_prod_locale') || ''''; 
  elsif vpd_key = SYS_CONTEXT('ctx_ng_vpd', 'ctx_key_fil') then 
    return '''co'',''' || SYS_CONTEXT('ctx_ng', 'ctx_testkey_fil') || ''''; 
  end if; 
  return 'false'; 
exception when undefined_object then 
  return 'failed'; 
end; 
$function$; 


If I replace the policy with a straightforward one, without the function, then 
it chooses the index: 

create policy policy_sel on test FOR SELECT to ram1 USING ( testkey in 
('COMMON',current_setting('ctx_ng'||'.'||'ctx_key_fil'))); 

                                                 QUERY PLAN 
------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on test  (cost=396.66..2966.60 rows=13396 width=712) (actual time=0.693..2.318 rows=13159 loops=1) 
   Recheck Cond: ((testkey)::text = ANY ((ARRAY['COMMON'::character varying, (current_setting('ctx_vpd.ctx_key_fil'::text))::character varying])::text[])) 
   Heap Blocks: exact=373 
   ->  Bitmap Index Scan on test_pkey  (cost=0.00..393.31 rows=13396 width=0) (actual time=0.653..0.653 rows=13159 loops=1) 
         Index Cond: ((testkey)::text = ANY ((ARRAY['COMMON'::character varying, (current_setting('ctx_vpd.ctx_key_fil'::text))::character varying])::text[])) 
 Planning Time: 0.136 ms 
 Execution Time: 2.843 ms 
(7 rows) 


I thought of creating the policy with a lot of business logic in the function, 
but whenever the function is involved I see a full table scan instead of an 
index scan. 

Please let me know if I am missing anything in how the function is written, or 
in how to use functions in a policy. 

Thank you. 


Regards, 
Ramesh G 




You could try setting the function as IMMUTABLE; by default it is VOLATILE. 
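A sketch of what I mean (note: since the function reads session settings via current_setting/SYS_CONTEXT, STABLE is arguably the more honest label; either way, the point is to get it out of VOLATILE so the planner can evaluate it once per query instead of once per row):

```sql
-- Assumes the function signature from your message.
ALTER FUNCTION vpd_sec_usr.f_sel_policy_test(character varying) STABLE;
```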





Re: Postgresql server gets stuck at low load

2020-06-05 Thread luis . roberto




From: "Krzysztof Olszewski"  
To: pgsql-performance@lists.postgresql.org 
Sent: Friday, June 5, 2020 7:07:02 
Subject: Postgresql server gets stuck at low load 






I have a problem with one of my Postgres production servers. The server almost 
always works fine, but sometimes, without any increase in transaction or 
statement volume, the machine gets stuck. Cores go up to 100%, load up to 160. 
When it happens there are problems connecting to the database, and even when a 
connection succeeds, simple queries take several seconds instead of 
milliseconds. The problem sometimes stops after a period of time (e.g. 35 
min); sometimes we must restart Postgres, Linux, or even KVM (the 
virtualization host). 

My hardware: 
56 cores (Intel Core Processor (Skylake, IBRS)) 
400 GB RAM 
RAID10 with about 40k IOPS 

OS: 
CentOS Linux release 7.7.1908 
kernel 3.10.0-1062.18.1.el7.x86_64 

Database: 
size 100 GB (entirely fits in memory :) ) 
server_version 10.12 
effective_cache_size 192000 MB 
maintenance_work_mem 2048 MB 
max_connections 150 
shared_buffers 64000 MB 
work_mem 96 MB 

In the normal state I have about 500 tps, 5% usage of cores, about 3% load; 
the whole database fits in memory, with no reads from disk and writes at 
about 500 IOPS, sometimes spiking to 1500 IOPS, but on this hardware those 
values are no problem (no iowait on cores). In the normal state this machine 
does "nothing". 

Connections to the database are created by two Java app servers through 
connection pools, so the connection count is limited by the pool 
configuration to a maximum of 120, lower than the Postgres limit (150). In 
the normal state there are about 20 connections; when the server gets stuck 
they go to the maximum (120). 

In correlation with the stucks I see messages in the kernel log like: 

NMI watchdog: BUG: soft lockup - CPU#25 stuck for 23s! [postmaster:33935] 

but I don't know whether this is a cause or an effect of the problem. I 
investigated with pgBadger and ... nothing strange happens, just normal 
statements. Any ideas? Thanks, 
Kris 


Hi Krzysztof! 

I would enable the pg_stat_statements extension and check whether there are 
long-running queries that should be quick. 
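Something along these lines (a sketch; pg_stat_statements must be added to shared_preload_libraries, which requires a restart, and on 10.x the timing columns are total_time/mean_time, in milliseconds):

```sql
-- In postgresql.conf (requires a server restart):
--   shared_preload_libraries = 'pg_stat_statements'

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- During the next incident, look at the statements with the worst
-- average runtime:
SELECT calls, mean_time, total_time, query
  FROM pg_stat_statements
 ORDER BY mean_time DESC
 LIMIT 20;
```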


Random function

2020-03-24 Thread Luis Roberto Weck

Hi,

I am trying to generate some random data using the random() function.

However, I am getting the same result over multiple rows. This is a 
sample of the SQL I am using:


select (select string_agg(random()::text,';')
          from pg_catalog.generate_series(1,3,1) )
  from generate_series(1,10,1)

And I am getting something like:

|string_agg                                                |
+----------------------------------------------------------+
|0.243969671428203583;0.69257879443434;0.291524752043187618|
|0.243969671428203583;0.69257879443434;0.291524752043187618|
|0.243969671428203583;0.69257879443434;0.291524752043187618|
|0.243969671428203583;0.69257879443434;0.291524752043187618|
|0.243969671428203583;0.69257879443434;0.291524752043187618|
|0.243969671428203583;0.69257879443434;0.291524752043187618|
|0.243969671428203583;0.69257879443434;0.291524752043187618|
|0.243969671428203583;0.69257879443434;0.291524752043187618|
|0.243969671428203583;0.69257879443434;0.291524752043187618|
|0.243969671428203583;0.69257879443434;0.291524752043187618|

If this is the expected output, is there a way to generate new random 
numbers for every row?
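A sketch of the usual workaround (assuming the goal is three fresh random values per outer row): the inner subquery above does not reference the outer query, so it is evaluated only once as an uncorrelated init-plan; making it reference the outer row forces re-evaluation per row:

```sql
-- The WHERE t.n = t.n clause does nothing logically, but it makes the
-- subquery correlated with the outer row, so it is re-run for each of
-- the 10 outer rows instead of once for the whole query.
SELECT (SELECT string_agg(random()::text, ';')
          FROM generate_series(1, 3)
         WHERE t.n = t.n)
  FROM generate_series(1, 10) AS t(n);
```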