Re: Terribly slow query with very good plan?

2022-02-04 Thread Thomas Kellerer
Les wrote on 04.02.2022 at 10:11:

> My task is to write a query that tells if a folder has any active file inside 
> it - directly or in subfolders. Here is the query for that:
>
> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
>
> select id, title,
> (exists (select f2.id  from media.oo_file f2 where f2.relpath 
> like f.relpath || '%')) as has_file
> from media.oo_folder f where f.parent_id is null
>
> QUERY PLAN
> ------------------------------------------------------------------------------------------------------------------------------------------------------------
> Index Scan using oo_folder_idx_parent on media.oo_folder f  (cost=0.42..488.02 rows=20 width=26) (actual time=713.419..25414.969 rows=45 loops=1)
>   Output: f.id, f.title, (SubPlan 1)
>   Index Cond: (f.parent_id IS NULL)
>   Buffers: shared hit=7014170
>   SubPlan 1
>     ->  Index Only Scan using oo_file_idx_relpath on media.oo_file f2  (cost=0.55..108499.27 rows=5381 width=0) (actual time=564.756..564.756 rows=0 loops=45)
>           Filter: (f2.relpath ~~ (f.relpath || '%'::text))
>           Rows Removed by Filter: 792025
>           Heap Fetches: 768960
>           Buffers: shared hit=7014130

In addition to the collation tweaks, I wonder if using a lateral join might 
result in a more efficient plan:

select id, title, c.id is not null as has_path
from media.oo_folder f
  left join lateral (
select f2.id
from media.oo_file f2
where f2.relpath like f.relpath || '%'
limit 1
  ) c on true
where f.parent_id is null





Query runs slower as prepared statement - identical execution plans

2022-01-25 Thread Thomas Kellerer
Hello,

I have a strange case of a query that runs substantially slower when run as a
Java PreparedStatement with placeholders, compared to using constant values in
the SQL string.

In my experience, the reason for this is usually a different execution plan for 
the
prepared statement.

However, in this case the plans are identical, but the prepared statement runs
substantially slower than the "non-prepared" one: 1800ms to 2000ms vs. 250ms to 350ms.

I can't disclose the query, but the basic structure is this:

   select ...
   from some_table
   where jsonb_column #>> $1 = ANY ($2)
     and some_uuid_column = ANY (...)

For various reasons the list of values for the some_uuid_column = ANY(..) 
condition
is always passed as constant values.

The plan is quite reasonable, using a Bitmap Heap Scan on "some_uuid_column" in
both cases.

I uploaded the (anonymized) plans to explain.depesz:

Fast execution: https://explain.depesz.com/s/QyFR
Slow execution: https://explain.depesz.com/s/mcQz

The "prepared" plan was created using psql, not through JDBC:
PREPARE p1(text,text) AS ...

EXPLAIN (analyze, buffers, timing, verbose)
EXECUTE p1 ('{...}', '{}')


But the runtime is pretty much what I see when doing this through Java.
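
For reference, a self-contained toy version of that experiment would look like the
following (the table, column and parameter values are made up, and the parameters
are declared as text[] so that the toy query type-checks; the real query is not shown):

    CREATE TABLE some_table
    (
       some_uuid_column uuid,
       jsonb_column     jsonb
    );

    PREPARE p1(text[], text[]) AS
    SELECT *
    FROM some_table
    WHERE jsonb_column #>> $1 = ANY ($2)
      AND some_uuid_column = ANY ('{}'::uuid[]);

    EXPLAIN (ANALYZE, BUFFERS, TIMING, VERBOSE)
    EXECUTE p1 ('{a,b}', '{x,y}');

    DEALLOCATE p1;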

My question is: why is processing the query through a prepared statement so 
much slower?

This happens on a test system running Postgres 13.2 on CentOS, and another test 
system
running 13.5 on Ubuntu.

For the time being, we can switch off the use of a PreparedStatement, but I'm also
interested in knowing the underlying root cause.

Any ideas?




Re: Out of memory error

2021-11-24 Thread Thomas Kellerer
aditya desai wrote on 24.11.2021 at 08:31:
> Hi Michael,
> Please see the insert_info function below. Also, r_log_message is a composite data
> type and its definition is also given below.
>
> CREATE OR REPLACE FUNCTION insert_info(
>     info_array  r_log_message[]
> ) RETURNS varchar AS $$
>     DECLARE
>         info_element  r_log_message;
>     BEGIN
>         FOREACH info_element IN ARRAY info_array
>         LOOP
>             INSERT INTO testaditya(
>                 columname,
>                 oldvalue,
>                 newvalue
>             ) VALUES(
>                 info_element.column_name,
>                 info_element.oldvalue,
>                 info_element.newvalue
>             );
>         END LOOP;
>         RETURN 'OK';
>     END;
> $$ LANGUAGE plpgsql;

You don't need a loop for that. This can be done more efficiently using unnest()


INSERT INTO testaditya(columname,oldvalue,newvalue)
select u.*
from unnest(info_array) as u;
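
A call to the function with a hand-built array (the values are made up for
illustration) would then look like this:

    SELECT insert_info(ARRAY[
        ('empName',  'John',   'Jane')::r_log_message,
        ('location', 'Berlin', 'Munich')::r_log_message
    ]);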







Re: Out of memory error

2021-11-24 Thread Thomas Kellerer
aditya desai wrote on 24.11.2021 at 08:35:
> Hi Thomas,
> v_message is of the composite data type r_log_message and its definition is as
> shown below.
>
> postgres=# \d r_log_message;
>                Composite type "public.r_log_message"
>    Column    |          Type           | Collation | Nullable | Default
> -------------+-------------------------+-----------+----------+---------
>  column_name | character varying(30)   |           |          |
>  oldvalue    | character varying(4000) |           |          |
>  newvalue    | character varying(4000) |           |          |
>
> Regards,
> Aditya.

Sorry, didn't see that.

Then you need to create records of that type in the array:

   v_message := array[('empName', old.empname, new.empname)::r_log_message,
                      ('location', old.location, new.location)::r_log_message];

or an array of that type:

   v_message := array[('empName', old.empname, new.empname),
                      ('location', old.location, new.location)]::r_log_message[];


Btw: why don't you use `text` instead of varchar(4000)?




Re: Out of memory error

2021-11-23 Thread Thomas Kellerer
aditya desai wrote on 24.11.2021 at 07:25:
> Thanks Tom. However I could not find any solution to achieve the given 
> requirement. I have to take all values in the temp table and assign it to an 
> array variable to pass it to the audit procedure as shown below. Can you 
> please advise ? 
>
> CREATE OR REPLACE FUNCTION call_insert_info(
>     
> ) RETURNS void AS $$
>     DECLARE
>         v_message r_log_message[];
> OLDVALUE1 varchar(4000);
>     BEGIN
>             drop table if exists changedinfo
>     create temp table changedinfo(colName varchar(100), oldValue 
> varchar(4000), newValue varchar(4000));
>             insert into changed infot select 'empName', OLD.empName, 
> NEW.empName from employee;
>             insert into changed infot select 'location', OLD.location, 
> NEW.location from employee;
>             
>         
> v_message:=   array(select '(' || columname || ',' || oldvalue || ',' || 
> newvalue ||')' from changedinfo);
>         perform insert_info(v_message);
>         raise notice '%',v_message;
>     END;
> $$ LANGUAGE plpgsql;


You don't need a temp table for that. You can create the array directly from 
the new and old records:

v_message := array[concat_ws(',', 'empName', old.empname, new.empname),
                   concat_ws(',', 'location', old.location, new.location)];

Although nowadays I would probably pass such a "structure" as JSON, not as a
comma-separated list.
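
A minimal sketch of that JSON variant, reusing the column names from this thread
(the variable name is made up, and the receiving function would then take jsonb
instead of r_log_message[]):

    v_message_json := jsonb_build_array(
        jsonb_build_object('column_name', 'empName',  'oldvalue', old.empname,  'newvalue', new.empname),
        jsonb_build_object('column_name', 'location', 'oldvalue', old.location, 'newvalue', new.location)
    );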





Re: Better, consistent instrumentation for postgreSQL using a similar API as Oracle

2021-10-08 Thread Thomas Kellerer

Bruce Momjian wrote on 08.10.2021 at 17:40:

I guess everyone will use that information in a different way.

We typically use the AWR reports as a post-mortem analysis tool if
something goes wrong in our application (=customer specific projects)

E.g. if there was a slowdown "last monday" or "saving something took minutes 
yesterday morning",
then we usually request an AWR report from the time span in question. Quite 
frequently
this already reveals the culprit. If not, we ask them to poke in more detail 
into v$session_history.

So in our case it's not really used for active monitoring, but for
finding the root cause after the fact.

I don't know how representative this usage is though.


OK, that's a good usecase, and something that certainly would apply to
Postgres.  Don't you often need more than just wait events to find the
cause, like system memory usage, total I/O, etc?


Yes, the AWR report contains that information as well: e.g. sorts that spilled
to disk, shared memory at the start and end, top 10 statements sorted by
total time, individual time, I/O, number of executions, segments (tables)
that received the highest I/O (read and write), and so on.
It's really huge.






Re: Better, consistent instrumentation for postgreSQL using a similar API as Oracle

2021-10-08 Thread Thomas Kellerer

Bruce Momjian wrote on 08.10.2021 at 17:21:

However, I also need to ask how the wait event information, whether
tracing or sampling, can be useful for Postgres because that will drive
the solution.


I guess everyone will use that information in a different way.

We typically use the AWR reports as a post-mortem analysis tool if
something goes wrong in our application (=customer specific projects)

E.g. if there was a slowdown "last monday" or "saving something took minutes 
yesterday morning",
then we usually request an AWR report from the time span in question. Quite 
frequently
this already reveals the culprit. If not, we ask them to poke in more detail 
into v$session_history.

So in our case it's not really used for active monitoring, but for
finding the root cause after the fact.

I don't know how representative this usage is though.

Thomas





Re: Performance benchmark of PG

2021-07-19 Thread Thomas Kellerer
Manish Lad wrote on 19.07.2021 at 12:09:
> We are planning to migrate Oracle exadata database to postgresql and
> db size ranges from 1 tb to 60 TB.
>
> Will PG support this with performance matching that of an Exadata
> appliance? If anyone could point me in the right direction where I can
> get the benchmarking done for these two databases, either on-prem or in
> any cloud, that would be great.


As already pointed out, you won't find such a benchmark.

You will have to run such a benchmark yourself. Ideally with a workload
that represents your use case. Or maybe with something like HammerDB.

But Exadata isn't only software, it's also hardware especially designed
to work together with Oracle's enterprise edition.

So if you run your own tests and want any reasonable results, make sure you buy
comparable hardware for Postgres as well, i.e. something that matches the Exadata
HW specifications (lots of RAM and many fast server-grade NVMes).





Re: Partition column should be part of PK

2021-07-12 Thread Thomas Kellerer


David Rowley wrote on 12.07.2021 at 02:57:
> Generally, there's not all that much consensus in the community that
> this would be a good feature to have.  Why do people want to use
> partitioning?  Many people do it so that they can quickly remove data
> that's no longer required with a simple DETACH operation.  This is
> metadata only and is generally very fast.  Another set of people
> partition as their tables are very large and they become much easier
> to manage when broken down into parts.  There's also a group of people
> who do it for the improved data locality.   Unfortunately, if we had a
> global index feature then that requires building a single index over
> all partitions.  DETACH is no longer a metadata-only operation as we
> must somehow invalidate or remove tuples that belong to the detached
> partition. The group of people who partitioned to get away from very
> large tables now have a very large index.  Maybe the only group to get
> off lightly here are the data locality group. They'll still have the
> same data locality on the heap.
>
> So in short, many of the benefits of partitioning disappear when you
> have a global index.

The situations where this is useful are large tables where partitioning
would turn Seq Scans of the whole table into Seq Scans of a partition,
or where it would allow for partition wise joins and still have
foreign keys referencing the partitioned table.

I agree they do have downsides. Oracle is the only system I know of where this
is possible, and there global indexes are generally avoided, but there are
still situations where they are useful, e.g. if you want foreign keys
referencing your partitioned table and including the partition key in the
primary key makes no sense.

Even though they have disadvantages, I think it would be nice to
have the option to create them.

I know that in the Oracle world they are seldom used (precisely because of the
disadvantages you mentioned), but they do have a place.

Thomas




Re: Potential performance issues

2021-03-01 Thread Thomas Kellerer
Jung, Jinho wrote on 28.02.2021 at 16:04:
> # Performance issues discovered from differential test
>
> For example, the query below runs 1000x slower on PostgreSQL than on other
> DBMSs.
>
>     select ref_0.ol_amount as c0
>     from order_line as ref_0
>         left join stock as ref_1
>           on (ref_0.ol_o_id = ref_1.s_w_id )
>         inner join warehouse as ref_2
>         on (ref_1.s_dist_09 is NULL)
>     where ref_2.w_tax is NULL;

I find this query extremely weird to be honest.

There is no join condition between warehouse and the other two tables, which
results in a cross join that is "reduced" somehow by applying the IS NULL
conditions - but still, to me this makes no sense.

Maybe the Postgres optimizer doesn't handle this ugly "join condition" the same 
way the others do.

I would rather expect a NOT EXISTS against the warehouse table.

Thomas




Re: Query performance issue

2020-09-04 Thread Thomas Kellerer

Nagaraj Raj wrote on 04.09.2020 at 23:18:

I have a query that runs on the DB very often, and it is very slow because it
is doing a 'seqscan'. I was trying to optimize it by adding indexes in
different ways, but nothing helps.

EXPLAIN ANALYZE select serial_no,receivingplant,sku,r3_eventtime
from (select serial_no,receivingplant,sku,eventtime as r3_eventtime, 
row_number() over (partition by serial_no order by eventtime desc) as mpos
from receiving_item_delivered_received
where eventtype='LineItemdetailsReceived'
and replenishmenttype = 'DC2SWARRANTY'
and coalesce(serial_no,'') <> ''
) Rec where mpos = 1;


Query Planner:

"Subquery Scan on rec  (cost=70835.30..82275.49 rows=1760 width=39) (actual 
time=2322.999..3451.783 rows=333451 loops=1)"
"  Filter: (rec.mpos = 1)"
"  Rows Removed by Filter: 19900"
"  ->  WindowAgg  (cost=70835.30..77875.42 rows=352006 width=47) (actual 
time=2322.997..3414.384 rows=353351 loops=1)"
"        ->  Sort  (cost=70835.30..71715.31 rows=352006 width=39) (actual 
time=2322.983..3190.090 rows=353351 loops=1)"
"              Sort Key: receiving_item_delivered_received.serial_no, 
receiving_item_delivered_received.eventtime DESC"
"              Sort Method: external merge  Disk: 17424kB"
"              ->  Seq Scan on receiving_item_delivered_received  
(cost=0.00..28777.82 rows=352006 width=39) (actual time=0.011..184.677 rows=353351 
loops=1)"
"                    Filter: (((COALESCE(serial_no, ''::character varying))::text <> 
''::text) AND ((eventtype)::text = 'LineItemdetailsReceived'::text) AND 
((replenishmenttype)::text = 'DC2SWARRANTY'::text))"
"                    Rows Removed by Filter: 55953"
"Planning Time: 0.197 ms"
"Execution Time: 3466.985 ms"


The query retrieves nearly all rows from the table (353351 of 409304) and the Seq
Scan takes less than 200ms, so that's not your bottleneck.
Adding indexes won't change that.

The majority of the time is spent in the sort step, which is done on disk.
Try to increase work_mem until the "external merge" disappears and the sort is
done in memory.
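
For example (the value below is only a starting point and needs to be tuned for
your system and workload):

    set work_mem = '64MB';

then re-run the query and check whether the plan shows "Sort Method: quicksort"
instead of the external merge.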

Thomas




Re: PostgreSQL does not choose my indexes well

2020-04-23 Thread Thomas Kellerer
> CREATE INDEX idx_tabla_entidad
>     ON public.entidad USING btree
>     (cod_tabla ASC NULLS LAST);
>
> CREATE INDEX idx_entidad_tabla_4
>     ON public.entidad USING btree
>     (cod_entidad_tabla ASC NULLS LAST)
>     INCLUDE(cod_entidad, cod_tabla, cod_entidad_tabla)
>     WHERE cod_tabla::bigint = 4;
>
>
> SELECT count(*) from entidad;
> 34.413.354
>
> SELECT count(*) from entidad where cod_tabla = 4;
> 1.409.985
>
>
> explain (analyze, buffers, format text) select * from entidad where cod_tabla 
> = 4
> Index Scan using idx_tabla_entidad on entidad (cost=0.56..51121.41 
> rows=1405216 width=20) (actual time=0.037..242.609 rows=1409985 loops=1)
>   Index Cond: ((cod_tabla)::bigint = 4)
>   Buffers: shared hit=12839
> Planning Time: 0.158 ms
> Execution Time: 311.828 ms
>
>
> Why doesn't PostgreSQL use the index idx_entidad_tabla_4?

Because that index does not contain the column from the WHERE clause as an 
"indexed" column (only as an included column).
Plus: scanning idx_tabla_entidad is more efficient because that index is 
smaller.

Why do you think that idx_entidad_tabla_4 would be the better choice?

Thomas





Re: Postgres not using index on views

2020-04-07 Thread Thomas Kellerer
> RV>> It simply is the way the application stores the data. For Oracle
> we are storing in XML and JSON format; for Postgres, due to
> limitations of the XML API, we are storing in VARCHAR.

Why not use JSON in Postgres then?
Postgres' JSON functions are at least as powerful as Oracle's  (if not better 
in a lot of areas).

Would be interesting to see what XML function/feature from Oracle you can't 
replicate/migrate to Postgres.

Another option might be to upgrade to Postgres 12 and define those columns as
generated columns as part of the table, rather than a view.
Then you only pay the performance penalty of the extractValueJS() function when
you update the table, not for every select.
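
A sketch of what that could look like (the table and column names are made up,
and note that a generated column expression may only use functions declared
IMMUTABLE):

    alter table the_table
       add column extracted_value text
       generated always as (extractValueJS(varchar_column, 1, 2)) stored;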

Thomas




Re: Postgres not using index on views

2020-04-07 Thread Thomas Kellerer
Rick Vincent wrote on 07.04.2020 at 11:08:
> The function is defined as below, so no use of VOLATILE.

If you don't specify anything, the default is VOLATILE.

So your function *is* volatile.
 
> CREATE OR REPLACE FUNCTION extractValueJS (sVar text, nfm INTEGER, nvm 
> INTEGER)
> RETURNS VARCHAR as $$
> declare
> sRet text := '';
> nSize int := 0;
> retVal int := 0;
> cVar text[] := regexp_split_to_array(sVar,'');
> idx int := 1;
> nStart int := 0;
> nEnd int := 0;
> begin
> etc...
>     return sRet;
> end;
> $$ LANGUAGE plpgsql;

You haven't shown us your actual code, but if you can turn that into a 
"language sql" function (defined as immutable, or at least stable), I would 
expect it to be way more efficient.
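
As an illustration only (this is not the actual extractValueJS logic, which isn't
shown in full), a plpgsql function that merely wraps a single expression could be
rewritten like this:

    CREATE OR REPLACE FUNCTION extract_nth_part(s text, n integer)
    RETURNS text AS $$
        SELECT split_part(s, ',', n);
    $$ LANGUAGE sql IMMUTABLE;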

Thomas




Re: JOIN on partitions is very slow

2020-03-23 Thread Thomas Kellerer

Michael Lewis wrote on 23.03.2020 at 17:16:

Yes. I can tweak the query. Version of postgres is 9.5.15. I have
about 20 partitions for company_sale_account table. I do have an
index on company name.

I need to use DISTINCT as i need to remove the duplicates.


DISTINCT is a sign of improper joins most of the time in my
experience. Often, just changing to group by is faster


As none of the columns of the joined table are used, most probably
this should be re-written as an EXISTS condition.
Then neither GROUP BY nor DISTINCT is needed.
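
Roughly along these lines (the original query isn't shown, so the table and
column names below are guesses):

    select c.id, c.name
    from company c
    where exists (select 1
                  from company_sale_account csa
                  where csa.company_name = c.name);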







Re: Bad query plan when you add many OR conditions

2020-01-14 Thread Thomas Kellerer
Marco Colli wrote on 10.01.2020 at 02:11:
> I have a query on a large table that is very fast (0s):
> https://gist.github.com/collimarco/039412b4fe0dcf39955888f96eff29db#file-fast_query-txt
> 
> Basically the query matches the rows that have a tag1 OR tag2 OR tag3 OR tag4 
> OR tag5... 
> 
> However if you increase the number of OR at some point PostgreSQL makes the 
> bad decision to change its query plan! And the new plan makes the query 
> terribly slow:
> https://gist.github.com/collimarco/039412b4fe0dcf39955888f96eff29db#file-slow_query-txt
> 
> Instead of this (which is fast):
>   Bitmap Index Scan on index_subscriptions_on_project_id_and_tags
> It starts using this (which is slow):
>   Parallel Index Scan using index_subscriptions_on_project_id_and_created_at
> The choice seems quite stupid since it doesn't have the tags on the new 
> index... and indeed the query takes about 1 minute instead of a few 
> milliseconds. Here's a list of the available indexes:
> https://gist.github.com/collimarco/039412b4fe0dcf39955888f96eff29db#file-_indexes-txt
> 
> How can I encourage PostgreSQL to use the Bitmap Index Scan even when there 
> are many OR conditions? I have tried with VACUUM ANALYZE subscriptions but it 
> doesn't help.
> 
> Note: the query is generated dynamically by customers of a SaaS, so I don't 
> have full control on it

Can you replace the many ORs with a single "overlaps" comparison?

This

    (tags @> ARRAY['crt:2018_04']::varchar[])
 OR (tags @> ARRAY['crt:2018_05']::varchar[])
 OR (tags @> ARRAY['crt:2018_06']::varchar[])

is equivalent to

    tags && array['crt:2018_04','crt:2018_05','crt:2018_06', ...]

The && operator can make use of a GIN index, so maybe the query can then use
index_subscriptions_on_project_id_and_tags regardless of the number of elements.
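
Put together, the filter could then look like this (the surrounding query and the
project_id value are assumptions based on the linked gists):

    select count(*)
    from subscriptions
    where project_id = 123
      and tags && array['crt:2018_04','crt:2018_05','crt:2018_06']::varchar[];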



 






Re: Considerable performance downgrade of v11 and 12 on Windows

2019-11-29 Thread Thomas Kellerer
Eugene Podshivalov wrote on 29.11.2019 at 11:04:
> Imported ways data from a file and added a primary key.
> 
> SET synchronous_commit TO OFF;
> COPY ways FROM 'E:\ways.txt';

> ...
> COPY ways FROM PROGRAM 'cmd /c "type E:\ways.txt"';

Those two commands are not doing the same thing - the piping through the TYPE
command is most probably eating all the performance.






Surprising benchmark count(1) vs. count(*)

2019-09-19 Thread Thomas Kellerer
https://blog.jooq.org/2019/09/19/whats-faster-count-or-count1/

Is there a reason why count(*) seems to be faster? 





Re: Strange runtime partition pruning behaviour with 11.4

2019-08-05 Thread Thomas Kellerer
Tom Lane wrote on 03.08.2019 at 18:05:
> Yeah, I get the same plan with or without ANALYZE, too.  In this example,
> having the ANALYZE stats barely moves the rowcount estimates for
> foo_bar_baz at all, so it's not surprising that the plan doesn't change.
> (I do wonder how Thomas got a different outcome...)

I don't know why either ;) 

I am using a JDBC based SQL tool to run that - I don't know if that matters.

I just tried this script with Postgres 12 beta2 and there I do not get 
the initial plan with "never executed" (so the same behaviour as everybody
else seems to have).

If the reason why my initial plan is different than the "analyzed" plan 
lies in the configuration, I am happy to share my postgresql.conf if 
that is of any interest.

Thomas






Re: Strange runtime partition pruning behaviour with 11.4

2019-08-03 Thread Thomas Kellerer

it's possible to rewrite the query to:


test=# explain analyse select count(*) from foo_bar_baz as fbb where foo_id = 
(select foo_id from foo where foo_name = 'eeny');

I know, that's not a solution, but a workaround. :-(


Yes, I discovered that as well.

But I'm more confused (or concerned) by the fact that the (original) query 
works correctly *without* statistics.

Thomas







Strange runtime partition pruning behaviour with 11.4

2019-08-02 Thread Thomas Kellerer
I stumbled across this question on SO: 
https://stackoverflow.com/questions/56517852

Disregarding the part about Postgres 9.3, the example for Postgres 11 looks a 
bit confusing. 

There is a script to setup test data in that question: 

 start of script 

create table foo (
foo_id integer not null,
foo_name varchar(10),
constraint foo_pkey primary key (foo_id) 
);  

insert into foo
  (foo_id, foo_name) 
values
  (1, 'eeny'),
  (2, 'meeny'),
  (3, 'miny'),
  (4, 'moe'),
  (5, 'tiger'), 
  (6, 'toe');

create table foo_bar_baz (
foo_id integer not null,
bar_id integer not null,
baz    integer not null,
constraint foo_bar_baz_pkey primary key (foo_id, bar_id, baz),
constraint foo_bar_baz_fkey1 foreign key (foo_id)
references foo (foo_id)
) partition by range (foo_id) 
;

create table if not exists foo_bar_baz_0 partition of foo_bar_baz for 
values from (0) to (1);
create table if not exists foo_bar_baz_1 partition of foo_bar_baz for 
values from (1) to (2);
create table if not exists foo_bar_baz_2 partition of foo_bar_baz for 
values from (2) to (3);
create table if not exists foo_bar_baz_3 partition of foo_bar_baz for 
values from (3) to (4);
create table if not exists foo_bar_baz_4 partition of foo_bar_baz for 
values from (4) to (5);
create table if not exists foo_bar_baz_5 partition of foo_bar_baz for 
values from (5) to (6);

with foos_and_bars as (
select ((random() * 4) + 1)::int as foo_id, bar_id::int
from generate_series(0, 1499) as t(bar_id)
), bazzes as (
select baz::int
from generate_series(1, 1500) as t(baz)
)
insert into foo_bar_baz (foo_id, bar_id, baz) 
select foo_id, bar_id, baz 
from bazzes as bz 
  join foos_and_bars as fab on mod(bz.baz, fab.foo_id) = 0;

 end of script 

I see the some strange behaviour similar to to what is reported in the comments 
to that question: 

When I run the test query immediately after populating the tables with the 
sample data:

explain analyze 
select count(*) 
from foo_bar_baz as fbb 
  join foo on fbb.foo_id = foo.foo_id 
where foo.foo_name = 'eeny'

I do see an "Index Only Scan  (never executed)" in the plan for the 
irrelevant partitions: 

  https://explain.depesz.com/s/AqlE

However once I run "analyze foo_bar_baz" (or "vacuum analyze"), Postgres 
chooses to do a "Parallel Seq Scan" for each partition:

  https://explain.depesz.com/s/WwxE

Why does updating the statistics mess up (runtime) partition pruning? 


I played around with random_page_cost and that didn't change anything. 
I tried to create extended statistics on "foo(id, name)" so that the planner
would know that there is only one name per id. No change.

I saw the above behaviour when running this on Windows 10 (my Laptop) or CentOS 
7 (a test environment on a VM) 

On the CentOS server default_statistics_target is set to 100, on my laptop it 
is set to 1000

In both cases the Postgres version was 11.4

Any ideas? 

Thomas




Re: Pg10 : Client Configuration for Parallelism ?

2019-04-19 Thread Thomas Kellerer

laurent.decha...@orange.com wrote on 17.04.2019 at 16:33:

Hello Justin and thank you for your clues.

Finally I found that putting blank to the option that limits the
number of rows to retrieve (which is normal for this kind of tool)
allows PostgreSQL to parallelize the query.

On JDBC it seems this is equivalent to writing:
statement.setMaxRows(0);  // parallelism authorized, which is the default.

Thus in my basic JDBC program, if I add:
statement.setMaxRows(100);  // No parallelism allowed (at least in Pg10)

Thanks to all who were kind enough to help.


This isn't limited to Statement.setMaxRows().

If you use "LIMIT x" in your SQL query, the same thing happens.

Thomas





Re: Pg10 : Client Configuration for Parallelism ?

2019-04-17 Thread Thomas Kellerer
laurent.decha...@orange.com wrote on 17.04.2019 at 08:30:
> I am working on PostgreSQL 10.5 and I have a discrepancy between clients 
> regarding parallelism feature.
> 
> For a simple query (say a simple SELECT COUNT(*) FROM BIG_TABLE), I
> can see PostgreSQL use parallelism when the query is launched from
> psql or PgAdmin4. However the same query launched with DBeaver (ie
> connected through JDBC) does not use parallelism.
> 
> SELECT current_setting('max_parallel_workers_per_gather') gives 10
> from my session.
> 
> Is there a client configuration that prevents from using parallelism?

Maybe DBeaver wraps the statement for some reason? (I have seen SQL clients do
that.) A CTE, for example, would prevent parallelism.
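
The kind of wrapping meant here would be something like this (purely illustrative;
whether DBeaver actually does that is unknown, and the table name is taken from the
earlier example):

    with wrapped as (
       select count(*) from big_table
    )
    select * from wrapped;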





Bloom index cost model seems to be wrong

2019-02-12 Thread Thomas Kellerer
I stumbled upon this question:

https://dba.stackexchange.com/questions/229413

in a nutshell: the bloom index is not used with the example from the manual. 
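
(For reference, the setup of that example is roughly the following - quoted from
memory, see the "bloom" chapter of the documentation for the exact script:)

    CREATE EXTENSION bloom;

    CREATE TABLE tbloom AS
       SELECT
         (random() * 1000000)::int as i1,
         (random() * 1000000)::int as i2,
         (random() * 1000000)::int as i3,
         (random() * 1000000)::int as i4,
         (random() * 1000000)::int as i5,
         (random() * 1000000)::int as i6
       FROM generate_series(1, 10000000);

    CREATE INDEX bloomidx ON tbloom USING bloom (i1, i2, i3, i4, i5, i6);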

The bloom index is only used if either Seq Scan is disabled or if the
random_page_cost is set to 1 (anything above 1 triggers a Seq Scan on my
Windows laptop).

If parallel execution is disabled, then the bloom index is only used if the 
random_page_cost is lower than 4. 

This does not use the index:

  set random_page_cost = 4; 
  set max_parallel_workers_per_gather=0;
  explain (analyze, buffers) 
  select * 
  from tbloom 
  where i2 = 898732 
and i5 = 123451;

This uses the bloom index:

  set random_page_cost = 3.5; 
  set max_parallel_workers_per_gather=0;
  explain (analyze, buffers) 
  select * 
  from tbloom 
  where i2 = 898732 
and i5 = 123451;

And this uses the index also: 

  set random_page_cost = 1; 
  explain (analyze, buffers) 
  select * 
  from tbloom 
  where i2 = 898732 
and i5 = 123451;

This is the plan with when the index is used (either through "enable_seqscan = 
off" or "random_page_cost = 1")

Bitmap Heap Scan on tbloom  (cost=138436.69..138440.70 rows=1 width=24) (actual time=42.444..42.444 rows=0 loops=1)
  Recheck Cond: ((i2 = 898732) AND (i5 = 123451))
  Rows Removed by Index Recheck: 2400
  Heap Blocks: exact=2365
  Buffers: shared hit=21973
  ->  Bitmap Index Scan on bloomidx  (cost=0.00..138436.69 rows=1 width=0) (actual time=40.756..40.756 rows=2400 loops=1)
        Index Cond: ((i2 = 898732) AND (i5 = 123451))
        Buffers: shared hit=19608
Planning Time: 0.075 ms
Execution Time: 42.531 ms

And this is the plan when everything left at default settings:

Seq Scan on tbloom  (cost=0.00..133695.80 rows=1 width=24) (actual time=1220.116..1220.116 rows=0 loops=1)
  Filter: ((i2 = 898732) AND (i5 = 123451))
  Rows Removed by Filter: 1000
  Buffers: shared hit=4697 read=58998
  I/O Timings: read=354.670
Planning Time: 0.075 ms
Execution Time: 1220.144 ms

Can this be considered a bug in the cost model of the bloom index 
implementation? 
Or is it expected that this is only used if random access is really cheap? 

Thomas





Re: PostgreSQL VS MongoDB: a use case comparison

2018-11-20 Thread Thomas Kellerer
Stephen Frost wrote on 20.11.2018 at 18:28:
> Oh yes, having a dictionary would be a great start to reducing the size
> of the jsonb data, though it could then become a contention point if
> there's a lot of new values being inserted and such.  Naturally there
> would also be a cost to pulling that data back out as well but likely it
> would be well worth the benefit of not having to store the field names
> repeatedly.

There is an extension for a dictionary based JSONB compression:

https://github.com/postgrespro/zson




Re: Should from_collapse be switched off? (queries 10 times faster)

2018-03-23 Thread Thomas Kellerer
Peter wrote on 23.03.2018 at 11:03:
> My queries get up to 10 times faster when I disable from_collapse
> (setting from_collapse_limit=1).
> 
> After this finding, the pragmatic solution is easy: it needs to be
> switched off.

You should post some example queries together with the slow and fast plans,
ideally generated using "explain (analyze, buffers)" instead of a simple
"explain", to see details on the execution.

Thomas





Re: Performance impact of lowering max_files_per_process

2018-01-23 Thread Thomas Kellerer
Thomas Kellerer wrote on 19.01.2018 at 17:48:
>
> I wonder what performance implications that has on a server with
> around 50-100 active connections (through pgBouncer).
> 
> My understanding of the documentation is, that Postgres will work
> just fine if we lower the limit, it simply releases the cached file
> handles if the limit is reached. But I have no idea how expensive
> opening a file handle is in Linux.
> 
> So assuming the sessions (and thus the queries) actually do need that
> many file handles, what kind of performance impact (if any) is to be
> expected by lowering that value for Postgres to e.g. 500?

I would be really interested in an answer. 





Re: Query is slow when run for first time; subsequent execution is fast

2018-01-16 Thread Thomas Kellerer
Nandakumar M wrote on 12.01.2018 at 09:03:
> Even if I close a connection and open a new one and execute the same
> query, the planning time is considerably less than the first time.
> Only when I restart the Postgres server then I face high planning
> time again.

Yes, because the data is cached by Postgres ("shared_buffers") and the 
filesystem.