Did you try to transform your temp table into a table partition using the
ALTER TABLE ... ATTACH PARTITION syntax?
https://www.postgresql.org/docs/10/static/ddl-partitioning.html
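For reference, a minimal sketch of that approach (table names and the partition bound here are illustrative, not taken from your setup):

```sql
-- Assuming main_table is range-partitioned by day; attach the temp
-- table as the partition covering 14/08/2018 (names are hypothetical):
ALTER TABLE main_table
    ATTACH PARTITION temp_table
    FOR VALUES FROM ('2018-08-14') TO ('2018-08-15');
```

In PostgreSQL 10 the attached table must already match the parent's column layout, and its rows must satisfy the partition bound.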
Regards
PAscal
--
Sent from:
http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html
EXPLAIN ANALYZE output?
What is the value of the GUC constraint_exclusion?
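To check it, and set it for the current session, something like:

```sql
SHOW constraint_exclusion;             -- 'partition' is the default
SET constraint_exclusion = partition;  -- allowed values: on | off | partition
```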
Did you try
- running multiple inserts in parallel,
- stopping WAL archiving,
- tuning the fetch size?
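For the WAL archiving point, the relevant server settings look roughly like this (a postgresql.conf sketch; archive_mode needs a restart, and it assumes you can afford to lose PITR capability during the load):

```
# postgresql.conf -- illustrative bulk-load settings
archive_mode = off      # stop WAL archiving (restart required)
wal_level = minimal     # generate less WAL; only valid when not archiving
max_wal_senders = 0     # required for wal_level = minimal
```

The fetch size is a client-side setting, e.g. `setFetchSize()` on a JDBC statement.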
Regards
PAscal
The main ideas are:
- inserting directly into the right partition:
perform as many INSERTs as there are partitions in main_table_hist, like

INSERT INTO "14/08/2018_value1" SELECT * FROM remote_oracle_hist WHERE
day = to_date('14/08/2018','DD/MM/YYYY') AND value = 'value1'

Please check the execution plan (in
Hello,
I have tested it with release 11, and the LIMIT 20 is pushed down to each
partition when using an index on tmstmp.
Could you tell us the result of your query applied to one partition?
EXPLAIN ANALYZE
SELECT "a"."id"
FROM a_partition1 "a"
WHERE "a"."parent_id" IN (
34226, 24506, 40987, 27162)
Tomas Vondra-4 wrote
> On 11/02/2018 10:36 AM, Richard Lee wrote:
> [...]
>
>> What are my other options to improve the query planning time?
>>
>
> Can you do a bit of profiling, to determine which part of the query
> planning process is slow here?
> [...]
After profiling the planning (or in
Hello,
I have found that EXPLAIN on tables with many (hundreds of) columns
is slow compared to the actual execution.
This can break application performances when using auto_explain or
pg_store_plans.
Here is my test case (with 500 columns, can be pushed to 1000 or 1600)
create table a();
DECLARE
i
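The test case is cut off above; presumably it builds the wide table with dynamic DDL, roughly like this (my reconstruction continuing after the `create table a();` above, not the original script):

```sql
DO $$
DECLARE
    i int;
BEGIN
    FOR i IN 1..500 LOOP
        EXECUTE 'ALTER TABLE a ADD COLUMN c' || i::text || ' int';
    END LOOP;
END
$$;

-- Compare the planning/EXPLAIN overhead against a narrow table:
EXPLAIN SELECT * FROM a;
```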
Justin Pryzby wrote
> On Mon, Sep 24, 2018 at 12:22:28PM -0700, legrand legrand wrote:
>> Hello,
>> I have found that EXPLAIN on tables with many (hundreds of) columns
>> is slow compared to the actual execution.
>
> See also this thread from last month:
>
> https://w
Hi,
is there an index on fm_order (session_id, type)?
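If not, a composite index matching those filter columns is the usual first step (a sketch, assuming session_id and type are the columns your query filters on):

```sql
CREATE INDEX fm_order_session_type_idx ON fm_order (session_id, type);
```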
regards
PAscal
Hi,
There are many tools:
- the (core) extension pg_stat_statements will give you information about
SQL executions,
- the extension pgsentinel https://github.com/pgsentinel/pgsentinel
gives the same results as the Oracle ASH view,
- the Java front end PASH Viewer https://github.com/dbacvetkov/PASH-Viewer
gives
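For the first item, enabling pg_stat_statements typically goes like this (the preload line needs a server restart; the total_time column name is as of PostgreSQL 10–12):

```sql
-- postgresql.conf: shared_preload_libraries = 'pg_stat_statements'
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Top 5 statements by cumulated execution time:
SELECT query, calls, total_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 5;
```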
Hi Didier,
I imagine that this is SQL executed from a trigger.
Could you provide the trigger PL/pgSQL code, as well as the source and
target table (anonymized) definitions?
After a fresh db restart, are those logs the same for the first 6
executions and for the following ones?
Regards
PAscal
Did16 wrote
> Hi
> The SQL is not executed from a trigger.
> Here is an extract of my log file:
>
> 2019-03-01 14:53:37 CET [24803]: [129-1] [3686]
> user=pgbd_preint_sg2,db=pgbd_preint_sg2 LOG: process 24803 still waiting
> for ShareLock on transaction 3711
Hello,
1/ Accessing scheduler_task_executions by index with device_id = 97 seems
OK.
2/ I don't understand why joining
scheduler_task_executions.id = scheduler_operation_executions.task_execution_id
is done using a parallel hash join when a nested loop would be better
(regarding the number
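One session-local way to test that is to discourage hash joins and re-run the plan (a diagnostic sketch, not a production setting; enable_parallel_hash exists from PostgreSQL 11):

```sql
SET enable_hashjoin = off;       -- discourage (parallel) hash joins
SET enable_parallel_hash = off;
-- then re-run EXPLAIN (ANALYZE, BUFFERS) on the query and compare
```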
> Thanks for the suggestion. Yes I could change the sql and when using only
> one filter for int_otherid2 it does use all 3 columns as the index key.
explain (analyze,buffers) SELECT
uuid_id,int_id,timestamp_date,int_otherid,float_value,int_otherid2,int_otherid3,int_rowver
FROM tabledata WHERE
Hello,
I'm not able to use your perf diagrams, but it seems to me that not using
the 3rd column of that index (int_otherid2) causes an I/O problem.
Could you give us the result of
explain (analyze,buffers) SELECT