tivities. That's all there was
time for with PG10. This is highly likely to be improved sometime in
the future.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
her columns"
So, if you were to write the query as:
explain analyze select col, k from test_tbl group by col, k order by k
limit 1000;
It should run much more quickly, although still not as optimal as it could be.
[1] https://www.postgresql.org/docs/9.6/static/release-9-6.html
ed version of PostgreSQL?
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
On 4 October 2017 at 08:48, Ben Nachtrieb wrote:
> I have 2 cores and my max_parallel_workers_per_gather = 2 and
> max_worker_processes = 8, but my load averages are between 8 and 5 with
> scheduled at 1/189 to 5/195. Are these so high because I increased
> max_parallel_workers_per_gather? My unde
ases, then the only variable part is validation_status_code. Perhaps
validation_status_code = 'P' is much more common in QA than in
production.
You can look at the stats as gathered by ANALYZE with:
\x on
select * from pg_stats where tablename = 'event__' and attname = 'validation_status_code';
that we could have seen if those are accurate or not.
re no non-DISTINCT output columns, so we needn't check. Note that
* subquery_is_pushdown_safe already reported that we can't use volatile
* quals if there's DISTINCT or DISTINCT ON.)
released >= to_timestamp(0)
ORDER BY released desc
LIMIT 1) b;
or you could write a function which just runs that query. Although,
with the above or the function method, if you give this enough
authors, then it'll eventually become slower than the problem query.
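A sketch of such a wrapper function, with hypothetical table and column names (only "released" appears in the original query):

```sql
-- Hypothetical schema: books(author_id int, released timestamptz, ...).
-- The function simply wraps the per-author ORDER BY ... LIMIT 1 query above.
CREATE OR REPLACE FUNCTION latest_release(p_author_id int)
RETURNS SETOF books AS $$
    SELECT *
    FROM   books
    WHERE  author_id = p_author_id
      AND  released >= to_timestamp(0)
    ORDER BY released DESC
    LIMIT  1;
$$ LANGUAGE sql STABLE;
```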
Perhaps if you kno
S ; 100 is the default, and 10000
is the maximum.
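The fragment above is evidently about per-column statistics targets; a hedged sketch of adjusting one (table and column names are hypothetical):

```sql
-- Raise the statistics target for one column, then re-ANALYZE so the planner
-- sees the more detailed stats. 100 is the default; 10000 is the maximum.
ALTER TABLE my_table ALTER COLUMN my_column SET STATISTICS 1000;
ANALYZE my_table;
```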
other random_page_cost defined on it which is causing them
not to ever be preferred.
* you've actually got indexes
Also, you might like to try to EXPLAIN DELETE FROM wos_2017_1.article
WHERE ut = ''; to see if the planner makes use of the
index for that. If that's not choosing the
gresql.org/docs/9.6/static/pgbench.html
> (cost=0.56..199919.49 rows=7161253 width=15) (actual time=0.016..1625.031
> rows=7160921 loops=1)
>   Index Cond: (es09codemp = 1)
>   Heap Fetches: 51499
>   Buffers: shared hit=75342
> Planning time: 50.129 ms
> Execution time: 380419.435 ms
>
e nice if you could
test the patch on the current git head to see if this fixes your
problem. The findings would be quite interesting for me. Please note
this patch is for test environments only at this stage, not for
production use.
as more aiming for some
agreement for generic infrastructure to do exactly as you describe.
[1] https://wiki.postgresql.org/wiki/FOSDEM/PGDay_2016_Developer_Meeting
ut at the same time understand the annoyingly poor optimisation in this
area.
Although please remember the patch I proposed was merely a first draft
proposal. Not for production use.
[1]
http://www.postgresql.org/message-id/flat/cakjs1f9fk_x_5hkcpcseimy16owe3empmmgsgwlckkj_rw9...@mail.gmail.com#cakj
be equal, and since pc.oid = 1, then pa.attrelid
must also be 1.
If we have 2 items in the IN() clause, then this no longer happens:
# explain select * from pg_class pc inner join pg_attribute pa on pc.oid =
pa.attrelid where pc.oid in(1,2);
                                   QUERY PLAN
---------------------------------------------------------------------------------
 Nested Loop  (cost=8.84..54.84 rows=15 width=422)
   ->  Bitmap Heap Scan on pg_class pc  (cost=8.56..14.03 rows=2 width=223)
         Recheck Cond: (oid = ANY ('{1,2}'::oid[]))
         ->  Bitmap Index Scan on pg_class_oid_index  (cost=0.00..8.56 rows=2 width=0)
               Index Cond: (oid = ANY ('{1,2}'::oid[]))
   ->  Index Scan using pg_attribute_relid_attnum_index on pg_attribute pa  (cost=0.28..20.33 rows=8 width=203)
         Index Cond: (attrelid = pc.oid)
(7 rows)
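For comparison, the single-value form, where the constant is propagated through the equivalence class as described above:

```sql
-- With a single value, pc.oid = 1 implies pa.attrelid = 1, so the planner
-- can apply the constant directly to the pg_attribute index scan.
explain select * from pg_class pc
inner join pg_attribute pa on pc.oid = pa.attrelid
where pc.oid in (1);
```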
In your case the claim_id = 'e8a38718-7997-4304-bbfa-138deb84aa82'::uuid
was pushed down into the subqueries, thus giving them less work to do, and
also the flexibility of using indexes on claim_id in the tables contained
within the subqueries. PostgreSQL currently does not push any inequality
predicates down at all.
A few months ago I did a little bit of work to try and lift this
restriction, although I only made it cover the >=, >, < and <= operators as
a first measure.
Details here:
http://www.postgresql.org/message-id/flat/cakjs1f9fk_x_5hkcpcseimy16owe3empmmgsgwlckkj_rw9...@mail.gmail.com#cakjs1f9fk_x_5hkcpcseimy16owe3empmmgsgwlckkj_rw9...@mail.gmail.com
If you didn't have the VIEW, you could manually push these predicates into
each subquery. However this is not really possible to do with the VIEW.
Perhaps something could be done with a function and using dynamic SQL to
craft a query manually, or you could just get rid of the view and have the
application build the query. If that's not an option then maybe you could
respond to the thread above to mention that you've been hit by this
problem and would +1 some solution to fix it, and perhaps cross link to
this thread. I did have a little bit of a hard time in convincing people
that this was in fact a fairly common problem in the above thread, so it
would be nice to see people who have hit this problem respond to that.
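A rough sketch of the dynamic-SQL workaround mentioned above, with entirely hypothetical object names (the real view and predicate come from your schema):

```sql
-- Build the query text manually so the claim_id predicate lands inside the
-- subquery rather than outside the view. Names here are illustrative only.
CREATE OR REPLACE FUNCTION claim_totals(p_claim_id uuid)
RETURNS TABLE (claim_id uuid, total numeric) AS $$
BEGIN
    RETURN QUERY EXECUTE
        'SELECT d.claim_id, sum(d.amount)
         FROM   claim_detail d
         WHERE  d.claim_id = $1
         GROUP BY d.claim_id'
    USING p_claim_id;
END;
$$ LANGUAGE plpgsql STABLE;
```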
ion tree representing runtime application
* of type-input conversion functions. (As of PG 7.3
* that is actually possible, but not clear that we want
* to rely on it.)
*
* The token location is attached to the run-time
* typecast, not to the Const, for the convenience of
* pg_stat_statements (w
rly. I ran
> everything again and there's a marked difference between 9.3 and 9.4.
> The parallel copy times still inflate, but only from 1.4s to 2.5s at 4
> procs. Though it gets a bit dicey after that.
>
>
>
Do the times still inflate in the same way if you perform the COPY b
Buffers: shared hit=2
> Total runtime: 18.528 ms
> (35 rows)
>
>
> Tomorrow I will try to do the same with the other slow query, reporting
> here.
>
>
> It will be interesting to see how Oracle and SQL-Server perform with the
> re-written query too.
> Thanks.
>
>
Glad that's looking better for you.
I'd guess that they're likely already pushing down those predicates into
the subquery going by the execution times that you posted.
I can't imagine Oracle can perform a seq scan / table scan that much faster
than Postgres.
Interested to hear the results of your tests though.
Regards
David Rowley
badly because of this:
-> Seq Scan on fr13t1 (cost=0.00..25072.50
rows=311800 width=21) (actual time=0.007..115.766 rows=311800 loops=1)
Filter: (fr01codemp = '1'::smallint)
Buffers: shared hit=21175
Just how selec
estion has no window clauses, so qual pushdown is not
disabled for that reason.
Regards
David Rowley
performance is better with 32 cores than 64
> cores?.
>
>
You might be interested in
http://www.postgresql.org/message-id/53f4f36e.6050...@agliodbs.com
Regards
David Rowley
stick the 50 UUIDs in some table,
analyze it, then perform a join between the 2 tables, using IN() or
EXISTS()... Is that any faster?
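A sketch of that suggestion, assuming a hypothetical target table keyed by a uuid column:

```sql
-- Put the ~50 UUIDs in a small table, ANALYZE it so the planner has row
-- estimates for it, then join instead of using a long IN (...) list.
CREATE TEMP TABLE wanted (id uuid PRIMARY KEY);
-- INSERT the 50 UUIDs into "wanted" here, then:
ANALYZE wanted;

SELECT t.*
FROM   my_table t                -- hypothetical table and key column
WHERE  EXISTS (SELECT 1 FROM wanted w WHERE w.id = t.id);
```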
Also how well does it perform with: set enable_bitmapscan = off; ?
Regards
David Rowley
case of
queries perform better at execution time?
Is there a valid reason why you don't just write the query with the <=
operator?
Regards
David Rowley
t noticed last night that someone is working on them.
Regards
David Rowley
asking if there's some way to get PostgreSQL to run the 1st query faster?
Or are you asking if both queries are equivalent?
Regards
David Rowley
's only the contract_contract table that's looked at. This
is because PostgreSQL sees that none of the columns from the 2 tables
which are being left joined to are used, and also that the columns that
you're joining to on these tables are unique, therefore joining to them
cannot duplicate any rows, and since these are left joined, if there was no
matching row, then it wouldn't filter out rows from the contract_contract
table, as it would with INNER JOINs. The planner sees that these left joins
are pointless, so just removes them from the plan.
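The shape being described might be sketched like this (only contract_contract comes from the original; the joined table and keys are hypothetical):

```sql
-- "customer.id" is unique (primary key), so the LEFT JOIN can neither
-- duplicate nor filter contract_contract rows; and since no customer column
-- is referenced in the output, the planner removes the join entirely.
SELECT c.id, c.signed_date
FROM   contract_contract c
LEFT JOIN customer cu ON cu.id = c.customer_id;
```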
Regards
David Rowley
for this is that currently the WHERE clause of the outer query
is not pushed down into the view due to some overly strict code which
completely disallows pushdowns of where clauses into subqueries that
contain windowing functions...
In your case, because you have this id in your partition by clause, then
technically it is possible to push the where clause down into the
subquery. I wrote a patch a while back which lifts this restriction. It
unfortunately missed the boat for 9.4, but with any luck it will make it
into 9.5. If you're up for compiling postgres from source, then you can
test the patch out:
http://www.postgresql.org/message-id/cahoyfk9ihosarntwc-nj5tphko4wcausd-1c_0wecogi9ue...@mail.gmail.com
It should apply to current HEAD without too much trouble.
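A minimal illustration of the case being described, with hypothetical names: because the outer WHERE column appears in the PARTITION BY, pushing it into the subquery cannot change the window function's result.

```sql
SELECT *
FROM  (SELECT id,
              amount,
              sum(amount) OVER (PARTITION BY id) AS id_total
       FROM   payments) v       -- "payments" is hypothetical
WHERE v.id = 42;                -- safe to evaluate inside the subquery,
                                -- since each partition holds a single id
```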
Regards
David Rowley
in this
situation.
I'd give creating the function index a try, but keep in mind the overhead
that it will cause with inserts, updates and deletes.
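A hedged sketch of the functional-index suggestion (the table, column, and wrapped function are hypothetical):

```sql
-- The index expression must match the query expression exactly to be usable.
CREATE INDEX people_lower_name_idx ON people (lower(name));
-- A query of this shape can then use the index:
--   SELECT * FROM people WHERE lower(name) = 'smith';
-- Remember: every INSERT, UPDATE and DELETE now also maintains this index.
```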
Regards
David Rowley
> Thanks,
> Dave
>
On Thu, Aug 29, 2013 at 8:39 AM, wrote:
> Can anyone offer suggestions on how I can optimize a query that contains
> the LIMIT OFFSET clause?
>
> The explain plan of the query is included in the notepad attachment.
>
> thanks
>
>
Before I write anything, I should warn that it has been a while sin
> From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of belal hamed
> Sent: 27 January 2013 13:16
> To: pgsql-performance@postgresql.org
> Subject: [PERFORM] PostgreSQL over internet
> by wireshark I monitor TCP packets I found total data tran
I had a similar problem here:
http://archives.postgresql.org/pgsql-bugs/2008-07/msg00026.php
Is the nested loop performing a LEFT join with yours? It's a little
difficult to tell just from the query plan you showed.
A work around for mine was to use a full outer join and eliminate the extra
rows
I once also had a similar performance problem when looking for all matching
rows between two timestamps. In fact that's why I'm here today. The problem
was with MySQL. I had some tables of around 10 million rows and all my
searching was timestamp based. MySQL didn't do what I wanted. I found that
u