On Tue, Sep 3, 2019 at 12:57 AM Jeff Janes wrote:
>
> On Mon, Aug 26, 2019 at 4:26 AM Barbu Paul - Gheorghe
> <barbu.paul.gheor...@gmail.com> wrote:
>>
>> On Sun, Aug 25, 2019 at 5:51 PM Jeff Janes wrote:
>> >
>> > Yes, it certainly looks like it is due to cold caches. But you say it is
>> > slow at first, and then say it varies greatly during a run. Is being slow
>> > at first the only way it varies greatly, or is there large variation even
>> > beyond ...
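The cold-cache theory can be checked directly rather than guessed at. Below is a minimal sketch using the contrib module pg_buffercache (assuming the contrib modules are installed on this server), adapted from its standard usage example:

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    -- Show how much of each relation currently sits in shared_buffers;
    -- right after a restart, the tables and indexes this query touches
    -- should show near-zero counts if the cache really is cold.
    SELECT c.relname, count(*) AS buffers
      FROM pg_buffercache b
      JOIN pg_class c
        ON b.relfilenode = pg_relation_filenode(c.oid)
       AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                                 WHERE datname = current_database()))
     GROUP BY c.relname
     ORDER BY buffers DESC
     LIMIT 10;

Note that this reflects only PostgreSQL's shared_buffers, not the OS page cache, so a relation can be absent here and still be served from memory.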
Hello,
1/ Accessing scheduler_task_executions by index with device_id = 97
seems OK.
2/ I don't understand why the join on
scheduler_task_executions.id = scheduler_operation_executions.task_execution_id
is done using a parallel hash join when a nested loop would be better
(regarding the number ...
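That hunch is cheap to test. A sketch follows, assuming the join and filter look roughly as described above (the SELECT list is invented): disable hash joins for a single transaction and compare the resulting plan and buffer counts.

    BEGIN;
    -- Applies only until ROLLBACK/COMMIT; with hash joins off the
    -- planner must pick another strategy, typically the nested loop
    -- in question.
    SET LOCAL enable_hashjoin = off;
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT soe.*
      FROM scheduler_task_executions ste
      JOIN scheduler_operation_executions soe
        ON soe.task_execution_id = ste.id
     WHERE ste.device_id = 97;
    ROLLBACK;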
If I restart the PostgreSQL server, the performance is bad:
several seconds, up to one or two hundred seconds.
This is reflected in the "buffers read" indicator, which is > 0 when
performance is bad, for the first "Index Scan using
index_operation_execution_id_asc on results".
Probably this ...

... with ORDER BY, so I get the correct results (163 seconds):
https://explain.depesz.com/s/j3o1

Unique  (cost=164620.19..164650.19 rows=4 width=54) (actual time=163953.091..163954.621 rows=2 loops=1)
  Buffers: shared hit=183080 read=103411
  ->  Sort  (cost=164620.19..164635.19 rows=5999 width=54) ...
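Since the slow first runs coincide with "read" (rather than "hit") buffer counts, one mitigation is to warm the relevant relations explicitly after a restart. A minimal sketch with the contrib module pg_prewarm, assuming it is available on this build; the relation names are the ones mentioned above:

    CREATE EXTENSION IF NOT EXISTS pg_prewarm;

    -- Pull the hot index and its table into shared_buffers up front,
    -- so the first user action does not pay the cold-cache penalty.
    SELECT pg_prewarm('index_operation_execution_id_asc');
    SELECT pg_prewarm('results');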
That query, if I add the ORDER BY and LIMIT, returns the same results.
The problem is that it behaves the same way regarding speed as the
original query with the index you suggested.
Sometimes it takes 800 ms, sometimes it takes 6 s to run; how the hell
can I get it to behave the same ...

On 22/08/2019 08:51, Barbu Paul - Gheorghe wrote:
> That query, if I add the ORDER BY and LIMIT, returns the same results. ...
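Before tuning further, it is worth quantifying that spread over the 9-10 executions per user action. A sketch using the contrib module pg_stat_statements (it must be added to shared_preload_libraries and the server restarted first; the column names below are the PostgreSQL 11 ones):

    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

    -- After letting the application run for a while, list the
    -- statements with the widest timing spread (times in milliseconds).
    SELECT calls, min_time, max_time, mean_time, stddev_time, query
      FROM pg_stat_statements
     ORDER BY stddev_time DESC
     LIMIT 10;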
On 21/08/2019 04:30, Barbu Paul - Gheorghe wrote:
> I wonder how I missed that... probably because of the "WHERE" clause ...

I wonder how I missed that... probably because of the "WHERE" clause
in what I already had.
I indexed by scheduler_task_executions.device_id and the new plan is
as follows: https://explain.depesz.com/s/cQRq
Can it be further improved?
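For reference, a minimal sketch of the index described above; only the table and column are stated in the thread, so the index name is invented:

    -- Plain b-tree index on the filter column used by the query.
    CREATE INDEX idx_scheduler_task_executions_device_id
        ON scheduler_task_executions (device_id);
    -- Refresh statistics so the planner costs the new index accurately.
    ANALYZE scheduler_task_executions;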
Limit  (cost=138511.45..138519.36 rows=2 width=54) (actual ...

Yes, adding another index might help reduce the number of rows filtered:
--> Rows Removed by Filter: 1308337
Also, make sure you run VACUUM ANALYZE on the tables this query touches
(it runs per table, not per query).

Regards,
Michael Vitale
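A concrete, minimal form of that maintenance step, assuming the relation names mentioned earlier in the thread:

    -- Refresh planner statistics (and clean up dead tuples) on the
    -- relations the query touches; VACUUM ANALYZE works per table.
    VACUUM ANALYZE scheduler_task_executions;
    VACUUM ANALYZE scheduler_operation_executions;
    VACUUM ANALYZE results;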
Luís Roberto Weck wrote on 8/20/2019 10:58 AM:
> On 20/08/2019 10:54, Barbu Paul - Gheorghe wrote:
>> Hello,
>> I'm running "PostgreSQL 11.2, compiled by Visual C++ build 1914,
>> 64-bit" and I have a query that runs several times per user action ...
Hello,
I'm running "PostgreSQL 11.2, compiled by Visual C++ build 1914,
64-bit" and I have a query that runs several times per user action
(9-10 times).
The query takes a long time to execute, especially at first, due to
cold caches I think, but the performance varies greatly during a run
of the ...