Just to follow up on this...
Tried increasing the stats targets last week and re-analyzing, but the query
was just as bad.
Ended up increasing prepareThreshold to prevent server-side prepares for now
(and thus the later switch to generic plans). This 'fixed' the issue and had
no noticeable negative effect for
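For anyone following along: in pgjdbc, prepareThreshold is set per connection,
either on the URL or as a driver property. A sketch with placeholder host and
database names:

```
# Delay server-side prepares far past the statement's lifetime:
jdbc:postgresql://dbhost/mydb?prepareThreshold=10000

# Or disable server-side prepared statements entirely:
jdbc:postgresql://dbhost/mydb?prepareThreshold=0
```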
James Thompson writes:
> The slowness occurs when the prepared statement changes to a generic plan.
> Initial plan:
> -> Index Only Scan using table1_typea_include_uniqueid_col16_idx on table1
> table1alias2 (cost=0.56..2549.70 rows=70 width=36) (actual
> time=1.901..45.256 rows=65000 loops=1)
On Tue, May 05, 2020 at 10:10:18PM +0100, James Thompson wrote:
> I've managed to replicate this now with prepared statements. Thanks for all
> the guidance so far.
>
> The slowness occurs when the prepared statement changes to a generic plan.
>
> Initial plan:
> -> Index Only Scan using table1_
I've managed to replicate this now with prepared statements. Thanks for all
the guidance so far.
The slowness occurs when the prepared statement changes to a generic plan.
Initial plan:
-> Index Only Scan using table1_typea_include_uniqueid_col16_idx on table1
table1alias2 (cost=0.56..2549.70 r
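The behaviour described above can be reproduced in plain SQL, independent of
the driver; a minimal sketch with placeholder table and column names, assuming
the default plan_cache_mode=auto:

```sql
PREPARE q(text) AS
  SELECT col16 FROM table1 WHERE typea = $1;

-- The first five executions are planned with the actual parameter value
-- (custom plans). From the sixth execution on, the planner may cache and
-- reuse a generic plan if its estimated cost is competitive.
EXECUTE q('x');  -- repeat six or more times

-- EXPLAIN on the prepared statement shows which plan is in use; a generic
-- plan displays $1 in the filter/index conditions instead of the literal.
EXPLAIN EXECUTE q('x');
```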
On Mon, May 04, 2020 at 02:12:01PM -0500, Justin Pryzby wrote:
> On Mon, May 04, 2020 at 08:07:07PM +0100, Jamie Thompson wrote:
> > Additionally, the execution plans for the 10th + following queries look
> > fine, they have the same structure as if I run the query manually. It's not
> > that the q
On Mon, 2020-05-04 at 20:12 +0100, James Thompson wrote:
> The change is abrupt, on the 10th execution (but I hadn't spotted it was
> always after the
> same number of executions until your suggestion - thanks for pointing me in
> that direction).
>
> I don't see any custom configuration on our
The change is abrupt, on the 10th execution (but I hadn't spotted it was
always after the same number of executions until your suggestion - thanks
for pointing me in that direction).
I don't see any custom configuration on our end that changes the threshold
for this from 5->10. Debugging the query
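For what it's worth, on PostgreSQL 12 and later the server-side switch can
also be disabled directly, without touching the driver; a sketch:

```sql
-- Keep replanning with the actual parameter values on every execution
-- (PostgreSQL 12+); can also be set per role or per database.
SET plan_cache_mode = force_custom_plan;
```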
On Mon, 4 May 2020 at 02:35, James Thompson wrote:
> buffers do look different - but still, reading 42k doesn't seem like it would
> cause a delay of 4m?
You could do: SET track_io_timing TO on;
then: EXPLAIN (ANALYZE, BUFFERS) your query and see if the time is
spent doing IO.
David
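David's suggestion as a single psql session (the query itself is whatever
statement is slow):

```sql
SET track_io_timing = ON;
-- With BUFFERS, each node reports shared hit/read buffer counts; with
-- track_io_timing on, an "I/O Timings" line shows time spent in reads/writes.
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;  -- your slow query here
```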
On Mon, May 04, 2020 at 08:07:07PM +0100, Jamie Thompson wrote:
> Additionally, the execution plans for the 10th + following queries look
> fine, they have the same structure as if I run the query manually. It's not
> that the query plan switches, it seems as though the same query plan is
> just
On Sun, May 03, 2020 at 09:58:27AM +0100, James Thompson wrote:
> Hi,
>
> Hoping someone can help with this performance issue that's been driving a
> few of us crazy :-) Any guidance greatly appreciated.
>
> A description of what you are trying to achieve and what results you
> expect.:
> - I'd
Can you please use separate threads for your questions? That is, don't
start a new thread by responding to an existing message (because the new
message then points to the old one using the "References" header, which is
what e-mail clients use to group messages into threads). And use a proper
subject describing the issue.
Have you looked at the Nagios XI & Core packages?
https://www.nagios.com/solutions/postgres-monitoring/
On 02/23/2018 12:31 PM, Daulat Ram wrote:
> Hello team,
> I need help how & what we can monitor the Postgres database via Nagios.
> I came to know about the check_postgres.pl script but we
On 23.02.2018 at 20:31, Daulat Ram wrote:
> Hello team,
> I need help with how and what we can monitor in the Postgres database via
> Nagios. I came to know about the check_postgres.pl script, but we are
> using the freeware option of Postgres. If it's OK with freeware then
> please let me know the steps how I c