Michal Charemza writes:
> Tom Lane writes:
>> I'm fairly sure that if that exists it's always noticed first,
>> bypassing the need for any role membership tests. So please
>> confirm whether your production database has revoked PUBLIC
>> connect privilege.
> I r
Michal Charemza writes:
> Tom Lane writes:
>> It's not very clear what you mean by "sometimes". Is the slowness
> reproducible for a particular user and role configuration, or does
> it seem to come and go by itself?
> Ah it's more come and go by itself - as in one
tching 14e991db8
further than v16 ... but I don't plan to take any risk there without
concrete evidence that it'd improve things.
regards, tom lane
d that the query gets any faster.
regards, tom lane
s a bad idea.
regards, tom lane
an equality condition, no. It's pretty much
of a hack that makes it an indexable condition at all, and we don't
really do any advanced optimization with it.
regards, tom lane
les that.)
To conclude that it'd be safe with this particular window function
requires deep knowledge of that function's semantics, which the
planner has not got.
regards, tom lane
that are needed.
It depends. The planner may choose to tell a non-top-level scan node
to return all columns, in hopes of saving a tuple projection step at
runtime. That's heuristic and depends on a number of factors, so you
shouldn't count on it happening or not happening.
regards, tom lane
aced as well; we don't
do half-custom plans.)
regards, tom lane
f that's
meaningfully slower -- if not, replanning each time is deemed to be
wasteful.
regards, tom lane
Michael Paquier writes:
> On Mon, Dec 04, 2023 at 09:57:24AM -0500, Tom Lane wrote:
>> Jerry Brenner writes:
>>> Both Oracle and SQL Server have
>>> consistent hash values for query plans and that makes it easy to identify
>>> when there are multiple plans fo
plain
> plan)?
No, there's no support currently for obtaining a hash value that's
associated with a plan rather than an input query tree.
regards, tom lane
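(Editor's aside, not from the thread: the hash that is associated with the input query tree — as opposed to the plan — is exposed by the pg_stat_statements extension as the queryid column. A hedged sketch; column names are as of PostgreSQL 13+, and the extension must be installed:)

```sql
-- queryid is a hash of the normalized input query tree, not of the plan.
SELECT queryid, calls, mean_exec_time, query
FROM pg_stat_statements
ORDER BY calls DESC
LIMIT 10;
```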
it is a real problem.
PG 11 is out of support as of earlier this month, so your users really
need to be prioritizing getting onto more modern versions.
regards, tom lane
able WHERE ctid = '(0,1)';
to see the previous state of the problematic tuple. Might
help to decipher the problem.
regards, tom lane
a bloated index, and that's driving
up the estimated cost enough to steer the planner away from it.
regards, tom lane
le beyond speculation.
regards, tom lane
column in all our indexes to make them work
> under RLS?
Adding tenant_id is going to bloat your indexes quite a bit,
so I wouldn't do that except in cases where you've demonstrated
it's important.
regards, tom lane
t you're using some
extension that isn't happy.
regards, tom lane
page, FreeBSD defines en_US
as "same as C except case-insensitive", whereas I'm pretty sure
that underscores and other punctuation are nearly ignored in
glibc's interpretation; they'll only be taken into account if the
alphanumeric parts of the strings sort equal.
regards, tom lane
using different collations on the two systems, and FreeBSD's collation
happens to place the first matching row earlier in the index.
regards, tom lane
ove the needle greatly, it still
seems sloppy.
regards, tom lane
n try to dump the whole database
> and restore it on another machine.
Pretty hard to believe that dump-and-restore would be faster than
VACUUM.
> (Is there a way to check the number of dead rows?)
I think contrib/pgstattuple might help.
regards, tom lane
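(A minimal sketch of the contrib/pgstattuple suggestion above; "mytable" is a placeholder name, and the extension must be available on the server:)

```sql
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- Reports dead-tuple counts and free space for the table.
SELECT dead_tuple_count, dead_tuple_percent, free_percent
FROM pgstattuple('mytable');
```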
ingle
> (random?) entry from an index should not run for >10 minutes.
You should believe what EXPLAIN tells you about the plan shape.
(Its rowcount estimates are only estimates, though.)
regards, tom lane
s from the PK table ...
regards, tom lane
the merge side.
Hmm. The planner should avoid using a merge join if it knows that
to be true. Maybe analyze'ing that table would prompt it to use
some other join method?
regards, tom lane
If the sort is the inner input to a merge join, this could reflect
mark-and-restore rescanning of the sort's output. Are there a
whole lot of duplicate keys on the merge's other side?
regards, tom lane
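(One hedged way to answer the duplicate-key question above; "t" and "k" are placeholder table and join-key names:)

```sql
-- List the most-duplicated join keys on the merge's other side.
SELECT k, count(*)
FROM t
GROUP BY k
HAVING count(*) > 1
ORDER BY count(*) DESC
LIMIT 20;
```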
> 17097 | regexp_replace | oracle | f | f | v
> 17098 | regexp_replace | oracle | f | f | v
Why in the world are the oracle ones marked volatile? That's what's
preventing them from being used in index quals.
regards, tom lane
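(A sketch of how to inspect — and, only after verifying the function's actual behavior, change — the volatility marking; the argument list in the ALTER is illustrative, not taken from the thread:)

```sql
-- provolatile: 'v' = volatile, 's' = stable, 'i' = immutable.
SELECT oid::regprocedure, provolatile
FROM pg_proc
WHERE proname = 'regexp_replace';

-- If (and only if) the function is genuinely immutable in behavior:
-- ALTER FUNCTION oracle.regexp_replace(text, text, text) IMMUTABLE;
```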
ch a fragmentary
description of the problem. Please send a complete, self-contained
test case if you want anybody to look at it carefully.
https://wiki.postgresql.org/wiki/Slow_Query_Questions
regards, tom lane
trigger. Perhaps
auto_explain with auto_explain.log_nested_statements enabled
would give some insight.
I suspect there might be a permissions problem causing schema1_u
to not be allowed to "see" the statistics for table_b, resulting
in a bad plan choice for the FK enforcement query; but that's
just a gue
case on your ARM board.
regards, tom lane
really if performance is a
problem you should think about ditching the star schema design.
regards, tom lane
's gotten
around to implementing that in Postgres AFAIK.
regards, tom lane
ed string constants will tend to
get resolved to that.)
regards, tom lane
u do need strict ACID compliance, get a better disk subsystem.
Or, perhaps, just a better OS ... Windows is generally not thought of
as the best-performing platform for Postgres.
regards, tom lane
ssive TCP keepalive
parameters might help.
regards, tom lane
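(For reference, the server-side TCP keepalive settings live in postgresql.conf; the values below are an illustrative sketch, not a recommendation — tune them to the network in question:)

```
tcp_keepalives_idle = 60       # seconds of idle time before sending keepalives
tcp_keepalives_interval = 10   # seconds between unacknowledged keepalives
tcp_keepalives_count = 5       # lost keepalives before the connection is dropped
```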
guaranteed to satisfy the domain check, because the
> domain check is guaranteed to be immutable (per [1] in my original mail)
immutable != "will accept null".
There could be some more optimizations here, perhaps, but there aren't.
regards, tom lane
TER begins
waiting.
regards, tom lane
Note that changing planner parameters on the basis of a single
query getting slower is a classic beginner error. You need
to think about the totality of the installation's workload.
regards, tom lane
regards, tom lane
require a vastly more complicated implementation.
regards, tom lane
K rows.
You need a better index. It might be that switching to a jsonb_path_ops
index would be enough to fix it, or you might need to build an expression
index matched specifically to this type of query. See
https://www.postgresql.org/docs/current/datatype-json.html#JSON-INDEXING
Also, if any of the terminology there doesn't make sense, read
https://www.postgresql.org/docs/current/indexes.html
regards, tom lane
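(The two index options mentioned above, as a hedged sketch; "docs", "payload", and the 'status' key are placeholder names:)

```sql
-- A jsonb_path_ops GIN index: smaller and faster for @> containment queries.
CREATE INDEX ON docs USING gin (payload jsonb_path_ops);

-- Or an expression index matched to one specific query shape.
CREATE INDEX ON docs ((payload -> 'status'));
```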
h rule of thumb that was invented for far smaller machines than
what you're talking about here.
regards, tom lane
o help it along with
UNION or some similar locution.
regards, tom lane
y to have motivation to improve things.
regards, tom lane
hese estimators
better, have at it.
regards, tom lane
[1]
https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob_plain;f=src/backend/utils/adt/geo_selfuncs.c;hb=HEAD
ith that. We really need to prioritize fixing the
cost-estimation problems, and/or tweaking the default thresholds.
regards, tom lane
s bugs in the cost-estimation algorithms
for deciding when to use it. A nearby example[1] of a sub-1-sec
partitioned query that took 30sec after JIT was enabled makes me
wonder if we're accounting correctly for per-partition JIT costs.
regards, tom lane
[1]
here the hash index wins. I'm not sure if it's quite
fair to give hash a zero startup cost; but it doesn't have to
descend a search tree, so it is fair that its startup cost is
less than btree's.
regards, tom lane
oins
> could be avoided.
I believe only left joins to single tables can be elided ATM.
It's too hard to prove uniqueness of the join key in more-
complicated cases.
regards, tom lane
ilently does nothing on platforms lacking
ADDR_NO_RANDOMIZE and PROC_ASLR_FORCE_DISABLE. Are you asserting
there are no such platforms?
(I'm happy to lose the comment if it's really useless now, but
I think we have little evidence of that.)
regards, tom lane
Thomas Munro writes:
> On Tue, Aug 23, 2022 at 4:57 AM Tom Lane wrote:
> +service the requests, with those clients receiving unhelpful
> +connection failure errors such as Resource temporarily
> +unavailable.
> LGTM but I guess I would add "... o
mebody making a well-reasoned case for
some other number.
regards, tom lane
te definition for the listen queue
length is now MaxConnections * 2, not MaxBackends * 2, because
the other processes counted in MaxBackends don't correspond to
incoming connections.
I propose 0003 for HEAD only, but the docs changes could be
back-patched.
regards, tom lane
ent at least three different
sysctl names for this setting :-(
regards, tom lane
Thomas Munro writes:
> On Mon, Aug 22, 2022 at 12:20 PM Tom Lane wrote:
>> Hmm. It'll be awhile till the 128 default disappears entirely
>> though, especially if assorted BSDen use that too. Probably
>> worth the trouble to document.
> I could try to write a doc pat
eve any modern kernel can't defend itself against
silly listen-queue requests.
regards, tom lane
se for this particular undocumented use
of EAGAIN, but I'm worried about others, especially the documented
reason. On the whole I'm inclined to leave the code alone;
but is there sufficient reason to add something about adjusting
somaxconn to our documentation?
regards, tom lane
Andrew Dunstan writes:
> On 2022-08-21 Su 17:15, Tom Lane wrote:
>> On the whole this is smelling more like a Linux kernel bug than
>> anything else.
> *nod*
Conceivably we could work around this in libpq: on EAGAIN, just
retry the failed connect(), or maybe better to close t
Andrew Dunstan writes:
> On 2022-08-20 Sa 23:20, Tom Lane wrote:
>> Kevin McKibbin writes:
>>> What's limiting my DB from allowing more connections?
> The first question in my mind from the above is where this postgres
> instance is actually listening. Is it really
ctions you want. Since you haven't
mentioned what platform this is on, it's impossible to say more
than that --- but it doesn't look like Postgres configuration
settings are at issue at all.
regards, tom lane
_statements.max from 10k to 3k
Whether or not we've fully identified the problem, I think cutting
pg_stat_statements.max is a good idea. Especially as long as you're
stuck on an unsupported PG version.
regards, tom lane
after five thousand or so of those.
I'm also curious whether this installation is in the habit of doing
pg_stat_statements_reset() a lot. It looks like that fails to
reset mean_query_len, which might be intentional but perhaps it
could play into getting a silly result here later on.
regards, tom lane
you quite
sure this is a 9.5.21 version of the pg_stat_statements extension?
Is it possible that the pg_stat_tmp directory has been made non-writable?
regards, tom lane
> Are there any other signs of distress
> in the postmaster log, like complaints about
n come
to be. Do you have a lot of especially long statements being tracked
in the pg_stat_statements view? Are there any other signs of distress
in the postmaster log, like complaints about being unable to write
pgss_query_texts.stat?
regards, tom lane
save overhead later.
regards, tom lane
Alter
> table set xxx logged 6. Create index …
The easy answer is to skip steps 3 and 5.
regards, tom lane
ead.
regards, tom lane
than it is for SELECT.
regards, tom lane
ass [1]. I'm not sure if that'd be
faster for this scenario, but it seems worth trying.
regards, tom lane
[1] https://www.postgresql.org/docs/current/datatype-json.html#JSON-INDEXING
ould help with that. It's easier to read too.
regards, tom lane
"James Pang (chaolpan)" writes:
> 1. extensions
> shared_preload_libraries =
> 'orafce,pgaudit,pg_cron,pg_stat_statements,set_user'
Can you still reproduce this if you remove all of those?
regards, tom lane
ot sure whether that's relevant here.
One thought is that if there is a pg_statistic row but it contains
no MCE list, we could assume that the column elements are all distinct
and see what sort of estimate that leads us to.
regards, tom lane
it's gone very far in this index. If the shipping date and
pkey are correlated in the wrong direction, that could be a very
overoptimistic guess. I don't think we have adequate stats yet
to detect this sort of problem.
regards, tom lane
e made a number of changes to the LISTEN/NOTIFY
code since then; although in reading the commit log entries about them,
nothing is said about long-delayed notifications.
regards, tom lane
estimated size of the hash
table exceeds work_mem. In this case, boosting work_mem would be
a mighty good idea.
regards, tom lane
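(A sketch of testing the work_mem suggestion for one session before changing anything globally; 256MB is an illustrative value, and the SELECT stands in for the problematic query:)

```sql
-- Session-local only; does not affect other backends.
SET work_mem = '256MB';

-- Re-run the query and check whether the hash now stays in memory.
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;
```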
r at least, it wouldn't tempt the planner to try this unstably-
performing plan. It's trying to use the index ordering to satisfy
the ORDER BY, which works great as long as it finds a dataview
match in some reasonably recent index entry. Otherwise, it's
going to crawl the whole index to discover that there's
riends [1]. Depending on what you mean by
"final transformed query", you might instead want debug_print_rewritten,
or maybe you want the plan, in which case EXPLAIN is a much friendlier
way to look at it than debug_print_plan.
regards, tom lane
[1]
https://www.
n distribute these rows to parallel workers)
Your plan-shape complaint had nothing to do with insertions; it had
to do with joining the partitioned table to another table. That
join can't be optimized.
regards, tom lane
he
reason that it's impossible to prune hash partitions.)
regards, tom lane
es,
as well as EXPLAIN's output for this query. I'm wondering
exactly which PG version this is, too.
regards, tom lane
ut together a self-contained test case that
demonstrates what you're seeing?
regards, tom lane
nk of your
problem, in which case physically grouping multiple rows into each INSERT
command (... or COPY ...) is the only way to fix it. But I'd start with
trying to reduce the transaction commit overhead.
regards, tom lane
Marc Rechté writes:
> On 03/03/2022 at 16:31, Tom Lane wrote:
>> Does memory consumption hold steady if you drop the FK constraints?
> Actually the number of rows is 232735712.
> Accordingly the RAM consumption would be x12 x3 = 7.8 GiB.
> This is close to t
able, so maybe you could make a small self-contained example
using a script to generate dummy data.
regards, tom lane
d or pg_upgrade.)
regards, tom lane
is giving you a textual sort of the sizes, which is entirely
unhelpful. Try
SELECT relname, pg_size_pretty(pg_relation_size(C.oid))
FROM pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE nspname = 'pg_catalog'
ORDER BY pg_relation_size(C.oid) DESC
LIMIT 20;
regards, tom lane
Marc Millas writes:
> checkpoint completion target is 0.9
checkpoint_timeout is the more interesting number here.
regards, tom lane
the write phase. If the I/O
is being throttled because of an un-aggressive checkpoint completion
target, it could be a lot more than the actual I/O time. What have
you got your checkpoint parameters set to?
regards, tom lane
ure you realize that 9.6.15 is not exactly current.
regards, tom lane
ne, so it's not surprising if storage-level
behaviors are quite different from stock Postgres.
regards, tom lane
"David G. Johnston" writes:
> On Monday, December 20, 2021, Tom Lane wrote:
>> It would help if somebody had labeled the units of I/O Time
>> ... but I'm guessing those are microsec vs. the millisec
>> of the other times, because otherwise it's completely wr
s
> are ~500sec or ~6000sec ?
It would help if somebody had labeled the units of I/O Time
... but I'm guessing those are microsec vs. the millisec
of the other times, because otherwise it's completely wrong.
regards, tom lane
rkers_per_gather = 0) to see if
that's confusing the numbers somehow.
regards, tom lane
[1]
https://www.postgresql.org/docs/current/storage-page-layout.html#STORAGE-TUPLE-LAYOUT
longer query means more work in the seqscan case too; but our cost
models are inadequate to predict that.)
regards, tom lane
have any
actual evidence of a performance problem here?
regards, tom lane
s the row update itself.
regards, tom lane
and seeing if there seems
to be any correlation between autovacuum/autoanalyze activity
and the occurrences of poor plan choices.
regards, tom lane
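(One hedged way to check the suggested correlation: compare the timestamps below against when the poor plans were observed. pg_stat_user_tables is a standard statistics view:)

```sql
SELECT relname, last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY greatest(last_autovacuum, last_autoanalyze) DESC NULLS LAST;
```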
n't it using that?
regards, tom lane
ou can build without it --- easiest way would be to modify
pg_config.h after the configure step. But the idea that it has
something to do with lock contention seems like nonsense.
regards, tom lane
bug: it's expected that the second query will evaluate
random() only once.
In the case at hand, I suspect you're getting a "hashed subplan"
in one query and not the other. The depesz.com display doesn't
really show that, but EXPLAIN VERBOSE would.
regards, tom lane