the performance you paid for.
You might be able to ameliorate matters by raising shared_buffers, but
unless your database has stopped growing, that approach has a limited future.
regards, tom lane
interest the OP's results on other JSON processors that
have no issues with GB-sized JSON strings.
regards, tom lane
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
er you're trying to change the row's key fields,
and it will proceed if you aren't. SELECT FOR UPDATE has to lock the
whole row (since it must assume you might be intending to change any
fields of the row); so it blocks until the FK lock goes away.
regards, tom lane
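The locking distinction described above can be sketched as follows; the table and column names are hypothetical, and the weaker lock modes assume PostgreSQL 9.3 or later:

```sql
-- Session 1: inserting a referencing row takes only a KEY SHARE lock
-- on the referenced customers row, covering just its key fields
BEGIN;
INSERT INTO orders (customer_id) VALUES (42);

-- Session 2: FOR UPDATE must lock the whole row (any field might be
-- changed), so it blocks until session 1 commits or aborts
SELECT * FROM customers WHERE id = 42 FOR UPDATE;

-- FOR NO KEY UPDATE protects only non-key columns, so it does not
-- conflict with the FK's KEY SHARE lock and proceeds immediately
SELECT * FROM customers WHERE id = 42 FOR NO KEY UPDATE;
```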
? How? The index ordering has nothing to do with the order in
which heap tuples will be visited.
regards, tom lane
While a GIN array index supports that, it's not exactly
its strong suit: the sort of questions that index type supports well are
more like "which arrays contain value X?". I wonder if it'd be worth
creating btree indexes on the array column.
regards, tom lane
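The contrast between the two index types might look like this (hypothetical table):

```sql
CREATE TABLE docs (id serial PRIMARY KEY, tags int[]);

-- GIN's strong suit is containment: "which arrays contain value X?"
CREATE INDEX docs_tags_gin ON docs USING gin (tags);
SELECT * FROM docs WHERE tags @> ARRAY[7];

-- A plain btree on the array column instead supports whole-array
-- equality and ordering comparisons
CREATE INDEX docs_tags_btree ON docs (tags);
SELECT * FROM docs WHERE tags = ARRAY[1, 2, 3];
```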
rt to the main table.
The second choice is probably preferable; doing bulk GIN inserts
without fastupdate is kind of expensive itself.
regards, tom lane
moving only to 9.2 and not something more
recent.)
regards, tom lane
insertion point would become deterministic. I am not sure if that's
good or bad for insertion performance, but it would likely help for
scan performance.
regards, tom lane
the index.
With a reasonably late-model PG (9.4+), you might well have better luck
with a regular-expression pattern than a LIKE pattern, because more work
has been put into pg_trgm's heuristics for choosing which trigrams to use
for regexes.
(Not sure why it didn't occur to us to make that code appl
. I don't know if that's for lack of round tuits or because
it's actually hard, but it's not the planner's fault.
regards, tom lane
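A sketch of the two pattern styles over a trigram index; the table and pattern are illustrative only:

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE TABLE msgs (body text);
CREATE INDEX msgs_body_trgm ON msgs USING gin (body gin_trgm_ops);

-- LIKE pattern
SELECT * FROM msgs WHERE body LIKE '%widget%';

-- Equivalent regex; on 9.4+ pg_trgm's regex heuristics may choose
-- a better set of trigrams for the index condition
SELECT * FROM msgs WHERE body ~ 'widget';
```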
Andreas Joseph Krogh <andr...@visena.com> writes:
> On Thursday 17 March 2016 at 18:20:23, Tom Lane <t...@sss.pgh.pa.us> wrote:
> FWIW, the reason for that is that the int8_ops operator class that
> btree_gin creates doesn't contain any cross-type operators. Probably
>
y to use the index at all for sorting?
No.
> 4. It doesn't seem like ts_rank uses the index for sorting either.
Same reason.
regards, tom lane
James Parks <james.pa...@meraki.net> writes:
> On Mon, Feb 29, 2016 at 5:22 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> The other explain shows a scan of "a" reading about 490k rows and
>> returning 395 of them, so there's a factor of about 200 re-read here.
Merlin Moncure <mmonc...@gmail.com> writes:
> On Mon, Feb 29, 2016 at 12:47 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> FWIW, PG >= 9.5 will ignore a LIMIT 1 inside an EXISTS, so that you get
>> the same plan with or without it. But that does act as an optimizatio
if that indexscan
was estimated at 26163.20 units, how'd the mergejoin above it get costed
at only 7850.13 units? The answer has to be that the planner thought the
merge would stop before reading most of "a", as a result of limited range
of b.a_id. It would be interesting to look into what
and it caps some optimizations
> available for EXISTS, you'd better avoid it and see if you get a proper
> semi-join plan then.
FWIW, PG >= 9.5 will ignore a LIMIT 1 inside an EXISTS, so that you get
the same plan with or without it. But that does act as an optimization
fence in earlie
ly one row for
each outer row, which makes me wonder if the BETWEEN couldn't be replaced
with some sort of equality. But that might take some rethinking of the
data.
regards, tom lane
ushed when it exceeds
work_mem. (Obviously, using a large work_mem setting makes this worse.)
regards, tom lane
y. You would have a lot of other
performance issues with sending hundreds of thousands of distinct notify
events from one transaction anyway, so I can't get terribly excited about
this.
regards, tom lane
onditions. You might see if you can improve the
statistics for the search_vector column.
regards, tom lane
le Scan on func f (cost=0.00..0.20 rows=10 width=4)
which looks hairier, but that's because the function has been inlined
which is usually what you want for a SQL-language function. The join
is happening the way you want.
regards, tom lane
fix this would be appreciated.
> This is odd.
> Could you profile the backend during such a statement to see where the time
> is spent?
I'm wondering about locks. Perhaps turning on log_lock_waits would
yield useful info.
regards, tom lane
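One way to turn that setting on (assuming 9.4+ for ALTER SYSTEM; on older releases, edit postgresql.conf instead):

```sql
-- Log any lock wait that lasts longer than deadlock_timeout (default 1s)
ALTER SYSTEM SET log_lock_waits = on;
SELECT pg_reload_conf();  -- SIGHUP-level setting, no restart needed
```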
y attempt to correlate the slow queries with spikes in load
average, swap activity, etc?
regards, tom lane
sure whether modern kernels have improved this area.
I think you can get an idea of how big a problem you have by noting
the accumulated runtime of the khugepaged daemon.
(BTW, it would likely be a good thing to collect some current wisdom
in this area and add it to section 17.4 of our docs.)
fied I/O bandwidth?
> I will look at changing the deadlock_timeout, but that might have to wait for
> the weekend since this is a production system.
You needn't restart the server for that, just edit postgresql.conf and
SIGHUP the postmaster.
regards, tom lane
eanup but we can't tell
on the basis of this much info.
regards, tom lane
Scott Rankin <sran...@motus.com> writes:
> On 1/6/16, 10:38 AM, "Tom Lane" <t...@sss.pgh.pa.us> wrote:
>> A possible theory is that the slow cases represent times when the desired
>> page is not in cache, but you'd have to have a seriously overloaded disk
"hot" and unswappable from the kernel's perspective). Or reduce the
number of active backend processes.
regards, tom lane
n't want to rebuild the whole database, you can create indexes to
support this by declaring them with COLLATE "C", or the older way is to
declare them with text_pattern_ops as the index opclass.
regards, tom lane
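The two forms of index described above might look like this (table and column names are hypothetical):

```sql
-- Either index lets LIKE 'prefix%' searches use an index even under a
-- non-C database locale:
CREATE INDEX t_name_c   ON t (name COLLATE "C");        -- newer style
CREATE INDEX t_name_tpo ON t (name text_pattern_ops);   -- older style

SELECT * FROM t WHERE name LIKE 'abc%';
```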
Jim Nasby <jim.na...@bluetreble.com> writes:
> On 12/31/15 9:02 AM, Tom Lane wrote:
>> If you don't want to rebuild the whole database, you can create indexes to
>> support this by declaring them with COLLATE "C", or the older way is to
>> declare them with
a two-column foreign key constraint on (id, tag).) Then the
query would look like
SELECT * FROM t1 INNER JOIN t2 ON t1.tag = t2.tag WHERE t2.tag = ''
and since the planner is smart enough to deduce t1.tag = '' from
that, it would arrive at the correct estimate for any particular tag.
an() without too much overhead. I'm
not feeling motivated to work on this myself, absent a more convincing
explanation of why we should expend any effort to support this query
pattern. But if anyone else is, have at it.
regards, tom lane
t of time, use a connection pooler to do it for you
(pgpooler for instance).
regards, tom lane
regards, tom lane
.
regards, tom lane
Glyn Astill <glynast...@yahoo.co.uk> writes:
>> From: Tom Lane <t...@sss.pgh.pa.us>
>> The problem will probably go away by itself as your table grows, but
>> if you don't want to wait, you might want to reflect on which of the index
>> columns might be
be large,
that's not going to be a workable hack for your case.
Probably the only thing that's going to work for you is to store
md5(attachment_bytes) in its own plain column (you can use a trigger
to compute it for you), and then build a regular index on that,
and query for that column not the md5() e
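The suggestion above might be implemented like this; the table name and trigger name are hypothetical, while md5() and attachment_bytes come from the thread:

```sql
ALTER TABLE attachments ADD COLUMN attachment_md5 text;

-- Keep the hash column up to date automatically
CREATE FUNCTION attachments_set_md5() RETURNS trigger AS $$
BEGIN
  NEW.attachment_md5 := md5(NEW.attachment_bytes);
  RETURN NEW;
END $$ LANGUAGE plpgsql;

CREATE TRIGGER attachments_md5_trg BEFORE INSERT OR UPDATE ON attachments
  FOR EACH ROW EXECUTE PROCEDURE attachments_set_md5();

-- A regular btree index on the short hash, instead of the huge bytea
CREATE INDEX ON attachments (attachment_md5);

-- Query the stored column, not the md5() expression:
SELECT * FROM attachments WHERE attachment_md5 = md5('...'::bytea);
```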
ompliant RDBMS. And it's just
ugly at a code level too: the raw grammar is not the place to encode
implementation decisions like these.
I've had it on my to-do list for awhile to replace those things with
some new expression node type, but haven't got round to it.
regards, tom lane
ok more like
create table foo(i integer primary key);
create table bar(j integer primary key, i integer references foo);
create index on bar(i);
which would provoke the warning. I fear a warning like that would have
a very short life expectancy.
regards, tom lane
Merlin Moncure <mmonc...@gmail.com> writes:
> On Wed, Nov 11, 2015 at 1:33 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> If you're not sure which table is the problem, try doing an EXPLAIN
>> ANALYZE of a DELETE that will only remove a few rows. You should
>>
r associated with the FK constraint.
regards, tom lane
y choose to use these indexes in its plan; but their existence
will prompt ANALYZE to gather stats about the expression results,
and that should at least let the planner draw more-accurate conclusions
about the selectivity of the equality constraint.
regards, tom lane
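As a sketch of that technique (hypothetical names): an expression index makes ANALYZE collect statistics on the expression's results, even if the index itself never appears in a plan.

```sql
-- The index's existence prompts ANALYZE to gather stats on lower(col)
CREATE INDEX t_lower_col ON t (lower(col));
ANALYZE t;

-- ...which should improve the selectivity estimate for:
SELECT * FROM t WHERE lower(col) = 'foo';
```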
ondition could be enforced at the inner indexscan
instead of the join.
regards, tom lane
y that the first query matched 135843 rows and the
second one none at all, so a significant variation in runtime doesn't seem
that surprising to me ...
regards, tom lane
l us whether or not a different plan is being
used for the query inside the function.
regards, tom lane
IT clauses inside
EXISTS(), because people keep writing them even though they're useless.
Earlier releases do not have that code though.)
regards, tom lane
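The behavior described above can be illustrated with a pair of hypothetical queries:

```sql
-- On 9.5+ these produce the same semi-join plan; on earlier releases
-- the LIMIT 1 acts as an optimization fence inside the EXISTS()
SELECT * FROM a WHERE EXISTS (SELECT 1 FROM b WHERE b.a_id = a.id);
SELECT * FROM a WHERE EXISTS (SELECT 1 FROM b WHERE b.a_id = a.id LIMIT 1);
```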
been prior discussion along
the same lines as you mention (only believe 1-row estimates when it's
provably true that there's at most one row), but it hasn't looked like an
easy change. See the pgsql-hackers archives for previous threads.
regards, tom lane
all
the conditions can be checked in the index.
regards, tom lane
me will kill you.
regards, tom lane
haky because it's hard to predict how many rows will get
skipped before finding one with b_id=42.
If you do this type of query often enough to care about its performance,
you could consider creating a two-column index on (b_id, created)
(in that order).
regards, tom lane
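That two-column index, with a query shape it serves well (table name hypothetical; b_id and created are from the thread):

```sql
CREATE INDEX t_bid_created ON t (b_id, created);

-- The index returns the b_id = 42 rows already ordered by created,
-- with no risk of skipping over many non-matching rows first
SELECT * FROM t WHERE b_id = 42 ORDER BY created DESC LIMIT 10;
```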
this?
You could watch how fast the target table is physically growing, perhaps.
Or I think contrib/pgstattuple could be used to count uncommitted tuples
in it.
regards, tom lane
that these are bigints, so that won't
work.
regards, tom lane
The array manipulation part is the performance bottleneck. I am pretty sure,
that there is a better way of doing this, however I couldn't find one.
What I have is two
be a lot better off with a pooler.
(There has been, and continues to be, interest in getting rid of this
bottleneck ... but it's a problem in all existing Postgres versions.)
regards, tom lane
plpgsql
code out there, and it can make a huge difference for that.
regards, tom lane
a better plan, but it's worth trying if it wouldn't
require too much app-side contortion.
regards, tom lane
...
regards, tom lane
on OS-level disk caching. Some have found that very large
shared buffer pools tend to increase contention without much payback.
regards, tom lane
. Hard to opine further since no details about the CTEs were
provided.
regards, tom lane
that particular misbehavior?
regards, tom lane
(see the impact of
loop_count on cost_index).
regards, tom lane
, this would all be moot if the cost estimate for the nestloop
plan were nearer to reality. Since you started a separate -hackers
thread for that issue, let's go discuss that there.
regards, tom lane
or newer should know how to do this
with a merge append of several indexscans.
regards, tom lane
Dave Johansen <davejohan...@gmail.com> writes:
> On Fri, May 22, 2015 at 3:42 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> What PG version are you using? 9.1 or newer should know how to do this
>> with a merge append of several indexscans.
> Sorry, I should have mentioned that in the original email. I'm
than you've let on.
regards, tom lane
worth saving users from
hand-optimizing such cases depends a lot on what it's going to cost in
added planning time for queries that don't get any benefit. This example
doesn't look like a case that's going to win that cost/benefit tradeoff
comparison.
regards, tom lane
and not expect to scan too many index items
that don't satisfy the original extract() condition. But I don't see how
to make something like that work for mapping case-insensitive searches
onto case-sensitive indexes.
regards, tom lane
approach. At the very least, if you need a many-armed
CASE, it behooves you to make sure the common cases appear early.
regards, tom lane
Kevin Grittner <kgri...@ymail.com> writes:
> Tom Lane <t...@sss.pgh.pa.us> wrote:
>> But the other problem is that the planner considers less-than-1%
>> differences in cost estimates to be in the noise, which means
>> that it's not going to consider cost differences of less than
>> 1480 units in the remaining
.)
regards, tom lane
not in the list.
regards, tom lane
allocations.resource_id
regards, tom lane
it actually is
a significant win.
regards, tom lane
immediately when it's exhausted the rows with type = 'vehicle'.
regards, tom lane
suspicious that the underlying cause might have to do with recent
versions being warier about optimizing sub-selects containing volatile
functions than 8.4 was. However, that theory doesn't seem to explain
the horribly bad join size estimates you're showing.
regards, tom lane
apply at the top plan level so they're unlikely
to change any planner choices. Moreover, for any case other than the
not-terribly-interesting constant FALSE case, we're better off assuming
that the filter condition will be true (and so there's nothing to adjust).
regards, tom lane
% of your table has vid = 1, then there would be some point
in trying to use an index to find the other 1%; but you'll have to
formulate the query differently (perhaps vid > 1 would do?) or else
use a properly-designed partial index.
regards, tom lane
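A properly-designed partial index for that situation might look like this (table name hypothetical; vid comes from the thread):

```sql
-- Index only the rare rows, so finding "the other 1%" stays cheap
CREATE INDEX t_vid_rare ON t (vid) WHERE vid <> 1;

-- The query must repeat the predicate for the partial index to apply
SELECT * FROM t WHERE vid <> 1;
```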
the
planner's viewpoint it'd be easy enough to fall back to using whatever
it had in the histogram after all. But that's all happening down inside
index_getnext, and I'm hesitant to stick some kind of wart into that
machinery for this purpose.
regards, tom lane
is showing
any stress. Dead rows should be a hazard for anything, especially if
there are enough of them to require hours to re-hint. And why wouldn't
autovacuum get to them first?
regards, tom lane
Tomas Vondra <tomas.von...@2ndquadrant.com> writes:
> On 6.3.2015 01:44, Tom Lane wrote:
>> I'm wondering about the issue addressed by commit fccebe421 (Use
>> SnapshotDirty rather than an active snapshot to probe index
>> endpoints).
> How would fccebe421 explain the large amount of random writes (~4MB/s
'), that'd be useful.
Without some more-precise idea of where the time is going, we're really
just guessing as to the cause.
regards, tom lane
built-in for that (and frankly, it doesn't sound useful
enough that we'd ever add it). You could get the effect easily enough
with an expression index on a byte-reversing function. A related thing
that people often do is create an index on a hash function.
regards, tom lane
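A sketch of the hash-expression variant mentioned above (hypothetical table and column):

```sql
-- Hashing shortens and effectively randomizes long key values;
-- query through the same expression so the index can be used
CREATE INDEX t_url_md5 ON t (md5(url));
SELECT * FROM t WHERE md5(url) = md5('http://example.com/');
```

A byte-reversing function would be indexed the same way, substituting that function for md5() in both the index definition and the query.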
in the formulation of the submitted query.
regards, tom lane
regards, tom lane
then they should not
be changing once the tuples have been frozen the first time. If this is
incurring continuing rsync work then something else is going on.
regards, tom lane
after that to format and
display the query result. PGAdmin is probably measuring the query time
differently.
regards, tom lane
.
regards, tom lane
...)
regards, tom lane
of thumb would work better :-(
regards, tom lane
on the basis of a single
example; you might find that it makes other plan choices worse.
regards, tom lane
by taking
enormously large samples would be the number-of-distinct-values estimate.
There's already a way you can override ANALYZE's estimate of that number
if you need to.
regards, tom lane
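The existing override mentioned above works like this (hypothetical table and column; the n_distinct value is illustrative):

```sql
-- Tell ANALYZE to use this distinct-values count instead of estimating it
ALTER TABLE t ALTER COLUMN col SET (n_distinct = 500000);
ANALYZE t;  -- the override takes effect at the next ANALYZE
```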
.
regards, tom lane
in
index size. Have you neglected to mention some nondefault planner cost
settings?
regards, tom lane
much to do with reality for this specific query.)
regards, tom lane
about that. In the meantime, if you want to stick with this
partitioning design, couldn't you improve that code so the UPDATE is
only applied to the one child table it's needed for?
regards, tom lane
an investigatable problem statement.
I will note that EXCEPTION blocks aren't terribly cheap, so if you're
reaching the EXECUTE sql_insert a lot of times that might have something
to do with it.
regards, tom lane
with the former as being
more understandable to someone who knows standard SQL.
regards, tom lane
.
Alternatively, you might want to reconsider the concept of updating
hundreds of millions of rows in a single operation ...
regards, tom lane
?
regards, tom lane
.
regards, tom lane
Andrew Dunstan <and...@dunslane.net> writes:
> On 12/12/2014 04:44 PM, Tom Lane wrote:
>> Well, it'd be easy to fix if we were willing to invent distinct operators
>> depending on which type you wanted out (perhaps ->> for text output as
>> today, add -# for numeric output, etc).
> That was my immediate