int))
>
Those are different queries, so it is not terribly surprising it might
choose a different plan.
For this type of comparison, you need to compare identical queries,
including the parameters.
Cheers,
Jeff
and 1+abalance ) >0;
In my originally query I just wrapped the whole thing in another select, so
that I could use the alias rather than having to mechanically repeat the
entire subquery again in the HAVING section. They give identical plans.
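As a sketch of that wrapping pattern (pgbench_accounts and the 1+abalance expression are borrowed from the thread; the exact query here is illustrative):

```sql
-- Repeating the aggregate expression in HAVING:
select bid, sum(1 + abalance) as bal
from pgbench_accounts
group by bid
having sum(1 + abalance) > 0;

-- Wrapped in an outer query, so the alias can be reused instead:
select *
from (select bid, sum(1 + abalance) as bal
      from pgbench_accounts
      group by bid) s
where bal > 0;
```

As noted, both forms produce identical plans; the wrapper is purely for readability.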
Cheers,
Jeff
, is why should I need to do
anything special to avoid dual execution?
Cheers,
Jeff
slows down after couple of
> hours,
> each insert query takes 3000+ ms and keep growing.
>
If it takes a couple hours for it to slow down, then it sounds like you
have a leak somewhere in your code.
Run "top" and see what is using the CPU time (or the I/O wait time, if
that is the bottleneck), and the memory.
Cheers,
Jeff
rows (unless login is null for all of
them) but the above is estimating there is only one. When was the table
last analyzed?
Cheers,
Jeff
On Sat, Aug 19, 2017 at 10:37 AM, anand086 wrote:
> I am a Postgres Newbie and trying to learn :) We have a scenario wherein,
> one of the SQL w
On Tue, Aug 15, 2017 at 3:06 AM, Mariel Cherkassky <
mariel.cherkas...@gmail.com> wrote:
> Hi,
> So I ran the checks that Jeff mentioned:
> \copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour
> and 35 minutes
> \copy local_postresql_table from /tmp/tm
_table) to /tmp/tmp with binary
\copy local_postresql_table from /tmp/tmp with binary
Cheers,
Jeff
in practice are pretty flimsy. I tried to ALTER ... SET
> LOGGED, but that takes a VERY long time and pretty much negates the initial
> performance boost of loading into an unlogged table.
>
Are you using streaming or wal logging?
Cheers,
Jeff
On Wed, Jul 12, 2017 at 3:04 AM, Charles Nadeau
wrote:
> Jeff,
>
> Here are the 2 EXPLAINs for one of my simplest query:
>
It looks like dstexterne and flowcompact are both views over flow. Can you
share the definition of those views?
I think the iowait > 12.5% is due to the
to get 1100 IOPS from 4 10k disks. The average 10k
> disk is going to get around 130 IOPS . If he only has 4 then there is no
> way he is getting 1100 IOPS.
>
I wouldn't be sure. He is using an iodepth of 256 in his benchmark. It
wouldn't be all that outrageous for a disk to find 3 or 4 readable
sectors per revolution when it has that many requests to choose from.
Cheers,
Jeff
On Tue, Jul 11, 2017 at 4:02 AM, Charles Nadeau
wrote:
> Jeff,
>
> I used fio in a quick benchmarking script inspired by https://smcleod.net/
> benchmarking-io/:
>
> #!/bin/bash
> #Random throughput
> echo "Random throughput"
> sync
> fio --randrepeat=1 --
l is only 43% faster? What job file are
you giving to fio?
What do you get if you do something simpler, like:
time cat ~/$PGDATA/base/16402/*|wc -c
replacing 16402 with whatever your biggest database is.
Cheers,
Jeff
On Thu, Jun 29, 2017 at 12:11 PM, Yevhenii Kurtov wrote:
> Hi Jeff,
>
> That is just a sample data, we are going live in Jun and I don't have
> anything real so far. Right now it's 9.6 and it will be a latest stable
> available release on the date that we go live.
) should speed this up massively.
Might want to include status at the end as well. However, your example data
is not terribly realistic.
What version of PostgreSQL are you using?
Cheers,
Jeff
transfer this much data over a UNIX
> socket on the same box?
>
It has to convert the data to a format used for the wire protocol (hardware
independent, and able to support user defined and composite types), and
then back again.
> work_mem = 100MB
Can you give it more than that? How many simultaneous connections do you
expect?
Cheers,
Jeff
rsion of Postgres and
> your distribution does not allow that, there is nothing preventing you
> to do so.
>
But there is something preventing it. wal_segsize cannot exceed 64MB in
9.2. v10 will be the first version which will allow sizes above 64MB.
Cheers,
Jeff
mall for what you are doing.
But if you are not using COPY, then maybe none of this matters as the
bottleneck will be elsewhere.
Cheers,
Jeff
be able to
get better estimates there by doing something like:
a.mapping_code+0 = b.mapping_code+0 AND a.channel=b.channel
(or using ||'' rather than +0 if the types are textual rather than
numerical). I doubt it would be enough of a difference to change the
plan, but it is an easy thing to try.
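As a sketch, assuming the join in question looks something like this (the a/b aliases and column names are from the thread; the rest is illustrative):

```sql
-- Original join condition:
--   ... on a.mapping_code = b.mapping_code and a.channel = b.channel

-- With the dummy arithmetic, the planner sees an opaque expression
-- rather than a bare column, which changes the selectivity estimate:
select *
from a
join b on a.mapping_code + 0 = b.mapping_code + 0
      and a.channel = b.channel;

-- For text columns, append an empty string instead:
--   on a.mapping_code || '' = b.mapping_code || ''
```

The +0 and ||'' are no-ops for the result, only for the estimator.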
Cheers,
Jeff
On Tue, May 23, 2017 at 4:03 AM, Gunnar "Nick" Bluth <
gunnar.bluth.ext...@elster.de> wrote:
> Am 05/22/2017 um 09:57 PM schrieb Jeff Janes:
> >
> > create view view2 as select id,
> > (
> > select md5 from thing_alias where thing_id=id
> >
(
select md5 from thing_alias where thing_id=id
order by priority desc limit 1
) as md5,
cutoff from thing;
Cheers,
Jeff
E also being in pg_stat_statements, but
it is not a sure thing and there is some risk the name got freed and reused.
log_min_duration_statement has the same issue.
Cheers,
Jeff
On Mon, May 15, 2017 at 3:22 PM, Tom Lane wrote:
> Jeff Janes writes:
> > I've tried versions 9.6.3 and 10dev, and neither do what I expected. It
> > doesn't seem to be a planning problem where it thinks the fast plan is
> > slower, it just doesn't seem t
is
slower, it just doesn't seem to consider the faster plans as being options
at all. Is there some setting to make it realize the cast is shippable?
Is any of the work being done on postgres_fdw for V11 working towards
fixing this?
Cheers,
Jeff
drop database foobar;
create database foobar;
_and_object_type suggests that
you are not analyzing (and so probably also not vacuuming) often enough.
Cheers,
Jeff
On Mon, Mar 6, 2017 at 8:46 AM, twoflower wrote:
> Thank you Jeff.
>
> There are 7 million rows satisfying fk_id_client = 20045. There is an
> index on fk_id_client, now I added a composite (fk_id_client, id) index but
> that did not help.
>
With 7 million rows, you shouldn't
the query with *asc* ordering as fast as
> the one with *desc*?
>
You probably can't. Your data is well suited to one, and ill suited to
the other. You can probably make it faster than it currently is, but not
as fast as the DESC version.
Cheers,
Jeff
On Thu, Mar 2, 2017 at 1:19 PM, Sven R. Kunze wrote:
> On 01.03.2017 18:04, Jeff Janes wrote:
>
> On Wed, Mar 1, 2017 at 6:02 AM, Sven R. Kunze wrote:
>
>> On 28.02.2017 17:49, Jeff Janes wrote:
>>
>> Oh. In my hands, it works very well. I get 70 seconds to do th
is execution of this query is 15 times faster than the
execution you found in the log file? Was the top output you showed in the
first email happening at the time the really slow query was running, or was
that from a different period?
Cheers,
Jeff
lso, if you make
both columns of both tables integers, the same thing happens--you get bad
estimates when the join condition refers to one column and the WHERE refers
to the other. I don't know why the estimate is poor, but it is not related
to the types of the columns, but rather to their identities.
Cheers,
Jeff
On Wed, Mar 1, 2017 at 6:02 AM, Sven R. Kunze wrote:
> On 28.02.2017 17:49, Jeff Janes wrote:
>
> Oh. In my hands, it works very well. I get 70 seconds to do the {age:
> 20} query from pure cold caches, versus 1.4 seconds from cold caches which
> was followed by pg_prewarm('
On Tue, Feb 28, 2017 at 12:27 AM, Sven R. Kunze wrote:
> On 27.02.2017 19:22, Jeff Janes wrote:
>
> If by 'permanently', you mean even when you intentionally break things,
> then no. You will always be able to intentionally break things. There is
> on-going disc
But, what is it? If you reboot the server
frequently, maybe you can just throw 'select pg_prewarm...' into an init
script?
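A minimal sketch of that init-script idea, assuming the pg_prewarm extension is available and hot_table stands in for whatever relation needs to stay warm:

```sql
create extension if not exists pg_prewarm;

-- Load the table's heap into cache after a restart:
select pg_prewarm('hot_table');

-- Indexes are separate relations and must be prewarmed individually:
select pg_prewarm('hot_table_pkey');
```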
Cheers,
Jeff
been necessary, just the ALTER
EXTENSION UPDATE should do the trick. Rebuilding a large gin index can be
pretty slow.
Cheers,
Jeff
On Mon, Jan 23, 2017 at 9:43 AM, Simon Riggs wrote:
> On 23 January 2017 at 17:12, Jeff Janes wrote:
>
> >> Just to make sure anyone reading the mailing list archives isn't
> >> confused, running pg_start_backup does *not* make PG stop writing to
> >> BA
p.
Which unfortunately obliterates much of the point of using rsync for many
people. You can still save on bandwidth, but not on local IO on each end.
Cheers,
Jeff
ential step for people who run with full_page_writes=off, as it
ensures that anything in base which got changed mid-copy will be fixed up
during replay of the WAL.
Cheers,
Jeff
index still returns 50,000 rows, but now all of them
get rejected upon the recheck.
You could try changing the type of index to jsonb_path_ops. In your given
example, it won't make a difference, because you are actually counting half
the table and so half the table needs to be rechecked. But in my example,
jsonb_path_ops successfully rejects all the rows at the index stage.
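A sketch of the two index variants, using a hypothetical docs table with a jsonb column payload:

```sql
-- Default opclass: indexes keys and values separately; supports
-- the ?, ?|, ?& existence operators as well as @>:
create index docs_payload_gin on docs using gin (payload);

-- jsonb_path_ops: smaller, supports only @> containment, but hashes
-- whole paths, so it can reject non-matching rows at the index
-- stage instead of on the heap recheck:
create index docs_payload_gin_path on docs using gin (payload jsonb_path_ops);

select count(*) from docs where payload @> '{"age": 20}';
```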
Cheers,
Jeff
rk_mem (roughly 48GB).
>
I don't think that this is the true cause of the problem. In current
versions of PostgreSQL, VACUUM cannot make use of more than 1GB of
process-local memory, even if maintenance_work_mem is set to a far greater
value.
Cheers,
Jeff
with t as (
    select booking0_.*
    from booking booking0_
    where booking0_.customer_id in (
        select customer1_.id
        from customer customer1_
        where lower(customer1_.first_name || ' ' || customer1_.last_name) like '%gatef%'
    )
)
select * from t order by id desc limit 30;
Cheers,
Jeff
Then ponder if it is safe to use that much
work_mem "for real" given your RAM and level of concurrent access.
Cheers,
Jeff
't be part of the
text[]).
The performance of these options will depend on both the nature of your
data and the nature of your queries.
Cheers,
Jeff
or write a fancy archive_command which first archives the files
to a local directory, and then transfers them in chunks to the slave. Or
maybe use streaming rather than file shipping.
Cheers,
Jeff
rap around vacuums
kick in, then all future autovacuum workers are directed to that one
database, starving all other databases of auto-vacuuming. But that doesn't
sound like what you are describing.
Cheers,
Jeff
imits, etc. don't come into play, which I don't think they do here) for
all the different ways of writing the query.
Since that is not happening, the planner must not be able to prove that the
different queries are semantically identical to each other, which means
that it can't pick the other plan no matter how good the estimates look.
Cheers,
Jeff
On Thu, Sep 29, 2016 at 11:48 AM, Sven R. Kunze wrote:
> On 29.09.2016 20:03, Jeff Janes wrote:
>
> Perhaps some future version of PostgreSQL could do so, but my gut feeling
> is that that is not very likely. It would take a lot of work, would risk
> breaking or slowing down ot
On Thu, Sep 22, 2016 at 11:35 PM, Sven R. Kunze wrote:
> Thanks a lot Madusudanan, Igor, Lutz and Jeff for your suggestions.
>
> What I can confirm is that the UNION ideas runs extremely fast (don't have
> access to the db right now to test the subquery idea, but will check
it has no compunction about commanding that it
be read and written, in its entirety, quite often.
Cheers,
Jeff
t but input to my test. Takers?
>
Go through and put one row (or 8kB worth of rows) into each of 8 million
tables. The stats collector and the autovacuum process will start going
nuts. Now, maybe you can deal with it. But maybe not. That is the first
non-obvious thing I'd look at.
Cheers,
Jeff
> Is there anything significant I am missing in my reasoning?
>
If you use a reasonably modern version of PostgreSQL (say, >=9.4), the
overhead of having 1000s of tables should not be too large of a problem.
When you get into the 100,000 range, it is likely to start being a
problem. If you get to 1,000,000, you almost certainly have a problem.
Cheers,
Jeff
indexes on the provided where clauses.
>
> Postgres can do a Bitmap heap scan to combine indexes, there is no need to
> fire two separate queries.
>
It can't combine bitmap scans that come from different tables.
But he can just combine the two queries into one, with a UNION.
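A sketch of that combination, with hypothetical tables and columns:

```sql
-- Instead of two separate queries merged client-side:
select id, label from table_a where a_col = 42
union
select id, label from table_b where b_col = 42;

-- UNION ALL skips the deduplication step; use it when duplicates
-- across the two branches are impossible or acceptable.
```

Each branch can still use its own table's index via a bitmap scan.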
Cheers,
Jeff
the indexes which are being rapidly dirtied
exceeds shared_buffers + (some kernel specific factor related
to dirty_background_bytes and kin)
If you think this is the problem, you could try violating the conventional
wisdom by setting shared_buffers to 80-90% of available RAM, rather than
20-25%.
Cheers,
Jeff
ggregates and storing
them in a materialized view (not available in 9.2). Also, more RAM and
better hard-drives can't hurt.
Cheers,
Jeff
his run time could easily be proportional to N^2, if he aggregates more
rows and each one of them is less likely to be a cache hit.
Cheers,
Jeff
On Thu, Aug 18, 2016 at 11:55 AM, Victor Yegorov wrote:
> 2016-08-18 18:59 GMT+03:00 Jeff Janes :
>>
>> Both plans touch the same pages. The index scan just touches some of
>> those pages over and over again. A large setting of
>> effective_cache_size would tell
ery.
Also, with a random_page_cost of 2.5, you are telling it that even
cold pages are not all that cold.
What are the correlations of the is_current column to the ctid order,
and of the loan_id column to the ctid order?
Cheers,
Jeff
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
8:57 2015 -0400
Speed up HeapTupleSatisfiesMVCC() by replacing the XID-in-progress test.
I am not entirely sure why this (as opposed to the previous-mentioned
4162a55c77cbb54) would fix a problem occurring during BIND, though.
Cheers,
Jeff
ved by the checkpointer. The
logic is quite complex and it can be very frustrating trying to
predict exactly when any given file will get removed. You might want
to run a few manual checkpoints to see if that cleans it up. But turn
on log_checkpoints and reload the configuration first.
Cheers,
J
t write / flush without postgres, I made another C
> program, to open a file in writing, and for 1000 times : write 256 bytes and
> flush them (using fsync in linux and FlushFileBuffers in win).
> Win7: 200 write/sec
> Linux: 100 write/sec
Rather than rolling your own program, can you
On Wed, Jun 22, 2016 at 9:36 PM, Craig James wrote:
> On Wed, Jun 22, 2016 at 11:36 AM, Jeff Janes wrote:
>> You might be able to build a multiple column index on (smiles,
>> version_id) and have it do the right thing automatically. Whether that
>> is possible, and if so
possibly contain (according to the CHECK
constraints) the version_ids of interest in the query.
Also, if you tune your system using benzene, you will probably
arrive at a place not optimal for more realistic queries.
Cheers,
Jeff
EB5C8
>
> 2016-06-17 12:45:05 BST LOG: duration: 0.000 ms
>
>
>
> I am guessing these statements are causing an overhead while inserting
> records in to the table.
The fact that there are 3 seconds between the release of one savepoint
and the start of the next suggests that yo
bwcyiuzx9mhz-kc...@mail.gmail.com
I hope to give him some help if I get a chance.
Cheers,
Jeff
p Index Scan on integrity_synonym_synonym_idx
(cost=0.00..16.10 rows=13 width=0) (actual time=8.847..8.847 rows=99
loops=1)
Index Cond: (synonym ~~ '%BAT%'::text)
Planning time: 18.261 ms
Execution time: 10.932 ms
So it is using the index for the positive match, and filter
ed with the table ordering, then the
parts of the table that need to be visited will be much denser than if
there were no correlation. But Claudio is saying that this is not
being accounted for.
Cheers,
Jeff
had another
operator, say %%%, with a hard-coded cutoff which paid no attention to
the set_limit(). I'm not really sure how the planner would deal with
that, though.
Cheers,
Jeff
just choose that but I'm reluctant to
> turn off nested loops in case the table gets a lot bigger.
A large hash join just needs to divide it up into batches. It should
still be faster than the nested loop (as currently implemented),
until you run out of temp space.
But, you already have a sol
= 8192
> lseek(802, 905101312, SEEK_SET) = 905101312
> read(802, "c"..., 8192) = 8192
> lseek(801, 507863040, SEEK_SET) = 507863040
> read(801, "p"..., 8192) = 8192
> lseek(802, 914235392, SEEK_SET
ut the planner is imperfect, and they will have different
estimated selectivities which could easily tip the planner into making
a poor choice for the more selective case. Without seeing the plans,
it is hard to say much more.
Cheers,
Jeff
is would do better than a full scan. It
would be interesting to test that.
Cheers,
Jeff
way to go, although the construction
of the queries would then be more cumbersome, especially if you
do it by hand.
I think the only way to know for sure is to write a few scripts to benchmark it.
Cheers,
Jeff
cceptable to have a 1.5 second query time? There has to be
> lots of other companies out there that manage much more data that needs to
> be sorted for the presentation layer.
There are lots of approaches to solving it. One is realizing that
none of your customers are actually interested i
On Tue, Mar 22, 2016 at 9:41 AM, Oleg Bartunov wrote:
>
>
> On Sat, Mar 19, 2016 at 5:44 AM, Jeff Janes wrote:
>>
>>
>> I don't see why it would not be possible to create a new execution node
>> type that does an index scan to obtain order (or just to sati
is nonsensical.
You could try to increase cpu_tuple_cost to 0.015 or 0.02
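For example (the query and big_table are placeholders for the workload under discussion; ALTER SYSTEM requires 9.4+):

```sql
set cpu_tuple_cost = 0.02;  -- default is 0.01; affects this session only
explain analyze select * from big_table where id < 1000;

-- If it consistently helps across the workload, persist it:
alter system set cpu_tuple_cost = 0.02;
select pg_reload_conf();
```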
Cheers,
Jeff
database. What does `top`, or
`perf top`, show you about what is going on?
Can you produce a simple perl or python script that reproduces the problem?
Cheers,
Jeff
scan to obtain order (or just to satisfy an
equality or range expression), and takes a bitmap (as produced by the
FTS/GIN) to apply as a filter. But, I don't know of anyone planning on
doing that.
Cheers,
Jeff
om that, and tell you that the reason that PostgreSQL is not
using more memory is that it doesn't think that using more memory
would help.
Cheers,
Jeff
going to be written sequentially with few fsyncs.
That is ideal for HDD. Even if you also have smaller transactions,
WAL is still sequentially written as long as you have a non-volatile
cache on your RAID controller which can absorb fsyncs efficiently.
Cheers,
Jeff
on't understand the question. What are the two tasks you are
referring to? Do you have multiple COPY running at the same time in
different processes?
Cheers,
Jeff
> Cause in theory, if I gave it a id>100 LIMIT 100, it might just as well
> return me results 150 to 250, instead of 100 to 200...
Can you use a method that maintains state (cursor with fetching, or
temporary storage) so that it doesn't have to recalculate the query
for each page?
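A sketch of the cursor approach (items is a hypothetical table):

```sql
begin;

declare page_cur cursor for
    select * from items order by id;

fetch 100 from page_cur;  -- page 1
fetch 100 from page_cur;  -- page 2 continues where the cursor left off

close page_cur;
commit;
```

The cursor holds its position server-side, so later pages do not re-execute the query; the trade-off is keeping a transaction (or a WITH HOLD cursor) open between page fetches.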
d to
it (and whatever is needed to set up the schema)?
Thanks,
Jeff
On Thu, Dec 10, 2015 at 8:38 PM, ankur_adwyze wrote:
> Hi Folks,
>
> I am a newbie to this mailing list. Tried searching the forum but didn't
> find something similar to the problem I am facing.
>
> Background:
> I have a Rails app with Postgres db. For certain reports, I have to join
> multiple t
total query selectivity.
> Or is it just likely that the selection of the new index is just by chance?
Bingo.
Cheers,
Jeff
type, best, block, flag, "row", seat,
> recnum) // (2908 MB)
Why does the index seats_index02 exist in the first place? It looks
like an index designed for the benefit of a single query. In which
case, could the flag column be moved up front? That should prevent it
from looki
207874kB"
A lot of that time was probably spent reading the data off of disk so
that it could hash it.
You should turn track_io_timing on, run "explain (analyze, buffers)
..." and then show the entire explain output, or at least also show
the entries downstream of the Hash node.
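A sketch of that procedure (the join is a placeholder for the query under discussion):

```sql
set track_io_timing = on;  -- per-session here; can also go in postgresql.conf

explain (analyze, buffers)
select b.*
from big_table b
join small_table s using (id);
-- With track_io_timing on, the Buffers lines gain an
-- "I/O Timings: read=..." entry showing time spent waiting on reads,
-- which separates disk time from hashing time.
```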
Che
n at a time, you could unnest ts_vector
and store it in a table like (ts_token text, id_iu bigint). Then
build a regular btree index on (ts_token, id_iu) and get index-only
scans (once you upgrade from 9.1 to something newer)
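A sketch of that layout, assuming the source table is docs(id_iu bigint, tsv tsvector) and a server version with tsvector_to_array (9.6+):

```sql
-- One row per (token, document) pair:
create table doc_tokens as
select unnest(tsvector_to_array(tsv)) as ts_token, id_iu
from docs;

create index on doc_tokens (ts_token, id_iu);
vacuum analyze doc_tokens;  -- set the visibility map for index-only scans

-- Can now be satisfied by an index-only scan:
select id_iu from doc_tokens where ts_token = 'postgres';
```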
Cheers,
Jeff
is not in memory. You need more
memory, or some way to keep your memory pinned with what you need. If
you are on a RAID, you could also increase effective_io_concurrency,
which lets the bitmap scan prefetch table blocks.
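For example (the value is illustrative; a common starting point is the number of data drives in the array):

```sql
-- Number of concurrent prefetch requests a bitmap heap scan may issue;
-- can be set per-session, per-tablespace, or in postgresql.conf:
set effective_io_concurrency = 4;
```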
Cheers,
Jeff
robably have to rewrite it into C. But that would be a drag, and I
would try just throwing more CPU at it first.
Cheers,
Jeff
'sum' function which takes array arguments.
> select sum('{1,2,3,4,5,6}'::int[]);
ERROR: function sum(integer[]) does not exist
Are you using a user defined function? If so, how did you define it?
Cheers,
Jeff
On Mon, Oct 12, 2015 at 11:17 AM, Shaun Thomas wrote:
> Hi guys,
>
> I've been doing some design investigation and ran into an interesting snag
> I didn't expect to find on 9.4 (and earlier). I wrote a quick python script
> to fork multiple simultaneous COPY commands to several separate tables an
epending on a lot of things like the size of the
index, the size of ram and shared_buffers, the number of spindles in your
RAID, the amount of parallelization in your insert/update activity, and the
distribution of "keys" among the data you are inserting/updating.
Cheers,
Jeff
ement does discard
statements sometimes, discarding the counts with them. You would have to
set pg_stat_statements.max to a higher value than you ever expect to be
reached.
Cheers,
Jeff
On Wed, Sep 2, 2015 at 7:00 AM, Volker Böhm wrote:
>
>
> CREATE INDEX trgm_adresse ON adressen.adresse USING gist
> (normalize_string((btrimnormalize_stringCOALESCE((vorname)::text,
> ''::text) || ' '::text) || (name1)::text))::character varying,
> (-1)))::text || ' '::text) || (nor
each retrieved constant value:
explain (analyze,buffers) select routes.route from routes where route && $1
Does each one take about the same amount of time, or are there some outlier
values which take much more time than the others?
Cheers,
Jeff
On Mon, Aug 24, 2015 at 8:18 AM, Guo, Yun wrote:
>
>
> From: Jeff Janes
> Date: Friday, August 21, 2015 at 10:44 PM
> To: Yun
> Subject: Re: [PERFORM] query not using GIN index
>
> On Fri, Aug 21, 2015 at 6:55 PM, Guo, Yun wrote:
>
>> Hi,
>>
>> We
On Fri, Aug 14, 2015 at 9:54 AM, Jeff Janes wrote:
> On Fri, Aug 14, 2015 at 9:34 AM, Josh Berkus wrote:
>
>>
>> On 08/13/2015 01:59 PM, Jeff Janes wrote: execute on the
>>
>> > Once the commit of the whole-table update has replayed, the problem
>> &g
On Fri, Aug 14, 2015 at 9:34 AM, Josh Berkus wrote:
>
> On 08/13/2015 01:59 PM, Jeff Janes wrote: execute on the
>
> > Once the commit of the whole-table update has replayed, the problem
> > should go way instantly because at that point each backend doing the
> >
of the whole-table update is delayed because the
startup process is also getting bogged down on the same contended lock? I
don't know how hard WAL replay hits the proc array lock.
Cheers,
Jeff
generator, it is pretty easy to spin up a
bunch of idle connections and see what happens on your own hardware with
your own workload and your own version of PostgreSQL.
Cheers,
Jeff
On Fri, Jul 24, 2015 at 2:40 PM, Laurent Debacker
wrote:
The Recheck Cond line is a plan-time piece of info, not a run-time piece.
> It only tells you what condition is going to be rechecked if a recheck is
> found to be necessary.
Thanks Jeff! That makes sense indeed.
>
> I'
appreciated. I'm running PostgreSQL 9.4.1.
>
The Recheck Cond line is a plan-time piece of info, not a run-time piece.
It only tells you what condition is going to be rechecked if a recheck is
found to be necessary.
It doesn't indicate how many times it was found to be necessary to do
the recheck. Presumably that number was zero.
Cheers,
Jeff