>
Those are different queries, so it is not terribly surprising it might
choose a different plan.
For this type of comparison, you need to compare identical queries,
including parameters.
Cheers,
Jeff
nce
between -1-abalance and 1+abalance ) >0;
In my original query I just wrapped the whole thing in another select, so
that I could use the alias rather than having to mechanically repeat the
entire subquery again in the HAVING section. They give identical plans.
Cheers,
Jeff
ed to do
anything special to avoid dual execution?
Cheers,
Jeff
e at first, but getting slows down after couple of
> hours,
> each insert query takes 3000+ ms and keep growing.
>
If it takes a couple hours for it to slow down, then it sounds like you
have a leak somewhere in your code.
Run "top" and see who is using the CPU time (or the io wait time, if that
is what it is, and the memory)
Cheers,
Jeff
there are 762599 rows (unless login is null for all of
them) but the above is estimating there is only one. When was the table
last analyzed?
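Something like this will show when statistics were last gathered, and will
refresh them (the table name here is just a stand-in for yours):
select relname, last_analyze, last_autoanalyze
from pg_stat_user_tables
where relname = 'your_table';
analyze your_table;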
Cheers,
Jeff
On Sat, Aug 19, 2017 at 10:37 AM, anand086 <anand...@gmail.com> wrote:
> I am a Postgres Newbie and trying to learn :) We ha
On Tue, Aug 15, 2017 at 3:06 AM, Mariel Cherkassky <
mariel.cherkas...@gmail.com> wrote:
> Hi,
> So I I run the cheks that jeff mentioned :
> \copy (select * from oracle_remote_table) to /tmp/tmp with binary - 1 hour
> and 35 minutes
> \copy local_postresql_table from /tmp/tm
/tmp/tmp with binary
\copy local_postresql_table from /tmp/tmp with binary
Cheers,
Jeff
imsy. I tried to ALTER ... SET
> LOGGED, but that takes a VERY long time and pretty much negates the initial
> performance boost of loading into an unlogged table.
>
Are you using streaming or wal logging?
Cheers,
Jeff
On Wed, Jul 12, 2017 at 3:04 AM, Charles Nadeau <charles.nad...@gmail.com>
wrote:
> Jeff,
>
> Here are the 2 EXPLAINs for one of my simplest query:
>
It looks like dstexterne and flowcompact are both views over flow. Can you
share the definition of those views?
I think
wise, rather than random reads.
> but he also isn't going to get 1100 IOPS from 4 10k disks. The average 10k
> disk is going to get around 130 IOPS. If he only has 4 then there is no
> way he is getting 1100 IOPS.
>
I wouldn't be sure. He is using an iodepth of 256 in his benchmark. It
wouldn't be all that outrageous for a disk to be able to find 3 or 4
sectors per revolution it can read, when it has that many to choose from.
Cheers,
Jeff
On Tue, Jul 11, 2017 at 4:02 AM, Charles Nadeau <charles.nad...@gmail.com>
wrote:
> Jeff,
>
> I used fio in a quick benchmarking script inspired by https://smcleod.net/
> benchmarking-io/:
>
> #!/bin/bash
> #Random throughput
> echo "Random throughput"
That doesn't seem right. Sequential is only 43% faster? What job file are
you giving to fio?
What do you get if you do something simpler, like:
time cat $PGDATA/base/16402/* | wc -c
replacing 16402 with whatever your biggest database is.
Cheers,
Jeff
On Thu, Jun 29, 2017 at 12:11 PM, Yevhenii Kurtov <yevhenii.kur...@gmail.com
> wrote:
> Hi Jeff,
>
> That is just a sample data, we are going live in Jun and I don't have
> anything real so far. Right now it's 9.6 and it will be a latest stable
> available release on the
priority desc, times_failed) should speed this up massively.
Might want to include status at the end as well. However, your example data
is not terribly realistic.
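To be concrete about the index, something roughly like this (the table name
is made up; adjust to your schema):
create index on job_queue (priority desc, times_failed, status);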
What version of PostgreSQL are you using?
Cheers,
Jeff
take 4 seconds to transfer this much data over a UNIX
> socket on the same box?
>
It has to convert the data to a format used for the wire protocol (hardware
independent, and able to support user defined and composite types), and
then back again.
> work_mem = 100MB
Can you give it more than that? How many simultaneous connections do you
expect?
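If raising it globally is a concern, you can also raise it just for the
session or query in question, e.g.:
set work_mem = '500MB';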
Cheers,
Jeff
server?
>
> As long as you are able to compile your own version of Postgres and
> your distribution does not allow that, there is nothing preventing you
> to do so.
>
But there is something preventing it. wal_segsize cannot exceed 64MB in
9.2. v10 will be the first version which will allow sizes above 64MB.
Cheers,
Jeff
that is
probably too small for what you are doing.
But if you are not using COPY, then maybe none of this matters as the
bottleneck will be elsewhere.
Cheers,
Jeff
better estimates there by doing something like:
a.mapping_code+0 = b.mapping_code+0 AND a.channel=b.channel
(or using ||'' rather than +0 if the types are textual rather than
numerical). I doubt it would make enough of a difference to change the
plan, but it is an easy thing to try.
Cheers,
Jeff
On Tue, May 23, 2017 at 4:03 AM, Gunnar "Nick" Bluth <
gunnar.bluth.ext...@elster.de> wrote:
> Am 05/22/2017 um 09:57 PM schrieb Jeff Janes:
> >
> > create view view2 as select id,
> >   (
> >     select md5 from thing_alias where thing_id=id
> >     order by priority desc limit 1
> >   ) as md5,
> >   cutoff from thing;
Cheers,
Jeff
also being in pg_stat_statements, but
it is not a sure thing and there is some risk the name got freed and reused.
log_min_duration_statement has the same issue.
Cheers,
Jeff
On Mon, May 15, 2017 at 3:22 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Jeff Janes <jeff.ja...@gmail.com> writes:
> > I've tried versions 9.6.3 and 10dev, and neither do what I expected. It
> > doesn't seem to be a planning problem where it thinks the fast plan is
consider the faster plans as being options
at all. Is there some setting to make it realize the cast is shippable?
Is any of the work being done on postgres_fdw for V11 working towards
fixing this?
Cheers,
Jeff
drop database foobar;
create database foobar;
\c foobar
CREATE EXTENSION IF NOT EXIS
_and_object_type suggests that
you are not analyzing (and so probably also not vacuuming) often enough.
Cheers,
Jeff
On Mon, Mar 6, 2017 at 8:46 AM, twoflower <standa.ku...@gmail.com> wrote:
> Thank you Jeff.
>
> There are 7 million rows satisfying fk_id_client = 20045. There is an
> index on fk_id_client, now I added a composite (fk_id_client, id) index but
> that did not help.
>
n I help Postgres execute the query with *asc* ordering as fast as
> the one with *desc*?
>
You probably can't. Your data is well suited to one, and ill suited to
the other. You can probably make it faster than it currently is, but not
as fast as the DESC version.
Cheers,
Jeff
On Thu, Mar 2, 2017 at 1:19 PM, Sven R. Kunze <srku...@mail.de> wrote:
> On 01.03.2017 18:04, Jeff Janes wrote:
>
> On Wed, Mar 1, 2017 at 6:02 AM, Sven R. Kunze <srku...@mail.de> wrote:
>
>> On 28.02.2017 17:49, Jeff Janes wrote:
>>
>> Oh. In my ha
is times faster than the
execution you found in the log file? Was the top output you showed in the
first email happening at the time the really slow query was running, or was
that from a different period?
Cheers,
Jeff
xt, that also gets poorly estimated. Also, if you make
both columns of both tables integers, the same thing happens--you get bad
estimates when the join condition refers to one column and the WHERE clause
refers to the other. I don't know why the estimate is poor, but it is not
related to the types of the columns, but rather to the identities of them.
Cheers,
Jeff
On Wed, Mar 1, 2017 at 6:02 AM, Sven R. Kunze <srku...@mail.de> wrote:
> On 28.02.2017 17:49, Jeff Janes wrote:
>
> Oh. In my hands, it works very well. I get 70 seconds to do the {age:
> 20} query from pure cold caches, versus 1.4 seconds from cold caches which
> was f
On Tue, Feb 28, 2017 at 12:27 AM, Sven R. Kunze <srku...@mail.de> wrote:
> On 27.02.2017 19:22, Jeff Janes wrote:
>
> If by 'permanently', you mean even when you intentionally break things,
> then no. You will always be able to intentionally break things. There is
>
ut, what is it? If you reboot the server
frequently, maybe you can just throw 'select pg_prewarm...' into an init
script?
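For example, assuming the relation you care about is called big_table:
create extension if not exists pg_prewarm;
select pg_prewarm('big_table');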
Cheers,
Jeff
the REINDEX should not have been necessary, just the ALTER
EXTENSION UPDATE should do the trick. Rebuilding a large GIN index can be
pretty slow.
Cheers,
Jeff
On Mon, Jan 23, 2017 at 9:43 AM, Simon Riggs <si...@2ndquadrant.com> wrote:
> On 23 January 2017 at 17:12, Jeff Janes <jeff.ja...@gmail.com> wrote:
>
> >> Just to make sure anyone reading the mailing list archives isn't
> >> confused, running pg_start_ba
and timestamp.
Which unfortunately obliterates much of the point of using rsync for many
people. You can still save on bandwidth, but not on local IO on each end.
Cheers,
Jeff
it is an essential step for people who run with full_page_writes=off, as it
ensures that anything in base which got changed mid-copy will be fixed up
during replay of the WAL.
Cheers,
Jeff
rows, but now all of them
get rejected upon the recheck.
You could try changing the type of index to jsonb_path_ops. In your given
example, it won't make a difference, because you are actually counting half
the table and so half the table needs to be rechecked. But in my example,
jsonb_path_ops successfully rejects all the rows at the index stage.
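A sketch of that, assuming the jsonb column is called data on a table
called docs:
create index on docs using gin (data jsonb_path_ops);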
Cheers,
Jeff
vacuum_max_workers *
> maintenance_work_mem (roughly 48GB).
>
I don't think that this is the true cause of the problem. In current
versions of PostgreSQL, VACUUM cannot make use of more than 1GB of
process-local memory, even if maintenance_work_mem is set to a far greater
value.
Cheers,
Jeff
with t as
(select booking0_.*
from booking booking0_
where booking0_.customer_id in (
select customer1_.id
from customer customer1_
where lower((customer1_.first_name||' '||customer1_.last_name)) like '%gatef%'
)
) select * from t order by t.id desc limit 30;
Cheers,
Jeff
ns. Then ponder if it is safe to use that much
work_mem "for real" given your RAM and level of concurrent access.
Cheers,
Jeff
part of the
text[]).
The performance of these options will depend on both the nature of your
data and the nature of your queries.
Cheers,
Jeff
WAL
file sizes, or write a fancy archive_command which first archives the files
to a local directory, and then transfers them in chunks to the slave. Or
maybe use streaming rather than file shipping.
Cheers,
Jeff
Once wraparound vacuums
kick in, then all future autovacuum workers are directed to that one
database, starving all other databases of auto-vacuuming. But that doesn't
sound like what you are describing.
Cheers,
Jeff
the same inefficient plan (providing the join collapse
limits, etc. don't come into play, which I don't think they do here) for
all the different ways of writing the query.
Since that is not happening, the planner must not be able to prove that the
different queries are semantically identical to each other, which means
that it can't pick the other plan no matter how good the estimates look.
Cheers,
Jeff
On Thu, Sep 29, 2016 at 11:48 AM, Sven R. Kunze <srku...@mail.de> wrote:
> On 29.09.2016 20:03, Jeff Janes wrote:
>
> Perhaps some future version of PostgreSQL could do so, but my gut feeling
> is that that is not very likely. It would take a lot of work, would risk
> bre
On Thu, Sep 22, 2016 at 11:35 PM, Sven R. Kunze <srku...@mail.de> wrote:
> Thanks a lot Madusudanan, Igor, Lutz and Jeff for your suggestions.
>
> What I can confirm is that the UNION ideas runs extremely fast (don't have
> access to the db right now to test the subquery idea, b
to be pretty small, and so it has no compunction about commanding that it
be read and written, in its entirety, quite often.
Cheers,
Jeff
t to my test. Takers?
>
Go through and put one row (or 8kB worth of rows) into each of 8 million
tables. The stats collector and the autovacuum process will start going
nuts. Now, maybe you can deal with it. But maybe not. That is the first
non-obvious thing I'd look at.
Cheers,
Jeff
lain would be helpful.
> Are there indexes on the provided where clauses.
>
> Postgres can do a Bitmap heap scan to combine indexes, there is no need to
> fire two separate queries.
>
It can't combine bitmap scans that come from different tables.
But he can just combine the two queries into one, with a UNION.
Cheers,
Jeff
formance once the portion of the indexes which are being rapidly dirtied
exceeds shared_buffers + (some kernel specific factor related
to dirty_background_bytes and kin)
If you think this is the problem, you could try violating the conventional
wisdom by setting shared_buffers to 80% to 90% of available RAM, rather than
20% to 25%.
Cheers,
Jeff
precomputing the aggregates and storing
them in a materialized view (not available in 9.2). Also, more RAM and
better hard-drives can't hurt.
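On 9.3 or later, a minimal sketch of that approach (all names invented)
could look like:
create materialized view daily_totals as
select date_trunc('day', created) as day, count(*) as n, sum(amount) as total
from orders
group by 1;
-- then periodically:
refresh materialized view daily_totals;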
Cheers,
Jeff
real world systems do not operate at the infinite
limit.
So his run time could easily be proportional to N^2, if he aggregates more
rows and each one of them is less likely to be a cache hit.
Cheers,
Jeff
On Thu, Aug 18, 2016 at 11:55 AM, Victor Yegorov <vyego...@gmail.com> wrote:
> 2016-08-18 18:59 GMT+03:00 Jeff Janes <jeff.ja...@gmail.com>:
>>
>> Both plans touch the same pages. The index scan just touches some of
>> those pages over and over again. A large s
uery.
Also, with a random_page_cost of 2.5, you are telling it that even
cold pages are not all that cold.
What are the correlations of the is_current column to the ctid order,
and of the loan_id column to the ctid order?
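One rough way to check, assuming the table is called loans:
select attname, correlation
from pg_stats
where tablename = 'loans' and attname in ('is_current', 'loan_id');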
Cheers,
Jeff
<t...@sss.pgh.pa.us>
Date: Wed Aug 26 18:18:57 2015 -0400
Speed up HeapTupleSatisfiesMVCC() by replacing the XID-in-progress test.
I am not entirely sure why this (as opposed to the previously mentioned
4162a55c77cbb54) would fix a problem occurring during BIND, though.
Cheers,
Jeff
Successfully archived files are only removed by the checkpointer. The
logic is quite complex and it can be very frustrating trying to
predict exactly when any given file will get removed. You might want
to run a few manual checkpoints to see if that cleans it up. But turn
on log_checkpoints and reload the confi
t postgres, I made another C
> program, to open a file in writing, and for 1000 times : write 256 bytes and
> flush them (using fsync in linux and FlushFileBuffers in win).
> Win7: 200 write/sec
> Linux: 100 write/sec
Rather than rolling your own program, can you run pg_test_fsync on eac
On Wed, Jun 22, 2016 at 9:36 PM, Craig James <cja...@emolecules.com> wrote:
> On Wed, Jun 22, 2016 at 11:36 AM, Jeff Janes <jeff.ja...@gmail.com> wrote:
>> You might be able to build a multiple column index on (smiles,
>> version_id) and have it do the right t
can possibly contain (according to the CHECK
constraints) the version_ids of interest in the query.
Also, if you tune your system using benzene, you will probably
arrive at a place that is not optimal for more realistic queries.
Cheers,
Jeff
e savepoint
the start of the next suggests that your client, not the server, is
the dominant bottleneck.
Cheers,
Jeff
zczyxloxs874ad0+s-zm60u9bwcyiuzx9mhz-kc...@mail.gmail.com
I hope to give him some help if I get a chance.
Cheers,
Jeff
synonym_idx
(cost=0.00..16.10 rows=13 width=0) (actual time=8.847..8.847 rows=99 loops=1)
Index Cond: (synonym ~~ '%BAT%'::text)
Planning time: 18.261 ms
Execution time: 10.932 ms
So it is using the index for the positive match, and filtering those
results for the negative match, just
opposed to =ANY scan), and
the index lead column is correlated with the table ordering, then the
parts of the table that need to be visited will be much denser than if
there were no correlation. But Claudio is saying that this is not
being accounted for.
Cheers,
Jeff
tension so that it had another
operator, say %%%, with a hard-coded cutoff which paid no attention to
the set_limit(). I'm not really sure how the planner would deal with
that, though.
Cheers,
Jeff
choose that but I'm reluctant to
> turn off nested loops in case the table gets a lot bigger.
A large hash join just needs to divide it up into batches. It should
still be faster than the nested loop (as currently implemented),
until you run out of temp space.
But, you already have a solution
= 905101312
> read(802, "c"..., 8192) = 8192
> lseek(801, 507863040, SEEK_SET) = 507863040
> read(801, "p"..., 8192) = 8192
> lseek(802, 914235392, SEEK_SET) = 914235392
> read(802, "c"..., 8192)
d they will have different
estimated selectivities which could easily tip the planner into making
a poor choice for the more selective case. Without seeing the plans,
it is hard to say much more.
Cheers,
Jeff
eptical this would do better than a full scan. It
would be interesting to test that.
Cheers,
Jeff
might be the way to go, although the construction
of the queries would then be more cumbersome, especially if you will
do it by hand.
I think the only way to know for sure is to write a few scripts to benchmark it.
Cheers,
Jeff
On Tue, Mar 22, 2016 at 9:41 AM, Oleg Bartunov <obartu...@gmail.com> wrote:
>
>
> On Sat, Mar 19, 2016 at 5:44 AM, Jeff Janes <jeff.ja...@gmail.com> wrote:
>>
>>
>> I don't see why it would not be possible to create a new execution node
>> type tha
Setting random_page_cost to less than seq_page_cost is nonsensical.
You could try increasing cpu_tuple_cost to 0.015 or 0.02.
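For example, just in your session to start with:
set cpu_tuple_cost = 0.015;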
Cheers,
Jeff
ble to create a new execution node
type that does an index scan to obtain order (or just to satisfy an
equality or range expression), and takes a bitmap (as produced by the
FTS/GIN) to apply as a filter. But, I don't know of anyone planning on
doing that.
Cheers,
Jeff
I'd like to take a step
back from that, and tell you that the reason that PostgreSQL is not
using more memory is that it doesn't think that using more memory
would help.
Cheers,
Jeff
WAL is going to be written sequentially with few fsyncs.
That is ideal for HDD. Even if you also have smaller transactions,
WAL is still sequentially written as long as you have a non-volatile
cache on your RAID controller which can absorb fsyncs efficiently.
Cheers,
Jeff
?
> Cause in theory, if I gave it a id>100 LIMIT 100, it might just as well
> return me results 150 to 250, instead of 100 to 200...
Can you use a method that maintains state (cursor with fetching, or
temporary storage) so that it doesn't have to recalculate the query
for each page?
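A minimal sketch of the cursor approach (table name and page size made up):
begin;
declare page_cur cursor for select * from items order by id;
fetch 100 from page_cur;  -- page 1
fetch 100 from page_cur;  -- page 2 picks up where the last fetch stopped
close page_cur;
commit;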
Cheers,
ly work in parallel ?
I don't understand the question. What are the two tasks you are
referring to? Do you have multiple COPY running at the same time in
different processes?
Cheers,
Jeff
d the sql file you feed to
it (and whatever is needed to set up the schema)?
Thanks,
Jeff
On Thu, Dec 10, 2015 at 8:38 PM, ankur_adwyze wrote:
> Hi Folks,
>
> I am a newbie to this mailing list. Tried searching the forum but didn't
> find something similar to the problem I am facing.
>
> Background:
> I have a Rails app with Postgres db. For certain reports, I have
, but not from the total query selectivity.
> Or is it just likely that the selection of the new index is just by chance?
Bingo.
Cheers,
Jeff
UNIQUE, btree (show, type, best, block, flag, "row", seat,
> recnum) // (2908 MB)
Why does the index seats_index02 exist in the first place? It looks
like an index designed for the benefit of a single query. In which
case, could the flag column be moved up front? That should prevent it
fro
atches: 1 Memory Usage: 207874kB"
A lot of that time was probably spent reading the data off of disk so
that it could hash it.
You should turn track_io_timing on, run "explain (analyze, buffers)
..." and then show the entire explain output, or at least also show
the entries downstr
tor token at a time, you could unnest ts_vector
and store it in a table like (ts_token text, id_iu bigint). Then
build a regular btree index on (ts_token, id_iu) and get index-only
scans (once you upgrade from 9.1 to something newer)
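A rough sketch, assuming the source table is docs(id_iu bigint, tsv tsvector);
note that unnest(tsvector) in this form needs 9.6 or later:
create table doc_tokens as
select t.lexeme as ts_token, d.id_iu
from docs d, unnest(d.tsv) as t;
create index on doc_tokens (ts_token, id_iu);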
Cheers,
Jeff
more
memory, or some way to keep your memory pinned with what you need. If
you are on a RAID, you could also increase effective_io_concurrency,
which lets the bitmap scan prefetch table blocks.
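For example, a value roughly equal to the number of drives in the stripe is
a common starting point:
set effective_io_concurrency = 4;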
Cheers,
Jeff
you would
probably have to rewrite it into C. But that would be a drag, and I
would try just throwing more CPU at it first.
Cheers,
Jeff
s not ship with any 'sum' function which takes array arguments.
> select sum('{1,2,3,4,5,6}'::int[]);
ERROR: function sum(integer[]) does not exist
Are you using a user defined function? If so, how did you define it?
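I don't know what yours looks like, but a common hand-rolled version is
something like:
create function sum(int[]) returns bigint as $$
select sum(x)::bigint from unnest($1) as x
$$ language sql immutable;
select sum('{1,2,3,4,5,6}'::int[]);  -- 21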
Cheers,
Jeff
On Mon, Oct 12, 2015 at 11:17 AM, Shaun Thomas wrote:
> Hi guys,
>
> I've been doing some design investigation and ran into an interesting snag
> I didn't expect to find on 9.4 (and earlier). I wrote a quick python script
> to fork multiple simultaneous COPY commands to
n a lot of things like the size of the
index, the size of ram and shared_buffers, the number of spindles in your
RAID, the amount of parallelization in your insert/update activity, and the
distribution of "keys" among the data you are inserting/updating.
Cheers,
Jeff
Cheers,
Jeff
On Mon, Aug 24, 2015 at 8:18 AM, Guo, Yun y...@cvent.com wrote:
From: Jeff Janes jeff.ja...@gmail.com
Date: Friday, August 21, 2015 at 10:44 PM
To: Yun y...@cvent.com
Subject: Re: [PERFORM] query not using GIN index
On Fri, Aug 21, 2015 at 6:55 PM, Guo, Yun y...@cvent.com wrote:
Hi
On Fri, Aug 14, 2015 at 9:54 AM, Jeff Janes jeff.ja...@gmail.com wrote:
On Fri, Aug 14, 2015 at 9:34 AM, Josh Berkus j...@agliodbs.com wrote:
On 08/13/2015 01:59 PM, Jeff Janes wrote: execute on the
Once the commit of the whole-table update has replayed, the problem
should go away instantly because at that point each backend doing the
seqscan will find
contended lock? I
don't know how hard WAL replay hits the proc array lock.
Cheers,
Jeff
of PostgreSQL.
Cheers,
Jeff
On Fri, Jul 24, 2015 at 2:40 PM, Laurent Debacker deback...@gmail.com
wrote:
The Recheck Cond line is a plan-time piece of info, not a run-time piece.
It only tells you what condition is going to be rechecked if a recheck is
found to be necessary.
Thanks Jeff! That makes sense indeed.
I'm
. Presumably that number was zero.
Cheers,
Jeff
in a pattern around the
beginning and end of a checkpoint.
I'd also set up vmstat to run continuously capturing output to a logfile
with a timestamp, which can later be correlated to the postgres log file
entries.
Cheers,
Jeff
in the query plan in order to figure out why it is doing
that.
It seems like the tval_char IN ('DP') part of the restriction is
very selective, while the other two restrictions are not.
Cheers,
Jeff
idx_scan is incremented each time the planner checks whether an
index could be used, even when it won't use it?
Not in general, only in a few peculiar cases.
Cheers,
Jeff
to improve this query?
What indexes do the tables have? What is the output of EXPLAIN, or better
yet EXPLAIN (ANALYZE,BUFFERS), for the query?
Cheers,
Jeff
On Tue, Apr 14, 2015 at 8:41 AM, Yves Dorfsman y...@zioup.com wrote:
On 2015-04-13 17:49, Jeff Janes wrote:
One way would be to lock dirty buffers from unlogged relations into
shared_buffers (which hardly seems like a good thing) until the start of a
super-checkpoint and then write them