I have the following queries:
EXPLAIN (ANALYZE, VERBOSE, COSTS, BUFFERS, TIMING)
select[…]
from f_calc_service a11,
d_patient_type a12
where a11.d_patient_pop_id in (336)
and
2016-03-16 21:23 GMT+01:00 Doiron, Daniel :
> I have the following queries:
>
> EXPLAIN (ANALYZE, VERBOSE, COSTS, BUFFERS, TIMING)
> select[…]
> from f_calc_service a11,
> d_patient_type a12
>
Hi All,
I have been looking for a solution to a problem where my query is executing for
a long time because it is running into a nested loop problem.
I have done explain analyze and it shows the query taking a very long time due
to nested loops.
On the DB side, there are indices in place for
REPLACE -- where sos_stg_aggr.tid_stg in (select distinct lehr_stg_ab_tid from
lehr_stg_ab2fb)
WITH -- where sos_stg_aggr.tid_stg EXISTS (select distinct lehr_stg_ab_tid from
lehr_stg_ab2fb)
Similarly others also like -- lehr_stg_ab.tid not in (select lehr_stg_ab_tid
from lehr_stg_ab2fb) with
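The rewrite being attempted above can be sketched as follows; the correlated EXISTS form is an assumption on my part, since the original predicates are truncated. Note that DISTINCT inside IN is redundant, and that `x EXISTS (...)` is not valid SQL (EXISTS takes no left-hand operand):

```sql
-- Original form:
--   where sos_stg_aggr.tid_stg in (select distinct lehr_stg_ab_tid from lehr_stg_ab2fb)
-- Equivalent correlated EXISTS form:
select sos_stg_aggr.*
from sos_stg_aggr
where exists (select 1
              from lehr_stg_ab2fb
              where lehr_stg_ab2fb.lehr_stg_ab_tid = sos_stg_aggr.tid_stg);

-- The anti-join variant ("not in") is usually safer written as NOT EXISTS,
-- which also sidesteps the NULL-handling surprises of NOT IN:
select lehr_stg_ab.*
from lehr_stg_ab
where not exists (select 1
                  from lehr_stg_ab2fb
                  where lehr_stg_ab2fb.lehr_stg_ab_tid = lehr_stg_ab.tid);
```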
On Fri, Jan 25, 2013 at 7:34 AM, alexandre - aldeia digital
adald...@gmail.com wrote:
Hi,
Last weekend, we upgraded a PG from 8.4 to 9.2 (full
pg_dump/restore/vacuum/analyze). After this, some simple join queries
became very slow, maybe because of the use of nested loops. Below, an
On 25-01-2013 16:29, Jeff Janes wrote:
On Fri, Jan 25, 2013 at 7:34 AM, alexandre - aldeia digital
adald...@gmail.com wrote:
Hi,
Last weekend, we upgraded a PG from 8.4 to 9.2 (full
pg_dump/restore/vacuum/analyze). After this, some simple join queries
became very slow, maybe because
Hello,
On 26/07/09 7:09, Greg Caulton wrote:
On Sun, Jul 26, 2009 at 1:02 AM, Greg Caulton caulton...@gmail.com wrote:
Hello,
It seems to me that the following query should be a lot faster.
This runs in 17 seconds (regardless how many times I
Hello,
It seems to me that the following query should be a lot faster. This runs
in 17 seconds (regardless how many times I run it)
select ac.* from application_controls_view ac, refs r where
ac.custom_controller_ref_id = r.ref_id and r.ref_key like '%XYZ%';
if I do not use the view the query
On Sun, Jul 26, 2009 at 1:02 AM, Greg Caulton caulton...@gmail.com wrote:
Hello,
It seems to me that the following query should be a lot faster. This runs
in 17 seconds (regardless how many times I run it)
select ac.* from application_controls_view ac, refs r where
Dave North dno...@signiant.com writes:
The outstanding question here is why does the explain analyze take
(quite a bit) longer than just executing the query?
EXPLAIN ANALYZE has nontrivial measurement overhead, especially on
platforms with slow gettimeofday(). Old/cheap PC hardware, in
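One way to bound that overhead, if the server is 9.2 or later: keep ANALYZE's row counts but drop the per-node clock reads. The table name below is a stand-in, not from the original report:

```sql
-- Row and loop counts without per-node timestamps:
EXPLAIN (ANALYZE, TIMING OFF, BUFFERS)
SELECT count(*) FROM some_table;  -- some_table is a placeholder
```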
Dave,
Is there further optimizations we can do to change the plan?
Is this perhaps addressed in a later release?
Given the left joins, a later release might help; I know we did a lot to
improve left join plans in 8.3. It would be worth testing if you can
test an upgrade easily.
--
Josh
On Wed, Jun 24, 2009 at 1:43 PM, Dave North dno...@signiant.com wrote:
Essentially, we're seeing a query plan that is taking 95 secs with a nested
loop execution plan and 1 sec with a merge join plan. We've tried
increasing the default_statistics_target to 1000 and re-analyzed but the
same
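A global default_statistics_target of 1000 is a blunt instrument; the target can instead be raised only on the columns whose estimates are off, followed by a fresh ANALYZE. The names below are placeholders, not from the original report:

```sql
-- Per-column statistics target, then a fresh sample:
ALTER TABLE some_table ALTER COLUMN join_key SET STATISTICS 1000;
ANALYZE some_table;
```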
On 06/25/2009 04:36 PM, Greg Stark wrote:
AND
web_user_property_directory_outbox.prop_key like
'location_node_directory_outbox'
Why use like for a constant string with no % or _ characters? If you
used = the planner might be able to come up with a better estimate
Any reason why
Greg Stark gsst...@mit.edu writes:
On Wed, Jun 24, 2009 at 1:43 PM, Dave North dno...@signiant.com wrote:
Why use like for a constant string with no % or _ characters? If you
used = the planner might be able to come up with a better estimate.
Uh, it appears to me the string *does* contain _
On Thu, Jun 25, 2009 at 10:05 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Uh, it appears to me the string *does* contain _ characters; perhaps the
OP has neglected to escape those?
Sigh. Indeed.
--
greg
http://mit.edu/~gsstark/resume.pdf
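Both suggestions from the exchange above, sketched against the predicate quoted earlier (the backslash escape assumes standard_conforming_strings is on; with it off, the backslashes would need doubling):

```sql
-- Plain equality: no pattern-selectivity guesswork at all.
SELECT *
FROM web_user_property_directory_outbox
WHERE prop_key = 'location_node_directory_outbox';

-- If LIKE must stay, escape the underscores so they match literally
-- instead of acting as single-character wildcards:
SELECT *
FROM web_user_property_directory_outbox
WHERE prop_key LIKE 'location\_node\_directory\_outbox';
```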
Morning all,
A colleague here tried to post this yesterday but it was stalled for
some reason. Anyway, here's what we're seeing which hopefully someone
has some pointers for.
Essentially, we're seeing a query plan that is taking 95 secs with a
nested loop execution plan and 1 sec with a
Of Brad Ediger
Sent: 22 August 2008 16:26
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Nested Loop join being improperly chosen
Hello,
I'm having trouble with a Nested Loop being selected for a rather
complex query; it turns out this is a pretty bad plan as the nested
loop's row estimates
Hello,
I'm having trouble with a Nested Loop being selected for a rather
complex query; it turns out this is a pretty bad plan as the nested
loop's row estimates are quite off (1 estimated / 1207881 actual). If
I disable enable_nestloop, the query executes much faster (42 seconds
instead
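For reference, the enable_nestloop experiment mentioned above is a session-local diagnostic, not a production fix; it confirms the misestimate rather than curing it. The query shape below is a placeholder:

```sql
SET enable_nestloop = off;            -- discourages nested-loop joins for this session
EXPLAIN ANALYZE
SELECT * FROM t1 JOIN t2 USING (id);  -- t1/t2 stand in for the real query
RESET enable_nestloop;                -- back to the default
```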
I've come across this issue while writing a report-like query for 2 not
very large tables. I've tried several methods to resolve this one (see
below). But now I'm really stuck...
PostgreSQL 8.3, default configuration
There are 2 tables (structure was simplified to show only problematic
place):
Vlad Arkhipov [EMAIL PROTECTED] writes:
I've come across this issue while writing a report-like query for 2 not
very large tables. I've tried several methods to resolve this one (see
below). But now I'm really stuck...
It looks like you are wishing to optimize for all-in-memory situations,
in
Hi,
here is the query
SELECT rs.id AS sid, rs.name AS sname, rc.id AS campid, rc.name AS campname,
rc.rev_type AS revtype, rc.act_type AS actntype, ra.id AS advid, ra.name AS
advname, rpt_chn.id AS chanid, rpt_chn.name AS channame, rpt_cre.dn AS dn,
SUM(rm.imdel) AS impression, SUM(rm.cdel) AS
On Tue, 2007-03-27 at 16:13 +0530, Gauri Kanekar wrote:
SELECT rs.id AS sid, rs.name AS sname, rc.id AS campid, rc.name AS
campname, rc.rev_type AS revtype, rc.act_type AS actntype, ra.id AS
advid, ra.name AS advname, rpt_chn.id AS chanid, rpt_chn.name AS
channame, rpt_cre.dn AS dn,
Hi List,
how to speed up nested loop queries, and by which parameters?
--
Regards
Gauri
On Mon, Mar 26, 2007 at 05:34:39PM +0530, Gauri Kanekar wrote:
how to speed up nested loop queries, and by which parameters?
Please post a query you're trying to tune and the EXPLAIN ANALYZE
output, as well as any changes you've already made in postgresql.conf
or configuration variables you've set
Sorry,
these are the config settings:
max_connections = 100 # (change requires restart)
shared_buffers = 300MB
work_mem = 256MB
max_fsm_pages = 40
max_fsm_relations = 500
wal_buffers = 512
checkpoint_segments = 20
checkpoint_timeout = 900
enable_bitmapscan = on
enable_seqscan =
-Original Message-
From: [EMAIL PROTECTED] On Behalf Of Gauri Kanekar
Subject: Re: [PERFORM] Nested Loop
join_collapse_limit = 1 # JOINs
Is there a reason you have this set to 1? Postgres can't consider multiple
join orders when you do that. I would try setting
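With join_collapse_limit = 1 the planner joins tables strictly in the textual order of the FROM clause and never considers alternatives. Restoring the shipped default re-enables join reordering:

```sql
SET join_collapse_limit = 8;   -- shipped default; lets the planner reorder explicit JOINs
SET from_collapse_limit = 8;   -- companion setting for subqueries flattened into FROM
```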
On Mon, 2007-03-26 at 20:33 +0530, Gauri Kanekar wrote:
you did not show your query, nor did you answer whether you had vacuumed
and analyzed.
enable_seqscan = off
why this? this is unlikely to help
QUERY PLAN
...
->  Nested Loop  (cost=0.00..1104714.83
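If enable_seqscan was switched off while experimenting, putting it back before comparing plans avoids chasing artifacts of the override:

```sql
RESET enable_seqscan;   -- restore the default (on)
SHOW enable_seqscan;    -- confirm
```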
On 5/2/06, Tom Lane [EMAIL PROTECTED] wrote:
Ian Burrell [EMAIL PROTECTED] writes:
We recently upgraded to PostgreSQL 8.1 from 7.4 and a few queries are
having performance problems and running for very long times. The
commonality seems to be PostgreSQL 8.1 is choosing to use a nested
loop
We recently upgraded to PostgreSQL 8.1 from 7.4 and a few queries are
having performance problems and running for very long times. The
commonality seems to be PostgreSQL 8.1 is choosing to use a nested
loop join because it estimates there will only be a single row.
There are really thousands
Ian Burrell [EMAIL PROTECTED] writes:
We recently upgraded to PostgreSQL 8.1 from 7.4 and a few queries are
having performance problems and running for very long times. The
commonality seems to be PostgreSQL 8.1 is choosing to use a nested
loop join because it estimates there will only be
On Sat, 2005-09-17 at 17:47 +0200, Antoine Bajolet wrote:
There are more tables around, but the heart of the search engine is
made of three tables :
fiches (f_id int4, f_title varchar) 52445 rows
engine (f_id int4, k_id int4, weight) 11761700 rows
keywords(k_id, keyword)
Antoine Bajolet [EMAIL PROTECTED] writes:
We are using postgresql in a search engine on an intranet handling
thousands of documents.
But we have a big problem when users use more than two search keys.
I think you need to increase the statistics targets for your keywords
table --- the estimates
Hello,
Tom Lane wrote:
Antoine Bajolet [EMAIL PROTECTED] writes:
We are using postgresql in a search engine on an intranet handling
thousands of documents.
But we have a big problem when users use more than two search keys.
I think you need to increase the statistics targets for
Re,
After modifying parameters like this:
ALTER TABLE keywords ALTER keyword SET STATISTICS 100;
ALTER TABLE keywords ALTER k_id SET STATISTICS 100;
ALTER TABLE engine ALTER k_id SET STATISTICS 100;
ALTER TABLE engine ALTER f_id SET STATISTICS 100;
vacuuming both tables
and rewriting the
Hello,
We are using postgresql in a search engine on an intranet handling
thousands of documents.
But we have a big problem when users use more than two search keys.
There are more tables around, but the heart of the search engine is
made of three tables :
fiches (f_id int4, f_title
On Tuesday 16 December 2003 17:06, Nick Fankhauser - Doxpop wrote:
Hi-
I'm trying to optimize a query that I *think* should run very fast.
Essentially, I'm joining two tables that have very selective indexes and
constraining the query on an indexed field. (There's a third small lookup
table
It seems that your basic problem is that you're fetching lots of rows
from two big ol' tables.
It doesn't seem to me that there would be a substantially better plan
for this query with your tables as they stand.
That's more or less the conclusion I had come to. I was just hoping someone
else
Hi-
I'm trying to optimize a query that I *think* should run very fast.
Essentially, I'm joining two tables that have very selective indexes and
constraining the query on an indexed field. (There's a third small lookup
table in the mix, but it doesn't really affect the bottom line.)
actor is a
On Tue, 16 Dec 2003, Nick Fankhauser wrote:
Is there a more efficient means than a nested loop to handle such a join?
Would a different method be chosen if there was exactly one row in
actor_summary for every row in actor?
As a question, what does explain analyze give you if you set