On Thu, Jun 20, 2013 at 10:14 PM, Maciek Sakrejda m.sakre...@gmail.com wrote:
On Thu, Jun 20, 2013 at 9:13 PM, bricklen brick...@gmail.com wrote:
On Thu, Jun 20, 2013 at 6:24 PM, Maciek Sakrejda m.sakre...@gmail.com wrote:
SELECT
DISTINCT ON (type) ts, type, details
FROM
observations
On Fri, Jun 21, 2013 at 9:08 AM, bricklen brick...@gmail.com wrote:
Did you try an index on (type, ts desc) ? I don't have much else to add at
this point, but maybe after posting some more server and table (parent and
child) details someone will have an answer for you.
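bricklen's suggestion above, sketched in SQL. The index name and the per-partition DDL are assumptions: with the inheritance-based partitioning of this era, the index has to be created on each child partition, and the partition naming here is hypothetical.

```sql
-- Composite index matching the ORDER BY (type, ts DESC) of the
-- DISTINCT ON query; create one per child partition.
CREATE INDEX observations_20130620_type_ts_idx
    ON observations_20130620 (type, ts DESC);
```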
No, this is exactly
I'm trying to optimize a query on a partitioned table. The schema looks
like this:
CREATE TABLE observations(
ts timestamptz NOT NULL DEFAULT now(),
type text NOT NULL,
subject uuid NOT NULL,
details json NOT NULL
);
The table is partitioned by ts (right now I have ~300 1h partitions,
On Thu, Jun 20, 2013 at 6:24 PM, Maciek Sakrejda m.sakre...@gmail.com wrote:
SELECT
DISTINCT ON (type) ts, type, details
FROM
observations
WHERE
subject = '...'
ORDER BY
type, ts DESC;
First thing: What is your work_mem set to, and how much RAM is in the
machine? If you look at
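A minimal way to inspect and adjust the setting asked about above; the 64MB figure is purely illustrative, not a recommendation.

```sql
SHOW work_mem;            -- current value for this session
SET work_mem = '64MB';    -- per-session override while testing a sort-heavy query
```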
Hi all ;
I'm trying to tune a difficult query.
I have 2 tables:
cust_acct (9 million rows)
cust_orders (200,000 rows)
Here's the query:
SELECT
a.account_id, a.customer_id, a.order_id, a.primary_contact_id,
a.status, a.customer_location_id, a.added_date,
o.agent_id,
Hello
please, send EXPLAIN ANALYZE output instead.
Regards
Pavel Stehule
2011/10/11 CS DBA cs_...@consistentstate.com:
Hi all ;
I'm trying to tune a difficult query.
I have 2 tables:
cust_acct (9 million rows)
cust_orders (200,000 rows)
Here's the query:
SELECT
a.account_id,
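For reference, EXPLAIN ANALYZE (as Pavel requests) executes the query and reports actual times and row counts next to the planner's estimates, which plain EXPLAIN lacks. A sketch using the column names quoted in this thread:

```sql
EXPLAIN ANALYZE
SELECT a.account_id, a.customer_id, o.agent_id
FROM cust_acct a
JOIN cust_orders o ON a.order_id = o.order_id;
```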
On 11 October 2011 19:52, CS DBA cs_...@consistentstate.com wrote:
Hi all ;
I'm trying to tune a difficult query.
I have 2 tables:
cust_acct (9 million rows)
cust_orders (200,000 rows)
Here's the query:
SELECT
a.account_id, a.customer_id, a.order_id, a.primary_contact_id,
On 10/11/2011 12:02 PM, Pavel Stehule wrote:
Hello
please, send EXPLAIN ANALYZE output instead.
Regards
Pavel Stehule
2011/10/11 CS DBA cs_...@consistentstate.com:
Hi all ;
I'm trying to tune a difficult query.
I have 2 tables:
cust_acct (9 million rows)
cust_orders (200,000 rows)
Here's
On 10/11/2011 12:03 PM, Szymon Guz wrote:
On 11 October 2011 19:52, CS DBA cs_...@consistentstate.com wrote:
Hi all ;
I'm trying to tune a difficult query.
I have 2 tables:
cust_acct (9 million rows)
cust_orders (200,000 rows)
Hash Join  (cost=154.46..691776.11 rows=10059626 width=100) (actual time=5.191..37551.360 rows=10063432 loops=1)
  Hash Cond: (a.order_id = o.order_id)
  ->  Seq Scan on cust_acct a  (cost=0.00..540727.26 rows=10059626 width=92) (actual time=0.022..18987.095 rows=10063432 loops=1)
  ->
Can anyone please help me in tuning the query?
explain
select *
from (select * from crmentity where deleted = 0 and createdtime between
(now() - interval '6 month') and now() ) as crmentity
inner join (select * from activity where activitytype = 'Emails' and
date_start between (now() - interval
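The two derived tables can usually be written as one flat join; the planner generally flattens such subselects anyway. A hedged rewrite using the column names shown; the join key below is hypothetical, since the original query is cut off before its join condition.

```sql
SELECT *
FROM crmentity c
JOIN activity a ON a.activityid = c.crmid   -- join key assumed, not from the thread
WHERE c.deleted = 0
  AND c.createdtime BETWEEN now() - interval '6 month' AND now()
  AND a.activitytype = 'Emails'
  AND a.date_start >= now() - interval '6 month';
```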
On 06/14/2010 05:41 AM, AI Rumman wrote:
Can anyone please help me in tuning the query?
explain
select *
from (select * from crmentity where deleted = 0 and createdtime between
(now() - interval '6 month') and now() ) as crmentity
inner join (select * from activity where activitytype =
2009/8/19 Kevin Kempter kev...@consistentstate.com
We do have an index on url_hits.time
not sure why timestamps were not used, I was not here for the design phase.
What's the type of the time column? I don't like that it casts it to double in the explain.
If it is an integer, maybe you need to change
and
Hi all;
we've been fighting this query for a few days now. we bumped up the statistics
target for the a.id, c.url_hits_id and the b.id columns below to 250 and ran
an analyze on the relevant tables. we killed it after 8 hrs.
Note the url_hits table has 1.4 billion rows
Any suggestions?
that seems to be the killer:
and time >= extract ('epoch' from timestamp '2009-08-12')
and time < extract ('epoch' from timestamp '2009-08-13' )
You probably need an index on time/epoch:
CREATE INDEX foo ON table ((extract('epoch' from time)));
or something like that, vacuum analyze and
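A syntactically complete form of the index sketched above: PostgreSQL requires an extra pair of parentheses around an indexed expression. The index names are hypothetical, and which index applies depends on the column's actual type, which the thread leaves unclear.

```sql
-- Only useful if "time" is a timestamp-without-time-zone column:
CREATE INDEX url_hits_epoch_idx ON url_hits ((extract('epoch' from time)));

-- If "time" is already an integer epoch (as later messages suggest),
-- a plain B-tree index serves the range predicate directly:
CREATE INDEX url_hits_time_idx ON url_hits (time);
```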
2009/8/19 Grzegorz Jaśkiewicz gryz...@gmail.com
that seems to be the killer:
and time >= extract ('epoch' from timestamp '2009-08-12')
and time < extract ('epoch' from timestamp '2009-08-13' )
You probably need an index on time/epoch:
CREATE INDEX foo ON table(extract ('epoch' from
On Wednesday 19 August 2009 11:17:26 Scott Carey wrote:
On 8/19/09 9:28 AM, Kevin Kempter kev...@consistentstate.com wrote:
Hi all;
we've been fighting this query for a few days now. we bumped up the
statistics target for the a.id, c.url_hits_id and the b.id columns below
to 250 and
On Wednesday 19 August 2009 11:31:30 Nikolas Everett wrote:
2009/8/19 Grzegorz Jaśkiewicz gryz...@gmail.com
that seems to be the killer:
and time >= extract ('epoch' from timestamp '2009-08-12')
and time < extract ('epoch' from timestamp '2009-08-13' )
You probably need an index on
Subject: Re: [PERFORM] Query tuning
Subbiah, Stalin wrote:
Actually these servers will be upgraded to 8.1.4 in couple of months.
even so, you could get some bad data in there.
http://www.postgresql.org/docs/8.0/static/release.html . Go through the
old release notes and you'll find various race
From: [EMAIL PROTECTED] On Behalf Of Subbiah, Stalin
Sent: Wednesday, August 23, 2006 1:03 PM
To: Chris
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Query tuning
I get the same plan after running vacuum analyze. Nope, I don't have an
index on objdomainid, objid or userdomainid
Subject: RE: [PERFORM] Query tuning
It seems to me that what would work best is an index scan backward on
the eventtime index. I don't see why that wouldn't work for you; maybe
the planner is just estimating that the seq scan and sort is faster for
some reason.
What does EXPLAIN
Hello All,
This query runs forever and ever. Given the nature of this table (lots of
inserts/deletes/queries), I vacuum it every half hour to keep the holes
reusable, and run a nightly vacuum analyze to update the optimizer. We've
got an index on eventtime only. Running it for the current day uses an index range
Subbiah, Stalin wrote:
Hello All,
This query runs forever and ever. Given the nature of this table (lots of
inserts/deletes/queries), I vacuum it every half hour to keep the holes
reusable, and run a nightly vacuum analyze to update the optimizer. We've
got an index on eventtime only. Running it for the current
) OR ((userdomainid)::text =
'tzRh39d0d91luNGT1weIUjLvFIcA'::text)))
Total runtime: 437884.134 ms
(6 rows)
-Original Message-
From: Chris [mailto:[EMAIL PROTECTED]
Sent: Tuesday, August 22, 2006 6:37 PM
To: Subbiah, Stalin
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Query
Subbiah, Stalin wrote:
Actually these servers will be upgraded to 8.1.4 in couple of months.
even so, you could get some bad data in there.
http://www.postgresql.org/docs/8.0/static/release.html . Go through the
old release notes and you'll find various race conditions, crashes etc.
Here
Hi Dan,
I tried to understand your query, but I couldn't get my understanding of
the query and your description in sync.
Why do you use sub selects? Wouldn't a simple "recordtext like '%RED%'"
do the trick too?
You combine all your where conditions with and. To me this looks like
you get only
In article [EMAIL PROTECTED],
Dan Harris [EMAIL PROTECTED] writes:
On May 8, 2005, at 8:06 PM, Josh Berkus wrote:
If I were to use tsearch2 for full-text indexing, would I need to
create another table that merges all of my recordtext rows into a
single 'text' field type?
No. Read the
Quoting Russell Smith [EMAIL PROTECTED]:
On Mon, 9 May 2005 11:49 am, Dan Harris wrote:
On May 8, 2005, at 6:51 PM, Russell Smith wrote:
[snip]
select distinct em.incidentid, ea.recordtext as retdata, eg.long, eg.lat
FROM em
JOIN ea ON em.incidentid = ea.incidentid --- slight
Sorry to bother everyone with yet another "my query isn't using an
index" problem, but I am over my head on this one. I am open to ways
of restructuring this query to perform better.
I have a table, 'ea', with 22 million rows in it. VACUUM ANALYZE has
just been run on the table.
This is the
On Mon, 9 May 2005 09:20 am, Dan Harris wrote:
Sorry to bother everyone with yet another "my query isn't using an
index" problem, but I am over my head on this one. I am open to ways
of restructuring this query to perform better.
I have a table, 'ea', with 22 million rows in it. VACUUM
Dan,
and ( recordtext like '%RED%' or recordtext like '%CORVETTE%' )
It is simply not possible to use B-tree indexes on this kind of text query.
B-trees require you to start at the left side of the field, because B-trees
locate records via >, <, = tests. Anywhere in the field text
Russell Smith [EMAIL PROTECTED] writes:
On Mon, 9 May 2005 09:20 am, Dan Harris wrote:
and ( recordtext like '%RED%' or recordtext like '%CORVETTE%' )
You cannot use an index for %CORVETTE%, or %RED%.
Not a btree index anyway. Dan might have some success here with a
full-text-indexing
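What a full-text approach looks like, per the suggestion above. This sketch uses the tsvector syntax built into modern PostgreSQL rather than the contrib/tsearch2 module this 2005 thread refers to; the table and column names follow the thread.

```sql
-- Index the words inside recordtext rather than its leading characters:
CREATE INDEX ea_recordtext_fts
    ON ea USING gin (to_tsvector('english', recordtext));

-- Matches rows containing both terms, in any position:
SELECT incidentid
FROM ea
WHERE to_tsvector('english', recordtext) @@ to_tsquery('english', 'red & corvette');
```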
On May 8, 2005, at 6:51 PM, Russell Smith wrote:
On Mon, 9 May 2005 09:20 am, Dan Harris wrote:
You cannot use an index for %CORVETTE%, or %RED%. There is no way
for the index to know if a row had that in the middle without scanning
the whole
index. So it's much cheaper to do a sequence scan.
Dan,
While I believe you, I'm confused by this line in my original EXPLAIN
ANALYZE:
->  Index Scan using ea1 on ea  (cost=0.00..2736.43 rows=42 width=47) (actual time=2.085..2.309 rows=2 loops=473)
      Index Cond: ((ea.incidentid)::text =
Dan Harris [EMAIL PROTECTED] writes:
->  Index Scan using ea1 on ea  (cost=0.00..2736.43 rows=42 width=47) (actual time=2.085..2.309 rows=2 loops=473)
      Index Cond: ((ea.incidentid)::text = ("outer".incidentid)::text)
      Filter: (((recordtext)::text ~~ '%RED%'::text) OR
On May 8, 2005, at 8:06 PM, Josh Berkus wrote:
If I were to use tsearch2 for full-text indexing, would I need to
create another table that merges all of my recordtext rows into a
single 'text' field type?
No. Read the OpenFTS docs, they are fairly clear on how to set up a
simple
FTS index.
On Mon, 9 May 2005 11:49 am, Dan Harris wrote:
On May 8, 2005, at 6:51 PM, Russell Smith wrote:
[snip]
select distinct em.incidentid, ea.recordtext as retdata, eg.long,
eg.lat
FROM em JOIN ea ON (em.incidentid = ea.incidentid AND em.entrydate >= '2005-1-1 00:00'
AND em.entrydate
On May 8, 2005, at 8:32 PM, Russell Smith wrote:
I have run this, and while it is very fast, I'm concerned it's not
doing what I need.
How fast is very fast?
It took 35 seconds to complete versus ~450 my old way.
select distinct em.incidentid, ea.recordtext as retdata, eg.long,
eg.lat
FROM em
On Sun, 8 May 2005 20:31:38 -0600, Dan Harris [EMAIL PROTECTED] wrote:
Duly noted. If this method can search across rows, I'm willing to
accept this overhead for the speed it would add.
You could use intersect to search across rows. Using tsearch2 will look
up the RED and CORVETTE using the
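The INTERSECT idea above, sketched with the thread's table and column names: each branch finds the incidentids matching one term, and the intersection keeps incidents even when the two terms appear in different rows.

```sql
SELECT incidentid FROM ea WHERE recordtext LIKE '%RED%'
INTERSECT
SELECT incidentid FROM ea WHERE recordtext LIKE '%CORVETTE%';
```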
Hi,
I have a problem with the below query. When I do explain on the
below query on my live database it doesn't use any index specified on the
tables and does a seq scan on the table, which has 400k records. But if I
copy the same table onto a different database on a different machine it
uses
Tom Lane wrote:
Pallav Kalva [EMAIL PROTECTED] writes:
I have a problem with the below query. When I do explain on the
below query on my live database it doesn't use any index specified on the
tables and does a seq scan on the table, which has 400k records. But if I
copy the same table onto