Hello,
The data is distributed such that all but 120,000 rows have NULL
values in the _revision_expired column.
A shot in the dark - will a partial index on the above column help?
http://www.postgresql.org/docs/current/interactive/indexes-partial.html
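If queries only ever care about the non-null rows, a sketch of such a partial index might look like the following (the table name is taken from the plan output later in this thread; the index name is made up):

```sql
-- Sketch only: index just the ~120,000 non-null rows, so the index stays
-- small and queries filtering on _revision_expired values can use it.
CREATE INDEX idx_crs_coord_rev_expired
    ON version_crs_coordinate_revision (_revision_expired)
    WHERE _revision_expired IS NOT NULL;
```

The planner will only consider this index when the query's WHERE clause implies the index predicate.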
What type of application do you have? You can see this in the Performance
Whack-a-Mole presentation from Josh Berkus at
PgCon 2009:
- Web application
- Online Transaction Processing (OLTP)
- Data Warehousing (DW)
Based on the type of your application, you can then configure
postgresql.conf accordingly.
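As an illustration only (these values are placeholders, not recommendations -- the right numbers depend on your RAM and workload), the settings that usually differ between OLTP/web and DW setups are:

```ini
# postgresql.conf sketch -- illustrative values, tune for your hardware
shared_buffers = 2GB          # often ~25% of RAM as a starting point
effective_cache_size = 6GB    # planner hint: OS cache + shared_buffers
work_mem = 64MB               # DW: larger; OLTP/web: smaller (per sort/hash)
maintenance_work_mem = 512MB  # bulk loads, index builds, VACUUM
```

DW workloads (few large queries) tolerate large per-operation memory; OLTP workloads (many concurrent small transactions) generally do not.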
Hello,
just coming from this thread :
http://archives.postgresql.org/pgsql-admin/2011-01/msg00050.php
It started as an admin question and turned out to be a performance question.
You may look at it for a history of this issue. I will repost all data here.
Description of the machines involved:
The query is:

SELECT DISTINCT m.id, coalesce(m.givenname,''),
       coalesce(m.midname,''),
       m.surname
FROM marinerstates ms, vessels vsl, mariner m
WHERE m.id = ms.marinerid AND ms.vslid = vsl.id
  AND ms.state = 'Active' AND coalesce(ms.endtime, now())::date = '2006-07-15'
  AND
Achilleas Mantzios wrote:
From the whole set of tests involved, it seems like the NOT IN version of
the query runs slowly
on any PostgreSQL 9.0.2 tested.
Not only that, it runs slower even on Oracle 11.2 or MySQL 5.5.
--
Mladen Gogala
Sr. Oracle DBA
1500 Broadway
New York, NY
Jeremy Palmer wrote:
WHERE (
    (_revision_created <= 16
     AND _revision_expired > 16
     AND _revision_expired <= 40)
 OR (_revision_created > 16
     AND _revision_created <= 40))
-> Bitmap Heap Scan on version_crs_coordinate_revision
(actual time=70.925..13531.720
It's just a sample.
select c.id from OneRow c join abstract a on a.id=AsInteger(c.id)
Nested Loop (cost=0.00..786642.96 rows=1 width=4) (actual
time=91021.167..119601.344 rows=1 loops=1)
Join Filter: ((a.id)::integer = asinteger((c.id)::integer))
-> Seq Scan on onerow c (cost=0.00..1.01
2011/1/17 Zotov zo...@oe-it.ru:
Jeremy Palmer jpal...@linz.govt.nz writes:
I've come to a dead end in trying to get a commonly used query to
perform better.
EXPLAIN
SELECT * FROM (
SELECT
row_number() OVER (PARTITION BY id ORDER BY _revision_created DESC)
as row_number,
*
FROM
Zotov wrote:
select c.id from OneRow c join abstract a on a.id=AsInteger(c.id)
Why SeqScan???
Because you don't have an index on AsInteger(c.id).
If your function is IMMUTABLE (each possible combination of input
values always yields the same result), and you declare it such, then
you
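A minimal sketch of that suggestion (the body of AsInteger below is a stand-in, since its real definition isn't shown in the thread -- keep your actual logic and just add the IMMUTABLE marker):

```sql
-- Declare the function IMMUTABLE so PostgreSQL may index its result.
CREATE OR REPLACE FUNCTION asinteger(v integer) RETURNS integer AS $$
    SELECT v;  -- placeholder body for illustration only
$$ LANGUAGE sql IMMUTABLE;

-- An expression index lets "a.id = asinteger(c.id)" use an index scan.
CREATE INDEX onerow_asinteger_idx ON onerow (asinteger(id));
```

Only mark a function IMMUTABLE if it genuinely always returns the same result for the same inputs; lying about volatility can produce wrong query results.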
It fits a Data Warehousing type application.
Apart from work_mem, my other parameters are pretty close to these numbers. I
had work_mem down a little because I noticed some clients were getting
out-of-memory errors with large queries that involved lots of sorting.
Thanks
Jeremy
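Since work_mem is allocated per sort/hash operation per backend, a common middle ground (a sketch with an illustrative value, not a recommendation) is a conservative server-wide setting that is raised per session for the big reports:

```sql
SET work_mem = '256MB';  -- illustrative; only for this session's large sorts
-- ... run the large sorting query here ...
RESET work_mem;          -- back to the conservative server-wide default
```

This keeps many concurrent small clients from multiplying a large allocation while still letting the heavy queries sort in memory.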
Thanks, that seems to make the query 10-15% faster :)
Cheers
jeremy
-----Original Message-----
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Tuesday, 18 January 2011 9:24 a.m.
To: Jeremy Palmer
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Possible to improve query plan?
2011/1/17 Kevin Grittner kevin.gritt...@wicourts.gov:
Pavel Stehule pavel.steh...@gmail.com writes:
it should work without a functional index - but I'm not sure about the effectiveness
As long as the function is VOLATILE, the planner can't use any
intelligent query plan. Merge or hash join both require at least
stable join keys.
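To check what volatility a function is currently declared with ('i' = immutable, 's' = stable, 'v' = volatile), the system catalog can be queried directly; asinteger below is the function from this thread:

```sql
SELECT proname, provolatile
FROM pg_proc
WHERE proname = 'asinteger';
```

CREATE FUNCTION defaults to VOLATILE when no volatility category is specified, which is why the planner falls back to a nested loop here.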
2011/1/17 Tom Lane t...@sss.pgh.pa.us:
Tom Lane wrote:
If I'm not mistaken, that's a DB2-ish locution for a query with DISTINCT
ON, i.e., you're looking for the row with the highest _revision_created for
each value of id.
It could also be a part of the Oracle vernacular. I've seen queries like
that running against Oracle RDBMS, too.
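A sketch of that DISTINCT ON formulation, equivalent in intent to the row_number()/PARTITION BY query earlier in the thread (table name taken from the plan output shown above):

```sql
-- Keep only the newest revision per id; ORDER BY must start with the
-- DISTINCT ON expression, followed by the tie-breaking sort key.
SELECT DISTINCT ON (id) *
FROM version_crs_coordinate_revision
ORDER BY id, _revision_created DESC;
```

DISTINCT ON is a PostgreSQL extension, so this form is not portable to other databases, but it often plans more cheaply than the window-function version.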