Richard,
What do you mean by summary table? Basically a cache of the query into a
table with replicated column names of all the joins? I'd probably have to
wipe out the table every minute and re-insert the data for each carrier in
the system. I'm not sure how expensive this operation would be.
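For what it's worth, a minimal sketch of that idea might look like the following; the summary table name, its extra columns, and the once-a-minute refresh are illustrative assumptions rather than anything from this thread, and the real cost depends on how many rows the join produces:

-- Build the cache table once, pre-joining shipments to their carrier and person
create table shipment_summary as
select s.*, cc.carrier_id, ctp.person_id
from shipment s
inner join carrier_code cc on s.carrier_code_id = cc.id
inner join carrier_to_person ctp on ctp.carrier_id = cc.carrier_id;

-- Refresh once a minute (e.g. from cron or the application): wipe and repopulate
truncate table shipment_summary;
insert into shipment_summary
select s.*, cc.carrier_id, ctp.person_id
from shipment s
inner join carrier_code cc on s.carrier_code_id = cc.id
inner join carrier_to_person ctp on ctp.carrier_id = cc.carrier_id;

Readers of the cache would then filter shipment_summary on carrier_id or person_id instead of re-running the five-way join.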
Ken Egervari wrote:
Josh,
...
I thought about this, but it's very important since shipment and
shipment_status are both updated in real time 24/7/365. I think I
might be able to cache it within the application for 60 seconds at
most, but it would make little difference since people tend to
Ken,
I did everything you said and my query does perform a bit better. I've
been getting speeds from 203 to 219 to 234 milliseconds now. I tried
increasing work_mem and effective_cache_size from the values you
provided, but I didn't see any more improvement. I've tried looking
John Arbash Meinel wrote:
Ken wrote:
Richard,
What do you mean by summary table? Basically a cache of the query
into a table with replicated column names of all the joins? I'd
probably have to wipe out the table every minute and re-insert the
data for each carrier in the system. I'm not sure
2) Force PG to drop the merge join via SET ENABLE_MERGEJOIN = FALSE;
Actually, it was 312 milliseconds, so it got worse.
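For reference, that experiment is session-local, so it only affects the current connection; it presumably looked roughly like this (the trimmed-down query is a stand-in for the real one, with the join condition taken from the Merge Cond shown later in the thread):

set enable_mergejoin = false;   -- planner falls back to hash or nested-loop joins
explain analyze
select s.*
from shipment s
inner join shipment_status cs on s.current_status_id = cs.id;
reset enable_mergejoin;         -- restore the default for this session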
Ken,
Well, I'm a bit stumped on troubleshooting the actual query since Windows'
poor time resolution makes it impossible to trust the actual execution times.
Obviously this is something we need to look into for the Win32 port for
8.1.
shared_buffers = 1000
This may be slowing up that
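The fragment breaks off there, but for context: shared_buffers = 1000 is the old 8 kB-page default (roughly 8 MB) and can only be raised in postgresql.conf followed by a restart, whereas the other two knobs discussed in this thread can be tried per session before re-running the query. The values below are illustrative guesses, not recommendations:

set work_mem = 16384;               -- measured in kB on 8.0, so about 16 MB per sort/hash
set effective_cache_size = 32768;   -- measured in 8 kB pages, about 256 MB of assumed OS cache
explain analyze
select s.*
from shipment s
inner join carrier_code cc on s.carrier_code_id = cc.id;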
Josh,
I did everything you said and my query does perform a bit better. I've been
getting speeds from 203 to 219 to 234 milliseconds now. I tried increasing
work_mem and effective_cache_size from the values you provided, but
I didn't see any more improvement. I've tried looking
Josh,
Thanks so much for your comments. They are incredibly insightful and you
clearly know your stuff. It's so great that I'm able to learn so much from
you. I really appreciate it.
Do you need the interior sort? It's taking ~93ms to get 7k rows from
shipment_status, and then another 30ms
Ken Egervari wrote:
I've tried to use Dan Tow's tuning method
Who? What?
and created all the right
indexes from his diagramming method, but the query still performs
quite slow both inside the application and just inside pgadmin III.
Can anyone be kind enough to help me tune it so that it performs better in postgres?
Richard Huxton wrote:
Ken Egervari wrote:
I've tried to use Dan Tow's tuning method
Who? What?
http://www.singingsql.com/
Dan has written some remarkable papers on SQL tuning. Some of it is pretty complex, but his book
SQL Tuning is an excellent resource.
Bricklen Anderson wrote:
Richard Huxton wrote:
Ken Egervari wrote:
I've tried to use Dan Tow's tuning method
Who? What?
http://www.singingsql.com/
That URL is invalid for me.
First, what version of postgres, and have you run VACUUM ANALYZE recently?
Also, please attach the result of running EXPLAIN ANALYZE.
(e.g., explain analyze select s.* from shipment ...)
I'm using postgres 8.0. I wish I could paste explain analyze, but I won't
be at work for a few days. I was
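Concretely, the request amounts to something like this on 8.0 (only two of the tables are shown; every table in the join would get the same treatment):

vacuum analyze shipment;
vacuum analyze shipment_status;
explain analyze
select s.*
from shipment s
inner join shipment_status cs on s.current_status_id = cs.id;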
On Wed, 2005-03-02 at 01:51 -0500, Ken Egervari wrote:
select s.*
from shipment s
inner join carrier_code cc on s.carrier_code_id = cc.id
inner join carrier c on cc.carrier_id = c.id
inner join carrier_to_person ctp on ctp.carrier_id = c.id
inner join person p on p.id =
select s.*
from shipment s
inner join carrier_code cc on s.carrier_code_id = cc.id
inner join carrier c on cc.carrier_id = c.id
inner join carrier_to_person ctp on ctp.carrier_id = c.id
inner join person p on p.id = ctp.person_id
inner join shipment_status cs on
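The excerpt is cut off at the shipment_status join. Piecing together the fragments quoted elsewhere in the thread (the Merge Cond shown later is current_status_id = id), the join list presumably continues roughly as below; the WHERE clause and any ORDER BY are not visible in these excerpts, so they are omitted here:

select s.*
from shipment s
inner join carrier_code cc on s.carrier_code_id = cc.id
inner join carrier c on cc.carrier_id = c.id
inner join carrier_to_person ctp on ctp.carrier_id = c.id
inner join person p on p.id = ctp.person_id
inner join shipment_status cs on s.current_status_id = cs.id;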
On Wed, 2005-03-02 at 13:28 -0500, Ken Egervari wrote:
select s.*
from shipment s
inner join carrier_code cc on s.carrier_code_id = cc.id
inner join carrier c on cc.carrier_id = c.id
inner join carrier_to_person ctp on ctp.carrier_id = c.id
inner join person p on p.id =
left join is for eager loading so that I don't have to run a separate
query
to fetch the children for each shipment. This really does improve
performance because otherwise you'll have to make N+1 queries to the
database, and that's just too much overhead.
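To illustrate the point (the shipment_status.shipment_id column and the carrier filter below are assumptions made up for the example), the N+1 pattern issues one query for the shipments plus one per shipment for its statuses, while the eager-loading form needs a single round trip:

-- N+1: one query for the parents ...
select s.* from shipment s where s.carrier_code_id = 1;
-- ... then one query per returned shipment for its children (N more round trips)
select ss.* from shipment_status ss where ss.shipment_id = 42;

-- Eager loading: children fetched alongside the parents in one query
select s.*, ss.*
from shipment s
left join shipment_status ss on ss.shipment_id = s.id
where s.carrier_code_id = 1;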
are you saying that you are actually
It might help the planner better estimate the number of cs rows
affected. Whether this improves performance depends on whether
the best plans are sensitive to this.
I managed to try this and see if it did anything. Unfortunately, it made no
difference. It's still 250 milliseconds. It was a
Ken Egervari writes:
Okay, here is the explain analyze I managed to get from work.
What platform is this on? It seems very strange/fishy that all the
actual-time values are exact integral milliseconds.
regards, tom lane
My machine is WinXP Professional, Athlon XP 2100, but I get
I took John's advice and tried to work with sub-selects. I tried this
variation, which actually seems like it would make a difference conceptually
since it drives on the person table quickly. But to my surprise, the query
runs at about 375 milliseconds. I think it's because it's going over that
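The exact rewrite isn't shown in these excerpts, but a sub-select that "drives on the person table" might look something like this (the person id and the pushed-down IN list are assumptions for illustration):

select s.*
from shipment s
inner join shipment_status cs on s.current_status_id = cs.id
where s.carrier_code_id in (
    select cc.id
    from person p
    inner join carrier_to_person ctp on ctp.person_id = p.id
    inner join carrier c on c.id = ctp.carrier_id
    inner join carrier_code cc on cc.carrier_id = c.id
    where p.id = 1    -- hypothetical person id
);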
Ken,
I've tried to use Dan Tow's tuning method and created all the right indexes
from his diagramming method, but the query still performs quite slow both
inside the application and just inside pgadmin III. Can anyone be kind
enough to help me tune it so that it performs better in postgres? I
Ken,
- Merge Join (cost=602.54..1882.73 rows=870 width=91) (actual
time=234.000..312.000 rows=310 loops=1)
Merge Cond: (outer.current_status_id = inner.id)
Hmmm ... this merge join appears to be the majority of your execution
time at least within the resolution
Josh,
1) To determine your query order ala Dan Tow and drive off of person,
please
SET JOIN_COLLAPSE_LIMIT = 1 and then run Mark Kirkwood's version of the
query. (Not that I believe in Dan Tow ... see previous message ... but it
would be interesting to see the results.)
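In other words, something along these lines; the reordered FROM clause is only a sketch of the idea, since Mark Kirkwood's actual version of the query isn't included in these excerpts:

set join_collapse_limit = 1;   -- join strictly in the order the FROM clause is written
explain analyze
select s.*
from person p
inner join carrier_to_person ctp on ctp.person_id = p.id
inner join carrier c on c.id = ctp.carrier_id
inner join carrier_code cc on cc.carrier_id = c.id
inner join shipment s on s.carrier_code_id = cc.id
inner join shipment_status cs on cs.id = s.current_status_id;
reset join_collapse_limit;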
Unfortunately, the query