On Tue, Mar 01, 2005 at 10:46:02PM +0700, [EMAIL PROTECTED] wrote:
> I would like to know whether there is any command which the server will give
> the record ID back to the client when the client puts the data and the server
> generates an autoincrement ID for that record.
See "How do I get th
Ken Egervari wrote:
> I've tried to use Dan Tow's tuning method
Who? What?
> and created all the right indexes from his diagramming method, but the
> query still performs quite slow both inside the application and just
> inside pgadmin III. Can anyone be kind enough to help me tune it so
> that it performs
Richard Huxton wrote:
> Ken Egervari wrote:
>> I've tried to use Dan Tow's tuning method
> Who? What?
http://www.singingsql.com/
Dan has written some remarkable papers on SQL tuning. Some of it is pretty
complex, but his book "SQL Tuning" is an excellent resource.
Bricklen Anderson wrote:
> Richard Huxton wrote:
>> Ken Egervari wrote:
>>> I've tried to use Dan Tow's tuning method
>> Who? What?
> http://www.singingsql.com/
That URL is invalid for me.
--
Bruce Momjian | http://candle.pha.pa.us
pgman@candle.pha.pa.us
Ken Egervari wrote:
> I've tried to use Dan Tow's tuning method and created all the right
> indexes from his diagramming method, but the query still performs quite
> slow both inside the application and just inside pgadmin III. Can
> anyone be kind enough to help me tune it so that it performs better in
> postgres
First, what version of postgres, and have you run VACUUM ANALYZE recently?
Also, please attach the result of running EXPLAIN ANALYZE.
(eg, explain analyze select s.* from shipment ...)
I'm using postgres 8.0. I wish I could paste explain analyze, but I won't
be at work for a few days. I was hopi
Hi All,
I am wondering about the relative performance of "insert into table1 select
distinct a,b from ..." and "insert into table1 select a,b from ... group by
a,b" when querying tables of different sizes (10K, 100K, 1s, 10s, 100s of
millions of rows).
The distinct way tends to sort/unique
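The two forms being compared look like this (`table1` is from the question; `source_table` is a hypothetical stand-in for the source query):

```sql
-- Form 1: deduplicate with DISTINCT.
INSERT INTO table1
SELECT DISTINCT a, b FROM source_table;

-- Form 2: deduplicate with GROUP BY on the same columns.
INSERT INTO table1
SELECT a, b FROM source_table GROUP BY a, b;
```

Both produce the same set of rows. In PostgreSQL of this era, the planner could satisfy the GROUP BY form with a HashAggregate, whereas SELECT DISTINCT always went through a sort/unique step, so the GROUP BY form could win on large, unsorted inputs.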
On Wed, 2005-03-02 at 01:51 -0500, Ken Egervari wrote:
>
> select s.*
> from shipment s
> inner join carrier_code cc on s.carrier_code_id = cc.id
> inner join carrier c on cc.carrier_id = c.id
> inner join carrier_to_person ctp on ctp.carrier_id = c.id
> inner join person p on p.i
select s.*
from shipment s
inner join carrier_code cc on s.carrier_code_id = cc.id
inner join carrier c on cc.carrier_id = c.id
inner join carrier_to_person ctp on ctp.carrier_id = c.id
inner join person p on p.id = ctp.person_id
inner join shipment_status cs on s.current_status_id = cs.id
On Wed, 2005-03-02 at 13:28 -0500, Ken Egervari wrote:
> >> select s.*
> >> from shipment s
> >> inner join carrier_code cc on s.carrier_code_id = cc.id
> >> inner join carrier c on cc.carrier_id = c.id
> >> inner join carrier_to_person ctp on ctp.carrier_id = c.id
> >> inner join p
The left join is for eager loading, so that I don't have to run a separate
query to fetch the children for each shipment. This really does improve
performance, because otherwise you'd have to make N+1 queries to the
database, and that's just too much overhead.
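The N+1 pattern being avoided looks roughly like this; `shipment_line` is a hypothetical child table used only for illustration:

```sql
-- N+1 pattern: one query for the parents, then one per parent.
SELECT id FROM shipment;                            -- 1 query, N rows
SELECT * FROM shipment_line WHERE shipment_id = 1;  -- repeated N times

-- Eager-loading alternative: a single round trip.
-- LEFT JOIN (rather than INNER) keeps shipments that have no children.
SELECT s.*, sl.*
FROM shipment s
LEFT JOIN shipment_line sl ON sl.shipment_id = s.id;
```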
are you saying that you are actually doing
It might help the planner better estimate the number of cs rows
affected. Whether this improves performance depends on whether
the best plans are sensitive to this.
I managed to try this and see if it did anything. Unfortunately, it made no
difference. It's still 250 milliseconds. It was a good
"Ken Egervari" <[EMAIL PROTECTED]> writes:
> Okay, here is the explain analyze I managed to get from work.
What platform is this on? It seems very strange/fishy that all the
actual-time values are exact integral milliseconds.
regards, tom lane
"Ken Egervari" <[EMAIL PROTECTED]> writes:
Okay, here is the explain analyze I managed to get from work.
What platform is this on? It seems very strange/fishy that all the
actual-time values are exact integral milliseconds.
regards, tom lane
My machine is WinXP professional, athon xp 2100, but I g
"Ken Egervari" <[EMAIL PROTECTED]> writes:
>> What platform is this on? It seems very strange/fishy that all the
>> actual-time values are exact integral milliseconds.
> My machine is WinXP professional, athon xp 2100, but I get similar results
> on my Intel P4 3.0Ghz as well (which is also runn
> -----Original Message-----
> From: Tom Lane [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, March 02, 2005 4:30 PM
> To: Ken Egervari
> Cc: pgsql-performance@postgresql.org;
> [EMAIL PROTECTED]
> Subject: Re: [PERFORM] Help with tuning this query (with
> explain analyze
> finally)
>
> [...]
> Wel
Tom Lane wrote:
> "Ken Egervari" <[EMAIL PROTECTED]> writes:
>> Okay, here is the explain analyze I managed to get from work.
> What platform is this on? It seems very strange/fishy that all the
> actual-time values are exact integral milliseconds.
I always get round milliseconds on running. In fa
If so, I'd have to take the EXPLAIN ANALYZE results with a big grain of
salt, because what it's trying to do is add up a lot of
mostly-sub-millisecond intervals. What would essentially happen is that
whichever plan node had control at a particular millisecond boundary
would get charged for the whole millisecond.
I took John's advice and tried to work with sub-selects. I tried this
variation, which actually seems like it would make a difference conceptually
since it drives on the person table quickly. But to my surprise, the query
runs at about 375 milliseconds. I think it's because it's going over that
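A sketch of the kind of rewrite being described (driving off the person table first via a sub-select). The table and column names come from the joins quoted earlier in the thread, but the person filter value is hypothetical, since the real predicate isn't shown:

```sql
SELECT s.*
FROM shipment s
INNER JOIN carrier_code cc ON s.carrier_code_id = cc.id
WHERE cc.carrier_id IN (
    -- Resolve the (presumably selective) person filter first,
    -- then join outward to shipments via the carrier linkage.
    SELECT ctp.carrier_id
    FROM carrier_to_person ctp
    INNER JOIN person p ON p.id = ctp.person_id
    WHERE p.id = 355  -- hypothetical person id
);
```

Whether this beats the flat join depends on how selective the person filter is and on the row estimates the planner comes up with for the sub-select.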
Shawn,
> I can also change the schema to a certain extent, so would it be worthwhile
> to put indices on the queried tables (or refactor them) hoping the distinct
> does an index scan instead of sort... would the query planner take
> advantage of that?
Use the GROUP BY, with an index on the grouped columns.
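Concretely, something along these lines; the table and index names are illustrative:

```sql
-- A composite index on the grouped columns lets the planner consider
-- a group aggregate over an index scan instead of a full sort.
CREATE INDEX source_a_b_idx ON source_table (a, b);

INSERT INTO table1
SELECT a, b FROM source_table GROUP BY a, b;
```

Run EXPLAIN on the SELECT afterward to confirm the planner actually chose the index-based plan rather than a sort.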
Ken,
> I've tried to use Dan Tow's tuning method and created all the right indexes
> from his diagramming method, but the query still performs quite slow both
> inside the application and just inside pgadmin III. Can anyone be kind
> enough to help me tune it so that it performs better in postgres
Ken,
> -> Merge Join (cost=602.54..1882.73 rows=870 width=91)
>    (actual time=234.000..312.000 rows=310 loops=1)
>      Merge Cond: ("outer".current_status_id = "inner".id)
Hmmm ... this merge join appears to be the majority of your execution
time, at least within the resol
John,
Why are you now left joining driver and carrier code, but inner joining
shipment_status? I assume this is the *real* query that you are executing.
Well, the old and new versions are real queries. I changed the query a bit
because I noticed for some users, the listing was pulling out many di
Josh,
1) To determine your query order ala Dan Tow and drive off of person, please
SET JOIN_COLLAPSE_LIMIT = 1 and then run Mark Kirkwood's version of the
query. (Not that I believe in Dan Tow ... see previous message ... but it
would be interesting to see the results.)
Unfortunately, the query st
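The suggested experiment looks like this; with the limit set to 1, the planner joins tables in exactly the order the FROM clause lists them, so the hand-chosen join order is respected:

```sql
-- Force explicit-JOIN order to be taken literally for this session.
SET join_collapse_limit = 1;

-- ... run the reordered query here and compare EXPLAIN ANALYZE output ...

-- Restore the default so later queries get normal join reordering.
RESET join_collapse_limit;
```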
Ken Egervari wrote:
Hash IN Join (cost=676.15..1943.11 rows=14 width=91) (actual
time=250.000..328.000 rows=39 loops=1)
  Hash Cond: ("outer".carrier_code_id = "inner".id)
  -> Merge Join (cost=661.65..1926.51 rows=392 width=91) (actual
     time=250.000..328.000 rows=310 loops=1)
       Merge Cond: