> > RDTSC is a bad source of information for this kind of thing, as the
> > CPU frequency might vary.
>
> One thought that was bothering me was that if the CPU goes
> idle while waiting for disk I/O, its clock might stop or slow
> down dramatically.
> If we believed such a counter for EXPLAIN, we'd severely understate
> the cost of disk I/O.
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf
> Of Tom Lane
> Sent: 07 March 2005 22:57
> To: John A Meinel
> Cc: Dave Held; pgsql-performance@postgresql.org;
> [EMAIL PROTECTED]
> Subject: Re: [pgsql-hackers-win32]
From the Linux kernel (make menuconfig) there seem to be two new reliable
sources of timing information. Note the remark about the "Time Stamp Counter"
below. The question is, which of these (or others) are your API functions
using? I have absolutely no idea!
CONFIG_HPET_TIMER:
On Mon, Mar 07, 2005 at 09:02:38PM -0500, Tom Lane wrote:
> One thought that was bothering me was that if the CPU goes idle while
> waiting for disk I/O, its clock might stop or slow down dramatically.
> If we believed such a counter for EXPLAIN, we'd severely understate
> the cost of disk I/O.
>
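Steinar's objection can be illustrated with a toy calculation (all numbers invented): if raw cycle counts are converted to time at the CPU's nominal frequency, any stretch where the clock slows (idle, SpeedStep) silently vanishes from the measurement.

```python
NOMINAL_HZ = 2_000_000_000  # invented nominal frequency: 2 GHz

def ticks_elapsed(seconds, actual_hz):
    # what a TSC-style cycle counter accumulates over the interval
    return int(seconds * actual_hz)

# one second of real work at full speed, then one second stalled on
# disk I/O with the core clocked down to 10% of nominal
ticks = ticks_elapsed(1.0, NOMINAL_HZ) + ticks_elapsed(1.0, NOMINAL_HZ // 10)

# naive conversion back to seconds at the nominal frequency
measured = ticks / NOMINAL_HZ

# 2 s really passed, but the counter reports only 1.1 s; the missing
# 0.9 s is precisely the kind of I/O wait EXPLAIN would understate
```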
"Steinar H. Gunderson" <[EMAIL PROTECTED]> writes:
> RDTSC is a bad source of information for this kind of thing, as the CPU
> frequency might vary.
One thought that was bothering me was that if the CPU goes idle while
waiting for disk I/O, its clock might stop or slow down dramatically.
If we believed such a counter for EXPLAIN, we'd severely understate
the cost of disk I/O.
On Mon, Mar 07, 2005 at 06:11:34PM -0600, Dave Held wrote:
>> In which case using it would be a mistake, since rdtsc doesn't
>> work across processors.
> It doesn't always use RDTSC. I can't find anything authoritative on
> when it does. I would assume that it would use RDTSC when available
> and
> -Original Message-
> From: Tom Lane [mailto:[EMAIL PROTECTED]
> Sent: Monday, March 07, 2005 4:57 PM
> To: John A Meinel
> Cc: Dave Held; pgsql-performance@postgresql.org;
> [EMAIL PROTECTED]
> Subject: Re: [pgsql-hackers-win32] [PERFORM] Help with tuning this
> query (with
>
> "Dave Held" <[EMAIL PROTECTED]> writes:
>
> > > What would be really neato would be to use the rdtsc or
> > > equivalent assembly instruction where available
"Dave Held" <[EMAIL PROTECTED]> writes:
> > What would be really neato would be to use the rdtsc or
> > equivalent assembly instruction where available. Most processors
> > provide such a thing and it would give much lower overhead and much
> > more accurate answers.
> >
> > The main prob
John A Meinel <[EMAIL PROTECTED]> writes:
> Dave Held wrote:
>> There is always clock().
> My experience with clock() on win32 is that CLOCKS_PER_SEC was 1000, and
> it had a resolution of 55 clocks / s. When I just did this:
The other problem is it measures process CPU time, not elapsed time
whi
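Tom's objection to clock() (it measures process CPU time rather than elapsed time) is easy to reproduce; here is a sketch in Python, where time.process_time() plays the role of clock() and time.perf_counter() measures wall-clock intervals:

```python
import time

# time.process_time() advances only while this process burns CPU;
# time.perf_counter() measures elapsed wall time
cpu0, wall0 = time.process_time(), time.perf_counter()
time.sleep(0.2)        # blocked, like a backend waiting on disk I/O
cpu1, wall1 = time.process_time(), time.perf_counter()

wall_elapsed = wall1 - wall0   # roughly 0.2 s of real time
cpu_elapsed = cpu1 - cpu0      # near zero: sleeping consumes no CPU
```

A clock()-style timer would charge this I/O-like wait as essentially zero, which is exactly the problem for EXPLAIN-style instrumentation.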
Dave Held wrote:
There is always clock(). It's mandated by ANSI C, but my docs say
that POSIX requires CLOCKS_PER_SEC == 100 regardless of actual
timer resolution, which seems a little brain-dead to me.
__
David B. Held
My experience with clock() on win32 is that CLOCKS_PER_SEC was 1000, an
> -Original Message-
> From: Tom Lane [mailto:[EMAIL PROTECTED]
> Sent: Monday, March 07, 2005 10:39 AM
> To: John A Meinel
> Cc: Magnus Hagander; Ken Egervari; pgsql-performance@postgresql.org;
> [EMAIL PROTECTED]
> Subject: Re: [pgsql-hackers-win32] [PERFORM] Hel
> -Original Message-
> From: Greg Stark [mailto:[EMAIL PROTECTED]
> Sent: Monday, March 07, 2005 12:06 PM
> To: John A Meinel
> Cc: Tom Lane; Magnus Hagander; Ken Egervari;
> pgsql-performance@postgresql.org; [EMAIL PROTECTED]
> Subject: Re: [pgsql-hackers-win32] [PERF
John A Meinel <[EMAIL PROTECTED]> writes:
> Then we would only be wrong for 256 gettimeofday calls. I agree it isn't
> great, though. And probably better to just abstract (possibly just with
> #ifdef) the calls for accurate timing, from the calls that actually need
> the real time.
What would be
Tom Lane wrote:
John A Meinel <[EMAIL PROTECTED]> writes:
Can we just replace gettimeofday() with a version that's basically:
No, because it's also used for actual time-of-day calls. It'd be
necessary to hack executor/instrument.c in particular.
Or we modify the win32 gettimeofday call to som
John A Meinel <[EMAIL PROTECTED]> writes:
>>> Can we just replace gettimeofday() with a version that's basically:
>>
>> No, because it's also used for actual time-of-day calls. It'd be
>> necessary to hack executor/instrument.c in particular.
> Or we modify the win32 gettimeofday call to somethi
Tom Lane wrote:
"Magnus Hagander" <[EMAIL PROTECTED]> writes:
There is. I believe QueryPerformanceCounter has sub-microsecond
resolution.
Can we just replace gettimeofday() with a version that's basically:
No, because it's also used for actual time-of-day calls. It'd be
necessary to hack executor/instrument.c in particular.
"Magnus Hagander" <[EMAIL PROTECTED]> writes:
> There is. I believe QueryPerformanceCounter has sub-microsecond
> resolution.
> Can we just replace gettimeofday() with a version that's basically:
No, because it's also used for actual time-of-day calls. It'd be
necessary to hack executor/instrument.c in particular.
> > Do we need actual high precision time, or do we just need to be able
> > to get high precision differences? Getting the differences is fairly
> > easy, but if you need to "sync up" any drift then it becomes a bit more
> > difficult.
>
> You're right, we only care about differences n
"Magnus Hagander" <[EMAIL PROTECTED]> writes:
> Do we need actual high precision time, or do we just need to be able to
> get high precision differences? Getting the differences is fairly easy,
> but if you need to "sync up" any drift then it becomes a bit more
> difficult.
You're right, we only care about differences
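The distinction Magnus draws maps directly onto modern timer APIs: an interval counter with an arbitrary epoch versus the actual wall clock. As an aside, CPython's time.perf_counter() is, as far as I know, backed by QueryPerformanceCounter on Windows, so the same idea can be sketched in Python:

```python
import time

# time.perf_counter(): monotonic, arbitrary epoch, so only differences
# are meaningful (reportedly QueryPerformanceCounter-backed on Windows)
t0 = time.perf_counter()
time.sleep(0.1)
t1 = time.perf_counter()

interval = t1 - t0     # high-resolution difference; never negative
wall = time.time()     # absolute time of day; may be stepped by NTP
```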
> >> What platform is this on? It seems very strange/fishy that all the
> >> actual-time values are exact integral milliseconds.
>
> > My machine is WinXP Professional, Athlon XP 2100, but I get similar
> > results on my Intel P4 3.0GHz as well (which is also running WinXP).
> > Why do y
John Arbash Meinel wrote:
Ken wrote:
Richard,
What do you mean by summary table? Basically a cache of the query
into a table with replicated column names of all the joins? I'd
probably have to wipe out the table every minute and re-insert the
data for each carrier in the system. I'm not sure ho
Ken,
> I did everything you said and my query does perform a bit better. I've
> been getting speeds from 203 to 219 to 234 milliseconds now. I tried
> increasing the work mem and the effective cache size from the values you
> provided, but I didn't see any more improvement. I've tried to looki
Ken Egervari wrote:
Josh,
...
I thought about this, but it's very important since shipment and
shipment_status are both updated in real time 24/7/365. I think I
might be able to cache it within the application for 60 seconds at
most, but it would make little difference since people tend to refresh
Richard,
What do you mean by summary table? Basically a cache of the query into a
table with replicated column names of all the joins? I'd probably have to
wipe out the table every minute and re-insert the data for each carrier in
the system. I'm not sure how expensive this operation would b
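A minimal sketch of the summary-table idea, using SQLite and an invented two-table schema (the real shipment schema is much wider): the expensive join is materialized into a plain table, and a periodic job wipes and refills it.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE shipment (id INTEGER PRIMARY KEY, carrier_code_id INT);
    CREATE TABLE carrier_code (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE shipment_summary (shipment_id INT, carrier_name TEXT);
    INSERT INTO carrier_code VALUES (1, 'ACME');
    INSERT INTO shipment VALUES (10, 1), (11, 1);
""")

def refresh_summary(con):
    # wipe and re-fill the cache; a job could run this every minute
    with con:
        con.execute("DELETE FROM shipment_summary")
        con.execute("""
            INSERT INTO shipment_summary
            SELECT s.id, cc.name
            FROM shipment s
            JOIN carrier_code cc ON s.carrier_code_id = cc.id
        """)

refresh_summary(con)
rows = con.execute(
    "SELECT * FROM shipment_summary ORDER BY shipment_id").fetchall()
```

Readers then query the flat shipment_summary table, trading up-to-a-minute staleness for a cheap single-table scan.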
Ken Egervari wrote:
Let's say we have 200 users signed into the application at the same
time. The application refreshes their shipment information automatically
to make sure it's up to date on the user's screen. The application will
execute the query we are trying to tune every 60 seconds for m
Josh,
Thanks so much for your comments. They are incredibly insightful and you
clearly know your stuff. It's so great that I'm able to learn so much from
you. I really appreciate it.
Do you need the interior sort? It's taking ~93ms to get 7k rows from
shipment_status, and then another 30ms to
Josh,
I did everything you said and my query does perform a bit better. I've been
getting speeds from 203 to 219 to 234 milliseconds now. I tried increasing
the work mem and the effective cache size from the values you provided, but
I didn't see any more improvement. I've tried to looking in
Ken,
Well, I'm a bit stumped on troubleshooting the actual query since Windows'
poor time resolution makes it impossible to trust the actual execution times.
Obviously this is something we need to look into for the Win32 port for
8.1 ..
> shared_buffers = 1000
This may be slowing up that mer
2) Force PG to drop the merge join via SET ENABLE_MERGEJOIN = FALSE;
Actually, it was 312 milliseconds, so it got worse.
Ken Egervari wrote:
Hash IN Join (cost=676.15..1943.11 rows=14 width=91) (actual
time=250.000..328.000 rows=39 loops=1)
Hash Cond: ("outer".carrier_code_id = "inner".id)
-> Merge Join (cost=661.65..1926.51 rows=392 width=91) (actual
time=250.000..328.000 rows=310 loops=1)
Merge Cond:
Josh,
1) To determine your query order ala Dan Tow and drive off of person,
please
SET JOIN_COLLAPSE_LIMIT = 1 and then run Mark Kirkwood's version of the
query. (Not that I believe in Dan Tow ... see previous message ... but it
would be interesting to see the results.)
Unfortunately, the query st
John,
Why are you now left joining driver and carrier code, but inner joining
shipment_status? I assume this is the *real* query that you are executing.
Well, the old and new versions are real queries. I changed the query a bit
because I noticed for some users, the listing was pulling out many di
Ken,
> -> Merge Join (cost=602.54..1882.73 rows=870 width=91) (actual
> time=234.000..312.000 rows=310 loops=1)
> Merge Cond: ("outer".current_status_id = "inner".id)
Hmmm ... this merge join appears to be the majority of your execution
time at least within the resol
Ken,
> I've tried to use Dan Tow's tuning method and created all the right indexes
> from his diagramming method, but the query still performs quite slow both
> inside the application and just inside pgadmin III. Can anyone be kind
> enough to help me tune it so that it performs better in postgres
Ken Egervari wrote:
I took John's advice and tried to work with sub-selects. I tried
this variation, which actually seems like it would make a difference
conceptually since it drives on the person table quickly. But to my
surprise, the query runs at about 375 milliseconds. I think it's
becau
Ken Egervari wrote:
I've tried to use Dan Tow's tuning method and created all the right indexes
from his diagramming method, but the query still performs quite slow both inside
the application and just inside pgadmin III. Can anyone be kind enough to help
me tune it so that it performs better in
I took John's advice and tried to work with sub-selects. I tried this
variation, which actually seems like it would make a difference conceptually
since it drives on the person table quickly. But to my surprise, the query
runs at about 375 milliseconds. I think it's because it's going over that
If so, I'd have to take the EXPLAIN ANALYZE results with a big grain of
salt, because what it's trying to do is add up a lot of
mostly-sub-millisecond intervals. What would essentially happen is that
whichever plan node had control at a particular millisecond boundary
would get charged for the who
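Tom's boundary effect can be reproduced with a toy model (all figures invented): two plan nodes alternate control, each call really costs 0.3 ms, and the timer only advances in whole milliseconds.

```python
RES_US = 1000      # 1 ms timer resolution, in microseconds
COST_US = 300      # each call really takes 0.3 ms

def read_timer(t_us):
    return (t_us // RES_US) * RES_US   # floor to timer resolution

t = 0
charged = [0, 0]           # microseconds attributed to node 0 / node 1
for i in range(1000):      # nodes alternate control, 500 calls each
    start = read_timer(t)
    t += COST_US
    charged[i % 2] += read_timer(t) - start

# truth: each node used 500 * 300 = 150000 us, but the quantized
# accounting charges a whole millisecond to whichever node happens
# to span a boundary, so the split comes out badly skewed
```

In this run the accounting charges node 0 only 100 ms and node 1 a full 200 ms, even though both really used 150 ms, which is exactly why sub-millisecond plan nodes can't be trusted under a 1 ms clock.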
Tom Lane wrote:
"Ken Egervari" <[EMAIL PROTECTED]> writes:
Okay, here is the explain analyze I managed to get from work.
What platform is this on? It seems very strange/fishy that all the
actual-time values are exact integral milliseconds.
I always get round milliseconds on running. In fa
> -Original Message-
> From: Tom Lane [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, March 02, 2005 4:30 PM
> To: Ken Egervari
> Cc: pgsql-performance@postgresql.org;
> [EMAIL PROTECTED]
> Subject: Re: [PERFORM] Help with tuning this query (with
> e
"Ken Egervari" <[EMAIL PROTECTED]> writes:
>> What platform is this on? It seems very strange/fishy that all the
>> actual-time values are exact integral milliseconds.
> My machine is WinXP Professional, Athlon XP 2100, but I get similar results
> on my Intel P4 3.0GHz as well (which is also runn
"Ken Egervari" <[EMAIL PROTECTED]> writes:
> Okay, here is the explain analyze I managed to get from work.
What platform is this on? It seems very strange/fishy that all the
actual-time values are exact integral milliseconds.
regards, tom lane
it might help the planner estimate better the number of cs rows
affected. whether this improves performance depends on whether
the best plans are sensitive to this.
I managed to try this and see if it did anything. Unfortunately, it made no
difference. It's still 250 milliseconds. It was a good
left join is for eager loading so that I don't have to run a separate query
to fetch the children for each shipment. This really does improve
performance because otherwise you'll have to make N+1 queries to the
database, and that's just too much overhead.
are you saying that you are actually doing
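The N+1 pattern being discussed can be sketched abstractly (hypothetical table names, fake rows, just counting round-trips):

```python
queries = []

def run(sql):
    # stand-in for a database round-trip; records each query issued
    queries.append(sql)
    return [{"id": i} for i in range(3)]   # three fake parent rows

# N+1: one query for shipments, then one per shipment for its children
shipments = run("SELECT * FROM shipment")
for s in shipments:
    run(f"SELECT * FROM shipment_status WHERE shipment_id = {s['id']}")
n_plus_one = len(queries)   # 1 + 3 round-trips

queries.clear()
# eager loading: a single join fetches parents and children together
run("SELECT * FROM shipment s "
    "JOIN shipment_status ss ON ss.shipment_id = s.id")
eager = len(queries)        # 1 round-trip
```

With N parent rows the naive loop costs N+1 round-trips while the join costs one, which is the overhead the left join is avoiding.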
On Wed, 2005-03-02 at 13:28 -0500, Ken Egervari wrote:
> >> select s.*
> >> from shipment s
> >> inner join carrier_code cc on s.carrier_code_id = cc.id
> >> inner join carrier c on cc.carrier_id = c.id
> >> inner join carrier_to_person ctp on ctp.carrier_id = c.id
> >> inner join p
select s.*
from shipment s
inner join carrier_code cc on s.carrier_code_id = cc.id
inner join carrier c on cc.carrier_id = c.id
inner join carrier_to_person ctp on ctp.carrier_id = c.id
inner join person p on p.id = ctp.person_id
inner join shipment_status cs on s.current_status
On Wed, 2005-03-02 at 01:51 -0500, Ken Egervari wrote:
>
> select s.*
> from shipment s
> inner join carrier_code cc on s.carrier_code_id = cc.id
> inner join carrier c on cc.carrier_id = c.id
> inner join carrier_to_person ctp on ctp.carrier_id = c.id
> inner join person p on p.i
Ken Egervari wrote:
First, what version of postgres, and have you run VACUUM ANALYZE
recently?
Also, please attach the result of running EXPLAIN ANALYZE.
(eg, explain analyze select s.* from shipment ...)
I'm using postgres 8.0. I wish I could paste explain analyze, but I
won't be at work for a f
First, what version of postgres, and have you run VACUUM ANALYZE recently?
Also, please attach the result of running EXPLAIN ANALYZE.
(eg, explain analyze select s.* from shipment ...)
I'm using postgres 8.0. I wish I could paste explain analyze, but I won't
be at work for a few days. I was hopi
Ken Egervari wrote:
I've tried to use Dan Tow's tuning method and created all the right
indexes from his diagramming method, but the query still performs quite
slow both inside the application and just inside pgadmin III. Can
anyone be kind enough to help me tune it so that it performs better in
po
Bricklen Anderson wrote:
> Richard Huxton wrote:
> > Ken Egervari wrote:
> >
> >> I've tried to use Dan Tow's tuning method
> > Who? What?
>
> http://www.singingsql.com/
That URL is invalid for me.
--
Bruce Momjian| http://candle.pha.pa.us
pgman@candle.pha.pa.us
Richard Huxton wrote:
Ken Egervari wrote:
I've tried to use Dan Tow's tuning method
Who? What?
http://www.singingsql.com/
Dan has written some remarkable papers on sql tuning. Some of it is pretty complex, but his book
"SQL Tuning" is an excellent resource.
Ken Egervari wrote:
I've tried to use Dan Tow's tuning method
Who? What?
> and created all the right
indexes from his diagramming method, but the query still performs
quite slow both inside the application and just inside pgadmin III.
Can anyone be kind enough to help me tune it so that it performs
I've tried to use Dan Tow's tuning method and
created all the right indexes from his diagramming method, but the query still
performs quite slow both inside the application and just inside pgadmin
III. Can anyone be kind enough to help me tune it so that it performs
better in postgres? I do