Tom Lane wrote:
The explain shows no such thing. What is the *actual* runtime of
each plan per EXPLAIN ANALYZE, please?
Ok, it took 3.5 hours to complete. :-/
This is with the default cpu_tuple_cost = 0.01:
Nested Loop (cost=252.80..233010147.16 rows=1035480320 width=98) (actual
Thanks a lot John for the correct search terms. :-)
The suggestion in
http://archives.postgresql.org/pgsql-performance/2005-04/msg00029.php to
add a constraint that checks (finishtime >= starttime) does not make a
difference for me. Sequential scans are still used.
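For reference, the constraint suggested in that archived message would look something like this (a sketch only; the table and column names follow the `us` / `starttimetrunc` / `finishtimetrunc` examples elsewhere in this thread, not the exact DDL Richard used):

```sql
-- Hypothetical sketch: declare that a session never finishes before it
-- starts, in the hope the planner can use this to tighten its estimate
-- of how selective the range join is.
ALTER TABLE us
  ADD CONSTRAINT session_range_sane
  CHECK (finishtimetrunc >= starttimetrunc);
```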
The width solution explained in
Richard van den Berg [EMAIL PROTECTED] writes:
This is with the default cpu_tuple_cost = 0.01:
Nested Loop (cost=252.80..233010147.16 rows=1035480320 width=98)
(actual time=0.369..12672213.137 rows=6171334 loops=1)
Join Filter: (("outer".starttimetrunc <= "inner".ts) AND
John A Meinel wrote:
You might try doing:
ALTER TABLE us ALTER COLUMN starttimetrunc SET STATISTICS 200;
ALTER TABLE us ALTER COLUMN finishtimetrunc SET STATISTICS 200;
VACUUM ANALYZE us;
I've been looking into that. While increasing the statistics makes the
planner use the index for simple
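One way to confirm that the raised statistics target actually took effect after the VACUUM ANALYZE (a sketch; assumes the `us` table from the quoted message):

```sql
-- Check the per-column statistics target recorded in the catalog.
-- -1 means "use the default"; after the ALTER TABLE above these two
-- columns should show 200.
SELECT attname, attstattarget
FROM pg_attribute
WHERE attrelid = 'us'::regclass
  AND attname IN ('starttimetrunc', 'finishtimetrunc');
```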
Tom Lane wrote:
which is almost 80% of the entire runtime. Which is enormous.
What are those column datatypes exactly?
Table richard.sessions
Column | Type | Modifiers
Richard van den Berg [EMAIL PROTECTED] writes:
Tom Lane wrote:
Perhaps you are incurring a datatype conversion cost?
Not that I can tell.
No, apparently not. Hmm ... timestamp_cmp_internal is just a couple of
isnan() checks and one or two floating-point compares. Should be pretty dang
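With float timestamps (the default in this era of PostgreSQL), the comparison Tom describes boils down to something like the following. This is a simplified reconstruction for illustration, not the actual PostgreSQL source; the function name is made up:

```c
#include <math.h>

/* Simplified sketch of a float-timestamp comparison: a couple of
 * isnan() checks plus one or two floating-point compares, matching
 * the shape Tom describes. NaN sorts after all regular values. */
int
timestamp_cmp_sketch(double dt1, double dt2)
{
    if (isnan(dt1))
        return isnan(dt2) ? 0 : 1;   /* NaN > anything non-NaN */
    if (isnan(dt2))
        return -1;
    if (dt1 < dt2)
        return -1;
    return (dt1 > dt2) ? 1 : 0;
}
```

The point being that the comparison itself is a handful of cheap CPU operations, so it is unlikely to account for the runtime on its own.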
Richard van den Berg wrote:
We have a table with 1M rows that contain sessions with a start and
finish timestamps. When joining this table with a 10k table with rounded
timestamps, explain shows me sequential scans are used, and the join
takes about 6 hours (2s per seq scan on session table *
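The query shape being described is presumably a range join along these lines (a hypothetical reconstruction; the table and column names `us`, `du`, `ts`, `starttimetrunc`, `finishtimetrunc` are taken from other messages in this thread):

```sql
-- Hypothetical reconstruction of the problem query: for each rounded
-- timestamp in the 10k-row table, find the sessions whose interval
-- contains it.
SELECT d.ts, count(*)
FROM du d
JOIN us s
  ON d.ts BETWEEN s.starttimetrunc AND s.finishtimetrunc
GROUP BY d.ts;
```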
John A Meinel wrote:
I believe the problem is that postgres doesn't recognize how restrictive
a date-range is unless it uses constants.
And it does when using BETWEEN with int for example? Impressive. :-)
select blah from du WHERE time between '2004-10-10' and '2004-10-15';
Will properly use
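The distinction John is drawing can be sketched as follows (hypothetical queries; assumes an index on `du.time`):

```sql
-- With literal constants, the planner can consult the column histogram,
-- see the range is narrow, and typically choose an index scan:
SELECT * FROM du WHERE time BETWEEN '2004-10-10' AND '2004-10-15';

-- When the range endpoints come from another table's columns, they are
-- unknown at plan time, so the planner falls back to a default
-- selectivity estimate and may prefer a sequential scan:
SELECT *
FROM du d, us s
WHERE d.time BETWEEN s.starttimetrunc AND s.finishtimetrunc;
```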
Tom Lane wrote:
The explain shows no such thing. What is the *actual* runtime of
each plan per EXPLAIN ANALYZE, please?
I took a simplified version of the problem (the actual query that took 6
hours joins 3 tables). With cpu_tuple_cost = 0.1:
Nested Loop (cost=0.00..667700310.42
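For anyone reproducing this comparison, the cost parameter can be changed for a single session before re-running the plan (a sketch; the actual query is abbreviated in this thread, so a placeholder is shown):

```sql
-- Raise cpu_tuple_cost for this session only (default is 0.01;
-- 0.1 matches the value tried above), then compare plans.
SET cpu_tuple_cost = 0.1;
EXPLAIN ANALYZE
SELECT ...;  -- the simplified join query, abbreviated in the thread
RESET cpu_tuple_cost;
```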