=0.004..0.011 rows=7 loops=924536)
        Index Cond: (("outer".starttimetrunc <= du.ts) AND ("outer".finishtimetrunc >= du.ts))
Total runtime: 41635.468 ms
(5 rows)
--
Richard van den Berg, CISSP
---
Trust Factory B.V. | www.dna-portal.net
Bazarstraat 44a
}
finishtimetrunc | {2005-04-04 00:05:00.93,2005-04-04 11:53:00.98,2005-04-04 22:35:00.38,2005-04-05 11:13:00.02,2005-04-05 21:31:00.98,2005-04-06 10:45:01,2005-04-07 02:08:08.25,2005-04-07 16:20:00.93,2005-04-08 10:25:00.40,2005-04-08 17:15:00.94,2005-04-11 02:08:19}
increases its cost by 100M. Since my query already has a cost of about
400M-800M, this doesn't matter much.
For now, the only reliable way of forcing the use of the index is to set
cpu_tuple_cost = 1.
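As a sketch of the workaround above (table and column names are assumed here, based on the plan fragments in this thread), the setting can be raised just for the session running the problem query and reset afterwards:

```sql
-- Raise cpu_tuple_cost for this session only, so the planner
-- prefers the index scan for the range query.
SET cpu_tuple_cost = 1;

EXPLAIN ANALYZE
SELECT *
FROM duration d, ts_list du          -- assumed names
WHERE du.ts BETWEEN d.starttimetrunc AND d.finishtimetrunc;

RESET cpu_tuple_cost;                -- restore the default (0.01)
```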
as mine (actually, yours is much better). I can
put together a reproducible test case if you like.
of the tables? I make very sure (during the initial load and
while testing) that I vacuum analyze all tables after I fill them.
I'm running PostgreSQL 7.4.7.
Any help is appreciated.
Your explanation sounds very plausible. I don't mind changing
cpu_tuple_cost before running BETWEEN queries with timestamps; they are
easy enough to spot.
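One way to scope such a change safely (query names assumed, as above) is SET LOCAL, which only lasts until the end of the transaction, so other queries on the same connection keep the default planner costs:

```sql
BEGIN;
SET LOCAL cpu_tuple_cost = 1;        -- reverts automatically at COMMIT/ROLLBACK

SELECT count(*)
FROM duration d, ts_list du          -- assumed names
WHERE du.ts BETWEEN d.starttimetrunc AND d.finishtimetrunc;

COMMIT;                              -- cpu_tuple_cost is back to 0.01 here
```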
Thanks,
..0.011 rows=7 loops=924536)
        Index Cond: (("outer".starttimetrunc <= du.ts) AND ("outer".finishtimetrunc >= du.ts))
Total runtime: 44337.937 ms
The explain analyze for cpu_tuple_cost = 0.01 is running now. If it
takes hours, I'll send it to the list tomorrow.
rather not have
to drop/add them when I am loading large data sets.
If it would help, I can write this out in a reproducible scenario. I am
using PostgreSQL 7.4.5 at the moment.
Sincerely,
, it is done on a per-row basis. With 1M
rows, this just takes forever.
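An alternative to per-row FK checking during a bulk load is to drop the constraint first and re-add it afterwards, so the check runs as a single validating scan instead of 1M individual lookups. A sketch, with assumed table and constraint names:

```sql
-- Drop the FK before the bulk load (constraint name assumed):
ALTER TABLE fact DROP CONSTRAINT fact_dim_fkey;

COPY fact FROM '/tmp/fact.dat';

-- Re-adding the constraint validates all existing rows in one pass;
-- if any loaded row violates the FK, this statement fails and
-- nothing is silently left invalid.
ALTER TABLE fact ADD CONSTRAINT fact_dim_fkey
    FOREIGN KEY (dim_id) REFERENCES dim (id);
```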
Thanks for the help.
command locally (via a Unix socket) yields 2.5 ms every
time.
Am I correct in assuming that the timings are calculated locally by psql
on my client, thus including network latency?
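A quick way to separate the two measurements in a psql session: \timing is measured on the client and includes the round trip, while the "Total runtime" of EXPLAIN ANALYZE is measured on the server itself. For example:

```sql
\timing
SELECT 1;                  -- client-measured: includes network latency
EXPLAIN ANALYZE SELECT 1;  -- "Total runtime" line is server-side only
```

The difference between the two numbers approximates the network and client overhead.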
Christopher Kings-Lynne wrote:
No explain analyze is done on the server...
Yes, but the psql \timing is calculated on the client, right? That is
the value that PFC was referring to.
the network
(the \timing on those are about 50ms which includes the network latency).
Thanks,
after the copy/insert?
Sincerely,
pg_class.oid=tgrelid and relname='YOUR_TABLE_NAME';
I assume the re-enabling will cause an error when the copy/insert added
data that does not satisfy the FK. In that case I'll indeed end up with
invalid data, but at least I will know about it.
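The query fragment above is the tail end of the well-known 7.x trick of toggling reltriggers in pg_class; a full sketch of it (with the same placeholder table name) looks like this:

```sql
-- Disable all triggers on the table (including the FK triggers)
-- before the bulk load:
UPDATE pg_class SET reltriggers = 0
WHERE relname = 'YOUR_TABLE_NAME';

-- ... bulk COPY / INSERT here ...

-- Re-enable by restoring the real trigger count:
UPDATE pg_class SET reltriggers = (
    SELECT count(*) FROM pg_trigger
    WHERE pg_class.oid = tgrelid
)
WHERE relname = 'YOUR_TABLE_NAME';
```

Note that restoring reltriggers only re-activates the triggers for future rows; it does not re-check rows loaded while they were off, so rows violating the FK can remain undetected until touched again.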
Thanks,
own integrity
logic and avoid using FKs altogether.
Thanks,