"Kneringer, Armin" <[email protected]> writes:
> I think I found a potential performance gain if the query planner were
> optimized. All tests have been performed with 8.4.1 (and earlier versions) on
> CentOS 5.3 (x64).
> The following query runs on my database (~250 GB) for ca. 1600 seconds,
> and the sort results in a disk merge writing ca. 200 GB of data to the
> local disk (ca. 180,000 temp files).
What have you got work_mem set to? It looks like you must be using an
unreasonably large value, else the planner wouldn't have tried to use a
hash join here:
> -> Hash (cost=11917516.57..11917516.57
> rows=55006045159 width=16)
> -> Nested Loop (cost=0.00..11917516.57
> rows=55006045159 width=16)
>                          ->  Seq Scan on atdateval t5
>                              (cost=0.00..294152.40 rows=1859934 width=12)
> Filter: (attrid =
> 281479288456447::bigint)
> -> Index Scan using ind_ataggval on
> ataggval q1_1 (cost=0.00..6.20 rows=4 width=12)
> Index Cond: ((q1_1.attrid =
> 281479288456451::bigint) AND (q1_1.aggval = t5.aggrid))
> Filter: (q1_1.aggrid = 0)
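For reference, you can check the current setting and try a more conservative
per-session value before re-running the plan (the 64MB figure below is only
illustrative, not a tuned recommendation for your workload):

```sql
-- Show the current per-operation sort/hash memory budget
SHOW work_mem;

-- Lower it for this session only, then re-examine the plan
-- ('64MB' is an illustrative value, not a recommendation)
SET work_mem = '64MB';

EXPLAIN
SELECT ...;  -- your problem query here
```

With a smaller work_mem the planner should be much less inclined to hash a
55-billion-row input, and you can compare the resulting plan without touching
postgresql.conf.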
Also, please try something newer than 8.4.1 --- this might be some
already-fixed bug.
regards, tom lane
--
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance