Hello!
I've been running some heavy joins on my postgres database, and I had it
configured with '-S65536' so that each backend gets 64MB of RAM for sort
memory, since the FAQ says:
"You can also use the backend -S option to increase the maximum amount of
memory used by each backend process for temporary sorts."
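For reference, I start the postmaster roughly like this (the data directory
path here is just illustrative):

  postmaster -o '-S65536' -D /usr/local/pgsql/data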
When I do a four-way join, one backend appears to use far more than 64MB:
after about 15 seconds of running, the postgres backend was at more than
400MB and going into swap, and I have only 128MB configured as 'shared
cache'.
Could it be that the setting actually means 64MB *per sort operation*, and
not per backend? (The query works if I lower it to 16MB, but the backend
still uses much more than 16MB.)
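(For what it's worth, I was watching the backend's memory use with
something like

  ps aux | grep postgres

and with top, looking at the VSZ/RSS columns.)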
Unfortunately, I don't know how many internal sorts my query performs, so
I'm not sure whether this is right. The EXPLAIN output only shows one
explicit Sort, but I guess sort memory is also used somewhere when it does
the joins?
The EXPLAIN for my query gives:
Aggregate (cost=817167.33..819742.33 rows=34333 width=44)
  ->  Group (cost=817167.33..818025.66 rows=343334 width=44)
        ->  Sort (cost=817167.33..817167.33 rows=343334 width=44)
              ->  Hash Join (cost=54869.83..774184.06 rows=343334 width=44)
                    ->  Hash Join (cost=54646.83..743752.94 rows=1719035 width=36)
                          ->  Seq Scan on order_items (cost=0.00..345419.03 rows=21748603 width=12)
                          ->  Hash (cost=53751.50..53751.50 rows=358131 width=24)
                                ->  Hash Join (cost=1.25..53751.50 rows=358131 width=24)
                                      ->  Seq Scan on orders (cost=0.00..27500.00 rows=1500000 width=8)
                                      ->  Hash (cost=1.20..1.20 rows=20 width=16)
                                            ->  Seq Scan on salesstaff (cost=0.00..1.20 rows=20 width=16)
                    ->  Hash (cost=198.00..198.00 rows=10000 width=8)
                          ->  Seq Scan on items (cost=0.00..198.00 rows=10000 width=8)
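If the per-sort-operation guess is right, and if the Hash nodes also use
the -S limit to size their hash tables (which I'm only assuming here), then
this plan has one Sort plus three Hashes, i.e. roughly 4 x 64MB = 256MB for
a single backend, which is much closer to what I observed than a flat 64MB.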
My platform is Linux 2.2.14, Pentium III 500MHz, 384MB RAM.
PostgreSQL 7.0.0 on i686-pc-linux-gnu, compiled by gcc 2.95.2
//Magnus