On Sep 4, 2006, at 7:04 PM, Bruno Wolff III wrote:

> On Mon, Sep 04, 2006 at 17:19:37 +0200,
>   Hans-Juergen Schoenig <[EMAIL PROTECTED]> wrote:

>> i thought about creating an index on the expression but the problem
>> is that this is hardly feasible.
>> in 8.0 (what i have here) this would block the table and i would run

> That may be hard to deal with.



it is ...
but the problem is not primarily that i have some problem with a certain query; that can be solved somehow. i am thinking about GROUP BY and estimates in general here ...
just wondering if there is a chance to improve ...

>> out of disk space as well. this is a 600 gb beast :(

> I wouldn't expect this to be a problem. If you have 10^9 rows, I would
> expect the index to be less than 10% of your current size. If you are so
> close to your disk space limit that that is a problem, you have a problem
> in any case.



the index itself is not too large, but while building it the data gets written several times. that is no fun when dealing with so much data ...
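
(for reference, a sketch of the kind of expression index being discussed; table, column, and expression are invented for illustration. a plain CREATE INDEX in 8.0 blocks writes to the table for the whole build, and the sort data is written out several times along the way:)

    -- hypothetical expression index; names are made up
    -- in 8.0 CREATE INDEX locks out writes for the duration of the build
    CREATE INDEX idx_bigtable_day ON bigtable (date_trunc('day', ts));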



>> what about the planner approach?
>> this would solve some other issues as well. an index
>> might not be flexible enough :(.

> If you disable sorting you might be able to get it to switch plans. Lying
> about the amount of work memory so that the planner thinks the hash
> will fit in memory despite its misguessing the number of buckets might also
> help.
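
(in SQL terms that suggestion amounts to roughly the following; the table and column are invented, and note that 8.0 takes work_mem in kilobytes, so the maximum is just under 2 gb:)

    -- sketch of the suggestion above; bigtable/grp are made-up names
    SET enable_sort = off;     -- discourage sort-and-group plans
    SET work_mem = 2097151;    -- kilobytes in 8.0; roughly the 2 gb maximum
    EXPLAIN SELECT grp, count(*) FROM bigtable GROUP BY grp;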


setting work_mem to 2gb does not help here ;)
i set it to the max value on 8.0 - that was my first try too.
the problem is: there is no magic switch to mislead the planner a little without hacking the system stats (which is not what people should do, i would say ;) ).
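
(purely for illustration, "hacking the system stats" would mean something like the following; the relation and column names are invented, and the next ANALYZE overwrites the value again - really not something for a production system:)

    -- invented example: force the planner's n_distinct estimate, which
    -- drives the GROUP BY group-count estimate, by editing pg_statistic
    -- directly (fragile; ANALYZE undoes it)
    UPDATE pg_statistic
       SET stadistinct = 5000000
     WHERE starelid = 'bigtable'::regclass
       AND staattnum = (SELECT attnum FROM pg_attribute
                        WHERE attrelid = 'bigtable'::regclass
                          AND attname = 'grp');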

my question is: is adding hooks for selectivity a feasible way of dealing with things like that?

        hans




