"Brad Might" <[EMAIL PROTECTED]> writes:
> This seems to me to be an expensive plan and I'm wondering if there's a
> way to improve it or a better way to do what I'm trying to do here (get
> a count of distinct values for each record_id and map that value to the
> entity type).
> entity_type_id_mapping is 56 rows.
> volume_node_entity_data_values is approx 500,000,000 rows.
> vq_record_id has approx 11,000,000 different values.
> vq_entity_type is a value in entity_type_id_mapping.entity_type.

Hmm, what Postgres version is that?  And have you ANALYZEd
entity_type_id_mapping lately?  I'd expect the planner to realize that
there cannot be more than 56 output groups, which ought to lead it to
prefer a hashed aggregate over the sort+group method.  That's what I
get in a test case with a similar query structure, anyway.
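
For example (just a sketch, borrowing the table and column names from
your report and from the query below -- adjust to match your real query),
you could refresh the stats and then check which plan gets chosen:

        analyze volume_8.entity_type_id_mapping;

        explain
        select emap.entity_type, count(vq_entity_value)
        from data_schema.volume_queue_entities qe,
             volume_8.entity_type_id_mapping emap
        where qe.vq_entity_type = emap.entity_type
        group by emap.entity_type;

With up-to-date stats on the small table, the plan ought to show a
HashAggregate node rather than a Sort feeding a GroupAggregate.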

If you're stuck on an old PG version, it might help to do the
aggregation first and then join, ie

        select ... from
        (select count(vq_entity_value) as vcount, vq_entity_type
         from data_schema.volume_queue_entities group by vq_entity_type) qe,
        volume_8.entity_type_id_mapping emap
        where qe.vq_entity_type = emap.entity_type;
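
That confines the expensive aggregation to a single pass over the big
table, and the join then only has to deal with the handful of
already-grouped rows.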

                        regards, tom lane
