> Maybe "group by", "order by", "distinct on" and hand-written functions
> and aggregates (like first() or best()) may help.

We use these - we have lexical analysis functions which assign each row in a 
set a rating reflecting the likelihood that the data is a match, and then we 
sort our results on that rating.
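
Roughly, the pattern looks like this - the table, column and function names 
below are placeholders for illustration, not our actual schema:

    -- Keep only the best-rated candidate row per entity;
    -- match_rating() stands in for one of our lexical analysis functions.
    SELECT DISTINCT ON (c.entity_id)
           c.entity_id,
           c.name,
           match_rating(c.name, 'search term') AS rating
    FROM   candidates c
    ORDER  BY c.entity_id, rating DESC;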

I thought this would be the cause of the slowdowns - and it is, but only a 
very small part of it. I have identified the problem code, and the problems 
are within some very simple joins. I have posted the code under a related 
topic header. I obviously have a few things to learn about optimising SQL joins.
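
For anyone curious, the diagnosis itself was straightforward: running EXPLAIN 
ANALYZE on the suspect join shows where the planner's estimated row counts 
diverge from the actual ones (again, the names below are placeholders, not 
the code I posted):

    -- Compare the planner's estimates against actual timings
    -- for a simple two-table join.
    EXPLAIN ANALYZE
    SELECT p.*, c.some_column
    FROM   parent_table p
    JOIN   child_table  c ON c.parent_id = p.id
    WHERE  c.some_column = 'some value';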

Carlo

>
> You could combine all relevant columns into a user-defined compound
> type, then group by entity, and have a self-defined aggregate generate
> the accumulated tuple for each entity.
>
> Markus
> -- 
> Markus Schaber | Logical Tracking&Tracing International AG
> Dipl. Inf.     | Software Development GIS
>
> Fight against software patents in Europe! www.ffii.org
> www.nosoftwarepatents.org
>
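
(For reference, a minimal sketch of the compound-type / custom-aggregate 
approach suggested above - every name here is made up for illustration:)

    -- Pack the relevant columns into a composite type.
    CREATE TYPE match_candidate AS (
        rating  integer,
        name    text,
        address text
    );

    -- Transition function: keep whichever candidate has the higher rating.
    -- STRICT lets PostgreSQL seed the state with the first non-null input.
    CREATE FUNCTION best_candidate_sfunc(match_candidate, match_candidate)
    RETURNS match_candidate AS $$
        SELECT CASE WHEN ($2).rating > ($1).rating THEN $2 ELSE $1 END;
    $$ LANGUAGE sql IMMUTABLE STRICT;

    CREATE AGGREGATE best_candidate(match_candidate) (
        SFUNC = best_candidate_sfunc,
        STYPE = match_candidate
    );

    -- One accumulated tuple per entity.
    SELECT entity_id,
           best_candidate(ROW(rating, name, address)::match_candidate)
    FROM   candidates
    GROUP  BY entity_id;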


