Hi, Carlo,

Carlo Stonebanks wrote:
>> Did you think about putting the whole data into PostgreSQL using COPY in
>> a nearly unprocessed manner, index it properly, and then use SQL and
>> stored functions to transform the data inside the database to the
>> desired result?
> 
> This is actually what we are doing. The slowness is in the row-by-row 
> transformation. Every row requires that all the inserts and updates of the 
> previous row be committed - that's why we have trouble figuring out how to 
> do this with SQL set logic.

Perhaps "group by", "order by", "distinct on" and hand-written functions
and aggregates (like first() or best()) can help.
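For instance, "distinct on" can pick one "first" row per entity in a
single set-based pass. A minimal sketch - the table raw_rows and its
columns (entity_id, updated_at, payload) are hypothetical names, not
from your schema:

```sql
-- Keep only the most recent row per entity, no row-by-row loop needed.
-- (raw_rows, entity_id, updated_at, payload are illustrative names.)
SELECT DISTINCT ON (entity_id)
       entity_id, payload
FROM   raw_rows
ORDER  BY entity_id, updated_at DESC;
```

The ORDER BY decides which row survives within each entity_id group;
"distinct on" then keeps exactly the first one.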

You could combine all relevant columns into a user-defined composite
type, then group by entity, and have a custom aggregate generate the
accumulated tuple for each entity.
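A sketch of that approach, assuming PostgreSQL's CREATE AGGREGATE -
all type, table and column names below are illustrative, and the merge
rule (coalesce the first non-null name, keep the greatest score) is
just a placeholder for whatever your transformation actually does:

```sql
-- Composite type holding the columns to be accumulated.
CREATE TYPE entity_state AS (
    name   text,
    score  integer
);

-- State-transition function: fold the next row into the accumulator.
CREATE FUNCTION merge_state(acc entity_state, nxt entity_state)
RETURNS entity_state AS $$
    SELECT COALESCE(acc.name, nxt.name),   -- first non-null name wins
           GREATEST(acc.score, nxt.score); -- keep the highest score
$$ LANGUAGE sql IMMUTABLE;

-- Self-defined aggregate built on that transition function.
CREATE AGGREGATE best(entity_state) (
    SFUNC = merge_state,
    STYPE = entity_state
);

-- One accumulated tuple per entity, entirely in set logic:
SELECT entity_id,
       best(ROW(name, score)::entity_state)
FROM   raw_rows
GROUP  BY entity_id;
```

Because the whole reduction happens inside one statement, there is no
per-row commit; the planner can process each group in a single pass.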

Markus
-- 
Markus Schaber | Logical Tracking&Tracing International AG
Dipl. Inf.     | Software Development GIS

Fight against software patents in Europe! www.ffii.org
www.nosoftwarepatents.org

