At 21:38 23/06/2005, Bricklen Anderson wrote:
I'm trying to optimize an ETL process with many upserts of aggregated rows
(~100k per batch, no duplicates allowed). The source table (t2) holds around
14 million rows, and I'm grabbing them 100,000 rows at a time from t2,
resulting in about 100,000 distinct rows in the destination table (t1).
What I've tried:
i. a FOR ... EXECUTE loop over my result set (aggregated results, ~100k rows):
   try an UPDATE first, check the ROW_COUNT, and if it is 0, do an INSERT.
   run time: approx. 25 mins
ii. in a function (pseudo code; the table name is dynamic):
   'UPDATE '||t1||' SET x = t2.x
    FROM (select sum(x) as x, a, b, c from t2
          group by a, b, c) as t2
    WHERE '||t1||'.a = t2.a AND '||t1||'.b = t2.b AND '||t1||'.c = t2.c';
   'INSERT INTO '||t1||' (x, a, b, c) select x, a, b, c
    FROM (select sum(x) as x, a, b, c from t2 group by a, b, c) as t2
    WHERE NOT EXISTS
      (select true from '||t1||'
       where '||t1||'.a = t2.a and '||t1||'.b = t2.b and '||t1||'.c = t2.c)';
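For reference, approach (i) can be sketched as a PL/pgSQL function like the one below. This is a minimal sketch, not the poster's actual code: it assumes the table and column names from the examples above (t1, t2, x, a, b, c) and uses a static table name where the original used EXECUTE with a dynamic one.

```sql
-- Sketch of approach (i): UPDATE first, INSERT when no row matched.
-- Assumes destination t1(x, a, b, c), keyed on (a, b, c).
CREATE OR REPLACE FUNCTION upsert_batch() RETURNS void AS $$
DECLARE
    r RECORD;
BEGIN
    FOR r IN SELECT sum(x) AS x, a, b, c FROM t2 GROUP BY a, b, c LOOP
        UPDATE t1 SET x = r.x
         WHERE a = r.a AND b = r.b AND c = r.c;
        IF NOT FOUND THEN
            -- no existing row matched the key, so insert a new one
            INSERT INTO t1 (x, a, b, c) VALUES (r.x, r.a, r.b, r.c);
        END IF;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

The per-row round trip through UPDATE-then-INSERT is what makes this variant slow on ~100k rows; approach (ii) amortizes that by doing one set-based UPDATE and one set-based INSERT.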
I have a similar situation, and the solution I use (though I haven't really
tested it in many different situations) is:
- have a BEFORE INSERT trigger which does:
    UPDATE ... SET whatever_value = NEW.whatever_value, ...
     WHERE the key columns match NEW;
    IF FOUND THEN return NULL (suppressing the insert), else return NEW
- use COPY to load the rows
For optimal performance, a different trigger function is created for each
table, which allows the query plan of the UPDATE to be cached.
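Using the same hypothetical t1(x, a, b, c) schema as above (the original post does not name the trigger or its columns), the trigger approach might look like this:

```sql
-- Sketch of the trigger approach: try an UPDATE; if a row was found,
-- return NULL to suppress the incoming INSERT, otherwise let it through.
-- Assumes t1(x, a, b, c), keyed on (a, b, c).
CREATE OR REPLACE FUNCTION t1_merge() RETURNS trigger AS $$
BEGIN
    UPDATE t1 SET x = NEW.x
     WHERE a = NEW.a AND b = NEW.b AND c = NEW.c;
    IF FOUND THEN
        RETURN NULL;  -- row already existed and was updated; skip the insert
    END IF;
    RETURN NEW;       -- no match; proceed with the insert
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t1_merge_trg BEFORE INSERT ON t1
    FOR EACH ROW EXECUTE PROCEDURE t1_merge();
```

With the trigger in place, the batch can be loaded with a plain COPY into t1, and each copied row is either merged into an existing row or inserted as new.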
Let us know how that works out for you and if you find a better solution!