Dennis Bjorklund wrote:

On Thu, 23 Jun 2005, Bricklen Anderson wrote:
> iii. UNIQUE constraint on table "t1". This didn't seem to perform too
> badly with fewer rows (preliminary tests), but as you'd expect, on error
> the whole transaction would roll back. Is it possible to skip a row if
> it causes an error, as opposed to rolling back the whole transaction?
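
One way to skip the conflicting rows rather than abort the whole transaction
is to avoid tripping the UNIQUE constraint in the first place by filtering
out keys that already exist. A minimal sketch, assuming t1 and t2 share an
indexed key column named id (the column names are illustrative, not from the
original post):

    -- Insert only source rows whose key is not already in t1; rows that
    -- would violate the UNIQUE constraint are silently skipped.
    INSERT INTO t1 (id, val)
    SELECT s.id, s.val
    FROM t2 s
    WHERE NOT EXISTS (SELECT 1 FROM t1 WHERE t1.id = s.id);

Note that this only avoids conflicts with rows already committed in t1; it
does not protect against duplicates inserted concurrently by other sessions.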
Jacques Caron wrote:

Hi,

At 21:38 23/06/2005, Bricklen Anderson wrote:
> Situation:
> I'm trying to optimize an ETL process with many upserts (~100k aggregated
> rows) (no duplicates allowed). The source table (t2) holds around 14
> million rows, and I'm grabbing them 100,000 rows at a time from t2,
> resulting in about 100,000 distinct rows in the destination table (t1).

I have a similar situation, and the solution I use (though I haven't
really tested many different situations) is a trigger ON INSERT which does:

    UPDATE ... SET whatever_value = NEW.whatever_value, ... WHERE
        whatever_key = NEW.whatever_key AND ...;
    IF FOUND THEN
        RETURN NULL;
    END IF;
    RETURN NEW;
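
Spelled out, that trigger approach might look like the following in PL/pgSQL.
This is a sketch, assuming a destination table t1(id integer PRIMARY KEY,
val integer); the table and column names are invented for illustration:

    CREATE OR REPLACE FUNCTION t1_upsert() RETURNS trigger AS $$
    BEGIN
        -- Try the UPDATE first; FOUND reports whether any row matched.
        UPDATE t1 SET val = NEW.val WHERE id = NEW.id;
        IF FOUND THEN
            RETURN NULL;  -- row existed and was updated: suppress the INSERT
        END IF;
        RETURN NEW;       -- no match: let the INSERT proceed
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER t1_upsert_trg
        BEFORE INSERT ON t1
        FOR EACH ROW EXECUTE PROCEDURE t1_upsert();

Because the trigger fires per row, every incoming row costs an index probe
for the UPDATE plus, for new keys, the INSERT itself; it is convenient, but
a set-based approach may be faster for 100k-row batches.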
Meetesh Karia wrote:

I don't know what this will change wrt how often you need to run VACUUM
(I'm a SQL Server guy), but instead of an update and insert, try a
delete and insert. You'll only have to find the duplicate rows once and
your insert doesn't need a where clause.

Meetesh
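
In SQL terms, the delete-and-insert suggestion could look like this; a
sketch assuming the current 100k-row batch sits in a staging table t2_batch
with the same layout as t1 (the staging-table name is hypothetical):

    BEGIN;
    -- Drop destination rows that the incoming batch would duplicate.
    DELETE FROM t1
    WHERE id IN (SELECT id FROM t2_batch);
    -- Now the whole batch can be inserted without a WHERE clause.
    INSERT INTO t1 (id, val)
    SELECT id, val FROM t2_batch;
    COMMIT;

One caveat on the VACUUM question: in PostgreSQL an UPDATE also writes a new
row version, so delete-plus-insert produces roughly the same dead-tuple (and
hence VACUUM) load as update-plus-insert.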
Bricklen Anderson wrote:

Situation:
I'm trying to optimize an ETL process with many upserts (~100k aggregated rows)
(no duplicates allowed). The source table (t2) holds around 14 million rows,
and I'm grabbing them 100,000 rows at a time from t2, resulting in about
100,000 distinct rows in the destination table (t1).
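
For the batching itself, one approach that stays fast across all 14 million
rows is keyset pagination on an indexed key instead of OFFSET, whose cost
grows with the offset. A sketch, assuming t2 has an indexed id column and
:last_seen_id is a bind parameter supplied by the driving script (both names
are illustrative):

    -- Fetch the next 100k-row slice after the last key of the previous one.
    SELECT id, val
    FROM t2
    WHERE id > :last_seen_id
    ORDER BY id
    LIMIT 100000;

Each pass records the maximum id it saw and feeds that in as :last_seen_id
for the next pass.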