Doesn't an Oracle BEFORE INSERT trigger carry extra performance overhead?

--- "Peter J. Holzer" <[EMAIL PROTECTED]> wrote:
> On 2004-06-16 09:14:25 -0600, Reidy, Ron wrote:
> > Better?  Maybe.  Consider.
> >
> > Your program connects via SQL*Net to the DB.  At insert time, you
> > need to fetch a sequence, so you issue 'select seq.nextval from
> > dual'.  Seems harmless enough until ...
> >
> > 1.  The select statement must be passed across the SQL*Net connection.
> > 2.  The statement must be parsed.  Maybe the worst you will have here
> >     is a soft parse, but nonetheless, you are using the CPU for this.
> > 3.  The statement is executed and the fetch occurs.
> > 4.  The result is passed back to your program.
> > 5.  You bind the value (hopefully) to your statement handle.
> > 6.  You then execute the insert statement.
> >
> > If you use a BEFORE INSERT trigger to fetch the sequence, you will
> > eliminate steps 1 through 5 from the flow.
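
For reference, a trigger of the kind described above might look like this
(table, column, and sequence names are made up for illustration):

```sql
-- Hypothetical table and sequence, for illustration only
CREATE SEQUENCE orders_seq;

CREATE OR REPLACE TRIGGER orders_bi
BEFORE INSERT ON orders
FOR EACH ROW
BEGIN
  -- Fetch the next sequence value server-side, so the client never
  -- issues a separate "select seq.nextval from dual" round trip.
  SELECT orders_seq.NEXTVAL INTO :NEW.id FROM dual;
END;
/
```

(Newer Oracle releases also accept the shorter `:NEW.id := orders_seq.NEXTVAL;`
inside the trigger body, if I recall correctly.)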
> 
> You can get the next value from a sequence in the insert statement
> (see my previous mail in this thread), so with or without the
> trigger, you have to parse (once) and execute (possibly many times)
> only the insert statement.
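
That is, the direct approach is a single statement (hypothetical names again):

```sql
-- No trigger needed: the sequence is referenced in the VALUES list,
-- so one parse and one execute cover both the id and the row.
INSERT INTO orders (id, descr)
VALUES (orders_seq.NEXTVAL, 'widget');
```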
> 
> The difference is:
>
> With a trigger, you hide the details of the sequence from the insert
> statement.  This makes the insert statement simpler and more
> portable.  OTOH, you don't know which value you just inserted (you
> can get at it with "select $sequence.currval from dual", but then
> you lose these advantages).
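
As an aside: Oracle's RETURNING clause can hand the generated value back in
the same round trip, avoiding the extra currval query, at the cost of
Oracle-specific syntax.  A sketch, with the same hypothetical names:

```sql
-- :new_id is a client-side bind variable; RETURNING reads the row as
-- actually inserted, so this works whether the id came from the VALUES
-- list or from a BEFORE INSERT trigger.
INSERT INTO orders (id, descr)
VALUES (orders_seq.NEXTVAL, 'widget')
RETURNING id INTO :new_id;
```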
> 
>       hp
> 
> -- 
>    _  | Peter J. Holzer      | Shooting the users in the foot is bad.
> |_|_) | Sysadmin WSR / LUGA  | Giving them a gun isn't.
> | |   | [EMAIL PROTECTED]    |    -- Gordon Schumacher,
> __/   | http://www.hjp.at/   |       mozilla bug #84128
> 
