Based on our requirements, I use audit tables but do not use triggers. I just
use a command in my application or form when I want to capture audit data.
Something simple like this.
if datachanged = 'true' then
insert into audittable values ((cval('netuser')),.....)
endif
I do realize this is more work because you have to put it in everywhere you
want to audit data.
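For readers outside R:Base, here is a minimal sketch of the same application-level audit pattern in Python with sqlite3. The table and column names (customer, audittable, netuser) are hypothetical stand-ins, and getpass.getuser() plays the role of cval('netuser'); the key idea is the same: the application, not a trigger, decides when a change is worth auditing.

```python
# Application-level auditing: write an audit row only when the
# application detects that data actually changed. Hypothetical schema.
import sqlite3
import getpass
from datetime import datetime, timezone

try:
    USER = getpass.getuser()          # analogue of cval('netuser')
except OSError:
    USER = "unknown"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE audittable (
    netuser TEXT, changed TEXT, tblname TEXT, row_id INTEGER)""")
conn.execute("INSERT INTO customer VALUES (1, 'Acme')")

def update_customer(cust_id, new_name):
    """Update a row and, only if it actually changed, write an audit row."""
    cur = conn.execute(
        "UPDATE customer SET name = ? WHERE id = ? AND name <> ?",
        (new_name, cust_id, new_name))
    if cur.rowcount > 0:              # the 'datachanged' test
        conn.execute(
            "INSERT INTO audittable VALUES (?, ?, ?, ?)",
            (USER, datetime.now(timezone.utc).isoformat(),
             "customer", cust_id))

update_customer(1, "Acme Corp")
update_customer(1, "Acme Corp")       # no-op change: no second audit row
print(conn.execute("SELECT COUNT(*) FROM audittable").fetchone()[0])
```

As Dan notes, the cost of this approach is discipline: every code path that modifies audited data has to remember to make this call.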
Dan Goldberg
_____
From: [email protected] [mailto:[EMAIL PROTECTED] On Behalf Of John Engwer
Sent: Tuesday, September 09, 2008 7:51 AM
To: RBASE-L Mailing List
Subject: [RBASE-L] - Re: Triggers
Thanks for your input Larry. I have been using triggers for a long time and
I am aware that there is overhead involved. I never realized how much
overhead until I applied them to a very large table and did updates of large
quantities of data. When I drop the trigger, the update is lightning fast
(just as you pointed out). The problem with dropping the trigger is that
it is dropped for other users also. This means that other users will not be
writing audit files when the trigger is disabled. This is a serious problem
for me.
John
From: [email protected] [mailto:[EMAIL PROTECTED] On Behalf Of Lawrence
Lustig
Sent: Tuesday, September 09, 2008 9:45 AM
To: RBASE-L Mailing List
Subject: [RBASE-L] - Re: Triggers
<<
The problem that I have is that the trigger adds an enormous amount of
processing time (10 times longer), even when the trigger is being bypassed.
As a workaround, I drop the trigger when it is not needed, instead of
using the bypass.
Is this behavior normal, or do I have something wrong with my code or my
process?
>>
This behavior is normal and expected as far as I know.
The bulk of the overhead with triggers is in detecting the existence of the
trigger, finding the stored procedure code, and executing it. The fact that
you skip over most of the code in your trigger doesn't eliminate that basic
overhead. R:Base is extraordinarily fast at basic UPDATE operations, but
when you stick an interpreted code process in the middle of the update to each
row, you're invariably going to slow down the operation substantially.
Even if RBTI were to go all out optimizing trigger performance, I'm sure
that using a trigger would still slow down an update by several hundred
percent.
You're left with dropping and re-adding the trigger, which is not a great
solution (the purpose of a trigger, after all, is to become an unavoidable
part of the database structure), accepting the performance hit, or
refactoring your database in some way to separate out the "trigger" and
"non-trigger" rows into separate tables.
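The drop/re-create workaround the thread describes can be sketched generically in Python with sqlite3 (not R:Base; table, column, and trigger names here are hypothetical). It also makes John's objection concrete: while the trigger is dropped, every session's changes go unaudited, not just the bulk update's.

```python
# Drop an audit trigger around a bulk update to avoid per-row trigger
# overhead, then re-create it. Hypothetical schema and trigger.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("CREATE TABLE audittable (order_id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, 'open')",
                 [(i,) for i in range(1000)])

AUDIT_TRIGGER = """
CREATE TRIGGER trg_orders_audit AFTER UPDATE ON orders
BEGIN
    INSERT INTO audittable VALUES (NEW.id, NEW.status);
END"""
conn.execute(AUDIT_TRIGGER)

# Normal audited update: the trigger fires once for the changed row.
conn.execute("UPDATE orders SET status = 'held' WHERE id = 1")

# Bulk maintenance update: drop the trigger to skip the per-row
# overhead, then re-create it afterward. While it is dropped, any
# other session's changes would also go unaudited.
conn.execute("DROP TRIGGER trg_orders_audit")
conn.execute("UPDATE orders SET status = 'closed'")   # 1000 rows, not audited
conn.execute(AUDIT_TRIGGER)

print(conn.execute("SELECT COUNT(*) FROM audittable").fetchone()[0])
```

Only the single audited row appears in audittable; the 1000-row bulk update leaves no audit trail, which is exactly the trade-off being debated.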
--
Larry