On Wed, Sep 4, 2024 at 6:29 PM Muhammad Usman Khan <usma...@bitnine.net>
wrote:

> Hi,
>
> In your scenario, triggers add some overhead, since they require extra
> processing on every update. Given the size of your table and the high
> transaction volume, you should measure whether this significantly affects
> performance.
>
> On Wed, 4 Sept 2024 at 17:50, yudhi s <learnerdatabas...@gmail.com> wrote:
>
>> Hello,
>> In our Postgres database, all tables have audit columns:
>> created_by_user, created_timestamp, updated_by_user, and
>> updated_timestamp. These columns are supposed to be populated with the
>> time at which the insert/update happened in the database, not at the
>> application level. We plan to populate created_by_user and
>> created_timestamp by setting column defaults of current_user and
>> current_timestamp, but no equivalent mechanism exists for updates, so
>> the only option for the update columns seems to be a trigger.
>>
>> So I wanted to check with the experts here: considering this will be a
>> DML-heavy table (300M+ rows inserted daily), is it okay to have a
>> trigger populate all four audit columns, or should we use defaults for
>> created_by_user and created_timestamp and a trigger only for the two
>> update-related columns? Basically, does a default value do the same
>> thing as a trigger, or does it do it more efficiently?
>>
>> Regards
>> Yudhi
>>
>
Thank you so much. So do you mean that we should add defaults of
current_timestamp and current_user for the created_timestamp and
created_by_user columns, but for updated_by_user and updated_timestamp we
should ask the application to set the values explicitly whenever it
executes an UPDATE on the rows?
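For reference, a minimal sketch of the combined approach discussed above
(the table name and trigger/function names below are hypothetical, for
illustration only): column defaults cover inserts with no trigger cost,
and a BEFORE UPDATE row trigger fires only on updates.

```sql
-- Hypothetical example table; defaults handle INSERT without any trigger.
CREATE TABLE orders (
    id                bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    payload           text,
    created_by_user   text        NOT NULL DEFAULT current_user,
    created_timestamp timestamptz NOT NULL DEFAULT current_timestamp,
    updated_by_user   text,
    updated_timestamp timestamptz
);

-- A BEFORE UPDATE trigger covers the update-side audit columns.
CREATE OR REPLACE FUNCTION set_update_audit_cols()
RETURNS trigger AS $$
BEGIN
    NEW.updated_by_user   := current_user;
    NEW.updated_timestamp := current_timestamp;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_orders_update_audit
BEFORE UPDATE ON orders
FOR EACH ROW
EXECUTE FUNCTION set_update_audit_cols();
```

The trigger-free alternative raised in this thread is to have the
application set the columns in each statement, e.g.
UPDATE orders SET payload = ..., updated_by_user = current_user,
updated_timestamp = current_timestamp WHERE id = ...;
that avoids per-row trigger overhead but relies on every code path
remembering to do it.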
