Steve Holdoway wrote:
Yes, I know it's not possible, but can anyone suggest an alternative for this problem?

I've written a very simple trigger-driven replication system, which works in stages. First, the trigger generates an entry in a log table containing a fully formatted SQL command... INSERT INTO, DELETE FROM, etc. Second, this table is transferred to the receiving database and cleared down. This all works fine.
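
For illustration, a stripped-down sketch of that first stage might look something like this (the repl_log table, the customer table, and the function names are made up for the example, and the $$ dollar quoting needs PostgreSQL 8.0 or later; on older servers the function body has to be quoted the old way):

-- Hypothetical log table: one row per ready-to-run SQL command.
CREATE TABLE repl_log (
    id  serial PRIMARY KEY,
    cmd text NOT NULL
);

-- Hypothetical trigger function that writes a formatted command
-- for each change on an example "customer" table.
CREATE OR REPLACE FUNCTION log_customer_change() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO repl_log (cmd)
        VALUES ('INSERT INTO customer (id, name) VALUES ('
                || NEW.id || ', ' || quote_literal(NEW.name) || ')');
    ELSIF TG_OP = 'DELETE' THEN
        INSERT INTO repl_log (cmd)
        VALUES ('DELETE FROM customer WHERE id = ' || OLD.id);
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER ROW triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER customer_repl
    AFTER INSERT OR DELETE ON customer
    FOR EACH ROW EXECUTE PROCEDURE log_customer_change();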

On the receiving end, there is a cron job that processes all of the commands in this table. However, this is written as a plpgsql function, so it's 'all or nothing'... i.e. any error in the data means that all successful updates preceding the error are rolled back. This makes finding and debugging the data errors extremely difficult, but, more importantly, it stops the update process cold.
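
Roughly, the receiving-side function looks something like this (names made up again). The whole run is a single function call, and therefore a single transaction, which is why one bad command throws the lot back:

-- Hypothetical apply function: one call = one transaction, so a failing
-- EXECUTE rolls back every command already applied in this run.
CREATE OR REPLACE FUNCTION apply_repl_log() RETURNS integer AS $$
DECLARE
    r repl_log%ROWTYPE;
    n integer := 0;
BEGIN
    FOR r IN SELECT * FROM repl_log ORDER BY id LOOP
        EXECUTE r.cmd;
        DELETE FROM repl_log WHERE id = r.id;
        n := n + 1;
    END LOOP;
    RETURN n;
END;
$$ LANGUAGE plpgsql;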

I have tried calling a child function from the parent to perform the update in batches, but it still exhibits the same 'all or nothing' behaviour, since the child function runs inside the caller's transaction.

Can anyone suggest a way that I can get around this?

The easiest way is probably to let your cron job be a small client program, using one of the available interfaces, and call a plpgsql function from there once for each batch, with each call followed by a commit. This approach will give you full control, both with respect to transactions and with respect to logging/debugging.
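
For example, the server-side half of that approach could be a function that applies at most one batch per call and reports how many commands it handled (the names and the batch mechanics are placeholders that follow the sketches above, not a fixed design):

-- Hypothetical per-batch function: each client call is its own transaction,
-- so a failure only rolls back the current batch.
CREATE OR REPLACE FUNCTION apply_repl_batch(batch_size integer)
RETURNS integer AS $$
DECLARE
    r repl_log%ROWTYPE;
    n integer := 0;
BEGIN
    FOR r IN SELECT * FROM repl_log ORDER BY id LIMIT batch_size LOOP
        EXECUTE r.cmd;
        DELETE FROM repl_log WHERE id = r.id;
        n := n + 1;
    END LOOP;
    RETURN n;  -- 0 means the log is empty and the client loop can stop
END;
$$ LANGUAGE plpgsql;

The cron-driven client, in whichever interface you prefer (libpq, DBI, JDBC, psycopg, or even psql in a shell loop with autocommit), then repeatedly issues SELECT apply_repl_batch(100); and commits after each call, stopping when the function returns 0. When a call fails, only that batch is rolled back and the earlier batches stay committed; the client can log the error and retry the offending range with a batch size of 1 to isolate the bad command.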

Kind regards,

Thomas Hallgren
