On 02/13/2018 01:25 PM, armand pirvu wrote:

On Feb 13, 2018, at 1:22 PM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:

Not a trigger, but the idea is that we will do some batch processing on said table; let's name it testtbl:

1 - we get the records using SELECT FOR UPDATE with a LIMIT of 100, for example
2 - update each record using a cursor
3 - print the cursor content so that I have an idea of what was updated

I was thinking that if I can put a unique constraint on the table, I can generate a global table in the function, update the main table from the global table, and return a SELECT from the global table.
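As a minimal sketch of that plan in plain SQL (the column names `id` and `status` and the filter are assumptions for illustration; only `testtbl` comes from the example above), the lock-limit-update-print steps can often be collapsed into one statement:

```sql
-- Lock and update up to 100 rows in a single statement; RETURNING
-- reports exactly which rows were changed (the "print" step).
UPDATE testtbl
SET    status = 'processed'          -- assumed column
WHERE  id IN (
        SELECT id
        FROM   testtbl
        WHERE  status = 'pending'    -- assumed filter
        LIMIT  100
        FOR UPDATE SKIP LOCKED       -- lets concurrent batches skip rows already locked
       )
RETURNING *;
```

SKIP LOCKED is optional; without it a second batch blocks until the first commits.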

Not entirely sure I know what you are trying to accomplish, still:

1) Not sure you need to use a cursor; see here for a less complicated way:

Keeping in mind:

"Rather than executing a whole query at once, it is possible to set up a cursor that encapsulates the query, and then read the query result a few rows at a time. One reason for doing this is to avoid memory overrun when the result contains a large number of rows. (However, PL/pgSQL users do not normally need to worry about that, since FOR loops automatically use a cursor internally to avoid memory problems.) A more interesting usage is to return a reference to a cursor that a function has created, allowing the caller to read the rows. This provides an efficient way to return large row sets from functions."

So if you are keeping the rows to 100 a FOR loop would seem to suffice.
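A sketch of that FOR-loop approach in PL/pgSQL (again, `id`, `status`, the filter, and the function name are assumptions; `testtbl` is from the example):

```sql
CREATE OR REPLACE FUNCTION process_batch() RETURNS void AS $$
DECLARE
    r testtbl%ROWTYPE;
BEGIN
    -- The FOR loop uses a cursor internally, so memory stays bounded
    -- even without an explicit cursor declaration.
    FOR r IN
        SELECT * FROM testtbl
        WHERE  status = 'pending'     -- assumed filter
        LIMIT  100
        FOR UPDATE
    LOOP
        UPDATE testtbl
        SET    status = 'processed'   -- assumed column
        WHERE  id = r.id;
        RAISE NOTICE 'updated row %', r.id;   -- the "print" step
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

The explicit FOR UPDATE on the driving query locks the batch up front, so the per-row UPDATEs cannot be contended by another session mid-loop.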

2) By global table do you mean a temporary table? If so, I am not sure that is going to work, as I am pretty sure it will disappear after the function is run. I could see having a permanent table that you INSERT the updated rows into along with a timestamp. Then you could update the main table from that and prune old records using the timestamps.
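That permanent-table-with-timestamp idea might look like the following sketch (the log table name, `status` column, filter, and retention interval are all assumptions):

```sql
-- Permanent log table shaped like testtbl, plus a timestamp column.
CREATE TABLE testtbl_log (LIKE testtbl);
ALTER TABLE testtbl_log ADD COLUMN logged_at timestamptz NOT NULL DEFAULT now();

-- Capture the updated rows and their timestamp in one statement
-- using a data-modifying CTE.
WITH updated AS (
    UPDATE testtbl
    SET    status = 'processed'      -- assumed column
    WHERE  status = 'pending'        -- assumed filter
    RETURNING *
)
INSERT INTO testtbl_log
SELECT *, now() FROM updated;

-- Later, prune old entries using the timestamps.
DELETE FROM testtbl_log
WHERE  logged_at < now() - interval '7 days';   -- assumed retention
```

Because the log table is permanent, the record of what was updated survives the function call and the session, unlike a temporary table.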

I can see the developer's desire to use cursors to minimize some effort on his side.



Adrian Klaver
