Thanks for the info. I've got an index, so I guess it's as good as it
gets!
The data is actually copied over from the slony transaction log table,
and there's no way to know how many statements (=rows) there might be
for any given transaction, so assigning an arbitrary limit seems too
risky, and I
"David Parker" <[EMAIL PROTECTED]> writes:
> I know from the documentation that the FOR implicitly opens a cursor,
> but I'm wondering if there would be any performance advantages to
> explicitly declaring a cursor and moving through it with FETCH commands?
AFAICS it'd be exactly the same. Might as well use the simpler FOR
syntax, then.
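For reference, here are the two forms side by side as a minimal sketch. The table name ("tab"), the "id" column, and the function names are placeholders, not from the original thread; in PL/pgSQL the FOR loop is driven by the same cursor machinery that the explicit OPEN/FETCH version spells out by hand.

```plpgsql
-- Implicit cursor: FOR opens, fetches, and closes behind the scenes.
CREATE OR REPLACE FUNCTION scan_with_for() RETURNS void AS $$
DECLARE
    rec record;
BEGIN
    FOR rec IN SELECT * FROM tab ORDER BY id LOOP
        -- ... process rec here ...
        NULL;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

-- Explicit cursor: the same scan written out manually.
CREATE OR REPLACE FUNCTION scan_with_cursor() RETURNS void AS $$
DECLARE
    cur CURSOR FOR SELECT * FROM tab ORDER BY id;
    rec record;
BEGIN
    OPEN cur;
    LOOP
        FETCH cur INTO rec;
        EXIT WHEN NOT FOUND;
        -- ... process rec here ...
        NULL;
    END LOOP;
    CLOSE cur;
END;
$$ LANGUAGE plpgsql;
```

The explicit form only buys you something if you need cursor-specific features (MOVE, returning the cursor to a caller, and so on); for a straight sequential scan the two should plan and execute identically.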
I need to process a
large table a few "chunks" at a time, committing in between chunks so that
another process can pick up and start processing the data.
I am using a
pl/pgsql procedure with a "FOR rec in Select * from tab order by"
statement. The chunk size is passed in to the procedure.
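A sketch of that chunk-per-call pattern, for concreteness. Everything here is assumed, not taken from the thread: the table ("tab"), its "id" key, and a "processed" flag column used to mark finished rows. Since a plain PL/pgSQL function runs inside the caller's transaction, the commit between chunks happens in the calling session, not inside the function.

```plpgsql
-- Hypothetical sketch: process up to chunksize unprocessed rows per call.
-- The caller commits between calls, so other processes can pick up
-- already-processed rows while the rest is still being worked through.
CREATE OR REPLACE FUNCTION process_chunk(chunksize integer)
RETURNS integer AS $$
DECLARE
    rec record;
    n   integer := 0;
BEGIN
    FOR rec IN SELECT * FROM tab
               WHERE NOT processed
               ORDER BY id
               LIMIT chunksize LOOP
        -- ... do the real per-row work on rec here ...
        UPDATE tab SET processed = true WHERE id = rec.id;
        n := n + 1;
    END LOOP;
    RETURN n;  -- 0 means nothing left to do
END;
$$ LANGUAGE plpgsql;
```

The driver then loops from the client side, e.g. `SELECT process_chunk(1000);` followed by `COMMIT;`, repeating until the function returns 0. (Newer PostgreSQL releases, 11 and up, also allow COMMIT inside a true CREATE PROCEDURE, which would let the loop live server-side.)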