"Andrew Dunstan" <[EMAIL PROTECTED]> writes:
> Yes, more or less. That's what made me think of it.
> OTOH, before we rush out and do it someone needs to show that it's a net win.
> I agree with Tom that making tuplestore faster would probably be a much
> better investment of time.
I don't think the problem with the tuplestore is a matter of speed. It's a
matter of scalability and flexibility. It limits the types of applications
that can use SRFs and the amount of data they can manipulate before it becomes
impractical.
Consider applications like dblink that have SRFs that read data from slow
network sources. Or that generate more data than the server can actually store
at any one time. Or that overflow work_mem but are used in queries that could
return quickly based on the first few records.
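To make the "return quickly based on the first few records" point concrete, here's a hypothetical sketch in Python (not the server's actual code): a materialized result must be built in full before the first row is visible, while a value-at-a-time producer lets a LIMIT-style consumer stop early.

```python
import itertools

def materialized_rows(n):
    # Tuplestore-style: build the entire result set before the
    # caller sees even one row. Cost is O(n) no matter how few
    # rows the query actually needs.
    return [i * i for i in range(n)]

def streaming_rows(n):
    # Value-per-call style: produce each row on demand, so a
    # consumer that wants only a few rows does only that much work.
    for i in range(n):
        yield i * i

# A consumer needing three rows pays the full cost with the
# materialized version, but almost nothing with the stream.
first_three = list(itertools.islice(streaming_rows(10**6), 3))
print(first_three)  # [0, 1, 4]
```

The same asymmetry applies to the dblink case: with materialization the whole remote result has to arrive (and be stored) before the query can make progress.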
Unfortunately, I don't think there's a simple fix that'll work for all PLs
using the current interface. Even languages with iterators themselves (python,
I think) probably don't expect to be called externally while an iterator is in
progress.
It seems to me the way to fix it is to abandon the iterator-style interface in
favor of an interface that allows you to implement an SRF by providing a function
that returns just the "next" record. It would have to save enough state for
the next iteration explicitly in a data structure rather than being able to
depend on the entire program state being restored.
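As a hypothetical sketch of that shape of interface (Python here purely for illustration; names like SRFState, srf_first, and srf_next are invented, not a real API): the executor owns a small explicit state object, and the SRF is just a "give me the next row" callback that updates that state.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SRFState:
    # All per-call state lives here explicitly, rather than in the
    # suspended program state of an iterator or coroutine.
    current: int
    limit: int

def srf_first(limit: int) -> SRFState:
    # Called once per call site to set up the state structure.
    return SRFState(current=0, limit=limit)

def srf_next(state: SRFState) -> Optional[int]:
    # Called repeatedly; returns the next row, or None when done.
    if state.current >= state.limit:
        return None
    row = state.current * 10
    state.current += 1
    return row

# The executor's loop: drive the function one row at a time.
state = srf_first(3)
rows = []
while (row := srf_next(state)) is not None:
    rows.append(row)
print(rows)  # [0, 10, 20]
```

The key property is that nothing about the producer's progress is hidden: the executor can park the state, interleave other work, and resume later, which is exactly what the tuplestore approach avoids by materializing everything up front.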
You could argue that you can already do this using a non-SRF function, but there
are two problems: 1) there's no convenient way to stash the state anywhere, and
2) it wouldn't be convenient to use in SQL FROM clauses the way SRFs are.
IIRC there is already a hack in at least one of the PLs to stash data in a
place where you can access it in repeated invocations. It doesn't work
correctly if you call your function from two different places in a query. It
would take executor support for such state data structures to fix that
properly.
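The failure mode, and what executor support would buy, can be sketched like this (again a hypothetical Python illustration, not the actual PL hack): if state is keyed by call site rather than held in one shared global, two invocations of the same function in one query no longer clobber each other.

```python
# Hypothetical: the executor hands each FROM-clause call site an
# identifier, and state is stored per call site. With a single
# module-level global instead, the two call sites below would share
# (and corrupt) one counter.
call_site_state: dict[int, dict] = {}

def counter_srf(call_site_id: int) -> int:
    # Each call site gets an independent counter.
    state = call_site_state.setdefault(call_site_id, {"n": 0})
    state["n"] += 1
    return state["n"]

# Two call sites within the "same query" stay independent:
a = [counter_srf(1) for _ in range(3)]
b = [counter_srf(2) for _ in range(2)]
print(a, b)  # [1, 2, 3] [1, 2]
```

The call_site_id is the piece only the executor can supply reliably, which is why a per-PL stash can't solve this on its own.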