The generated buffer is 140MB, not 14MB. At 14 bytes per row, that makes
sense.
I have done another test.
If I execute `FETCH ALL FROM cursor` I get a 140MB disk buffer file, on the
PostgreSQL server, reported in its log.
If I execute `FETCH 500 FROM cursor` (exactly half the rows), I see a
Thanks, Tom.
Wouldn't this mean that cursors are noticeably suboptimal even for normal
data sizes, since all the data to be streamed from the table is first
duplicated into another buffer and only then streamed?
> if you want the whole query result at once, why are you bothering with a
> cursor?
> Seems odd. Is your cursor just on "SELECT * FROM table", or is there
> some processing in there you're not mentioning? Maybe it's a cursor
> WITH HOLD and you're exiting the source transaction?
Hi Tom,
I've deleted my own Stack Overflow answer in favour of Laurenz Albe's one.
New TL;DR (I'm
Dear Tom,
This is very helpful, thank you.
You make a very useful point that the limitation is basically in PL/pgSQL
and the other PL languages. And someone on SO already pointed out that an
inline SQL function with an enormous `TABLE` return value also doesn't
have any buffering problems. So
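For anyone following along, the contrast can be sketched like this. The planner can inline a simple `LANGUAGE sql` set-returning function much like a view, so its rows stream to the client; a `LANGUAGE plpgsql` function, by contrast, materialises its whole result set in a tuplestore (spilling to disk) before the first row is returned. Table and function names here are made up for illustration:

```sql
-- Inline-able SQL function: expanded by the planner like a view,
-- so rows stream without server-side materialisation.
CREATE FUNCTION stream_rows() RETURNS TABLE (id int, payload text)
LANGUAGE sql STABLE AS $$
    SELECT id, payload FROM huge_table;  -- huge_table is hypothetical
$$;

-- PL/pgSQL equivalent: the entire result set is collected in a
-- tuplestore before any row reaches the caller.
CREATE FUNCTION buffer_rows() RETURNS TABLE (id int, payload text)
LANGUAGE plpgsql STABLE AS $$
BEGIN
    RETURN QUERY SELECT id, payload FROM huge_table;
END;
$$;
```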
I meant to say: "the `FETCH 1 FROM cursor` until exhausted pattern will
always be safe". Nasty typo, sorry!
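To spell out the pattern I mean (cursor and table names are placeholders), it's the usual bounded-batch loop:

```sql
BEGIN;
DECLARE cur CURSOR FOR SELECT * FROM huge_table;  -- hypothetical table

-- Safe pattern: fetch a bounded chunk, repeat until FETCH returns
-- zero rows, so only one chunk is ever held at a time.
FETCH 1 FROM cur;
-- (in practice a larger batch, e.g. FETCH 10000, amortises round trips)

CLOSE cur;
COMMIT;
```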
**Short version of my question:**
If I hold a cursor reference to an astronomically huge result set in my
client code, would it be ridiculous (i.e. completely defeat the point of
cursors) to issue "FETCH ALL FROM cursorname" as my next command? Or would
this slowly stream the data back to me as it was produced?
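Concretely, the sequence in question looks like this (cursor and table names assumed for illustration):

```sql
BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM huge_table;  -- hypothetical table
FETCH ALL FROM big_cur;  -- does this stream, or buffer server-side first?
COMMIT;
```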
I asked the same question at the same time on Stack Overflow (sincere
apologies if this is a breach of etiquette - I really needed an answer, and
I thought the two communities might not overlap).
Stack Overflow now has an answer, by me:
http://stackoverflow.com/q/42292341/#42297234 - which is