> Now, I agree this is somewhat more limited than I hoped for, but OTOH it
> still solves the issue I initially aimed for (processing large query
> results in an efficient way).
I don't quite understand this part. We already send results to the client in a stream unless it's something we have to materialize, in which case a cursor won't help anyway.

If the client wants to fetch in chunks it can use a portal and limited-size fetches. That shouldn't (?) be parallel-unsafe, since nothing else can happen in the middle anyway. But in most cases the client can just fetch everything and let socket buffering take care of things: it reads results only when it wants them, and the server blocks when the windows are full.

That's not to say that SQL-level cursor support wouldn't be nice. I'm just trying to better understand what it's solving.

-- 
 Craig Ringer                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (email@example.com)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
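For illustration, the chunked-fetch pattern being discussed looks like this at the SQL level (the table and cursor names here are hypothetical, not from the thread):

```sql
BEGIN;
-- Open a server-side cursor over a large result set.
DECLARE big_cur CURSOR FOR SELECT * FROM big_table;
-- Pull rows in bounded chunks; repeat until a fetch returns no rows.
FETCH 1000 FROM big_cur;
FETCH 1000 FROM big_cur;
CLOSE big_cur;
COMMIT;
```

A client can get the same effect without SQL-level cursors by using the extended query protocol: bind a named portal and send Execute messages with a row-count limit, or use a driver feature built on that mechanism (e.g. libpq's single-row mode via PQsetSingleRowMode, or JDBC's setFetchSize).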