On 7/16/10 7:15 PM +0300, Hitoshi Harada wrote:
> 2010/7/17 Marko Tiikkaja <marko.tiikk...@cs.helsinki.fi>:
>> I thought about this, but I don't necessarily like the idea of
>> overloading executor nodes.
>
> Neither do I have a good shape for this solution. Maybe it's not a
> good idea. But my concern is adding DtScanNode, which looks similar
> to MaterialNode. Of course their purposes are different, but I think
> quite a big part would overlap.

The way I see it, reading from a tuplestore is so simple that we
shouldn't be trying to merge nodes together on that basis alone. It
seems to me that we'd have to add the CteScan and WorkTableScan nodes
there too, and at that point it would get complicated.

>> I didn't look at this because I thought using a "tuplestore receiver"
>> in the portal logic was simple enough.  Any thoughts on how this
>> would work?
>
> It's just deconstructing queries like:
>
> WITH t AS (INSERT INTO x ... RETURNING *)
> SELECT * FROM t;
>
> to
>
> CREATE TEMP TABLE t AS INSERT INTO x ... RETURNING *;
> SELECT * FROM t;

That's an idea. Can we somehow avoid name clashes with user-defined temporary tables?
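
For illustration (a hypothetical session; the table names are made
up), the deconstructed form above breaks as soon as the user's
session already has a temporary table with the same name as the CTE:

CREATE TEMP TABLE t (x int);   -- user's own temporary table

WITH t AS (INSERT INTO x ... RETURNING *)
SELECT * FROM t;
-- internally deconstructed into:
CREATE TEMP TABLE t AS INSERT INTO x ... RETURNING *;
-- ERROR:  relation "t" already exists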

> Another concern is tuplestores exhausting memory. A tuplestore holds
> tuples in memory as long as the estimated memory usage stays within
> work_mem (for *each* tuplestore, not the total of all of them!), but
> if you create dozens of tuplestores (quite possible in wCTE use
> cases) we could run into memory overflow problems.

That doesn't seem very different from a big SELECT query, except that
with wCTEs you actually *know*, before you run the query, how many
times work_mem can be used, and can adjust work_mem accordingly.
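
For example (a sketch; the table names and sizes are made up): a
query that fills three tuplestores can use up to three times
work_mem, so you could scale work_mem down up front:

BEGIN;
SET LOCAL work_mem = '16MB';  -- three wCTE tuplestores below,
                              -- so up to 3 x 16MB in total
WITH a AS (INSERT INTO x ... RETURNING *),
     b AS (INSERT INTO y ... RETURNING *),
     c AS (INSERT INTO z ... RETURNING *)
SELECT ...;
COMMIT;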

That said, I personally could live with a separate GUC just for
adjusting the work_mem of wCTE tuplestores. Another option would be
to unconditionally force the tuplestores to disk, but that sounds
painful.
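
Something like this, say (the GUC name here is purely hypothetical):

SET wcte_work_mem = '4MB';  -- hypothetical GUC: per-tuplestore memory
                            -- limit for wCTEs, independent of work_mem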


Regards,
Marko Tiikkaja
