Robert Haas robertmh...@gmail.com writes:
On Sat, Oct 31, 2009 at 5:00 PM, Marko Tiikkaja marko.tiikk...@cs.helsinki.fi wrote:
What I've had in mind is pipelining the execution only when it doesn't have *any* impact on the outcome. This would mean only allowing it when the top-level ...

Robert Haas wrote:
You'd also have to disallow the case when there are any triggers on the INSERT, or where there are any triggers on anything else (because they might access the target table of the INSERT). This will end up being so restricted as to be useless.
I might be wrong here, but I ...

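The trigger hazard Robert describes is easy to construct. A minimal sketch with hypothetical tables `queue` and `archive`: an AFTER INSERT trigger on the insert target that reads the DELETE's target table, so what it observes would depend on how far a pipelined DELETE had progressed.

```sql
-- Hypothetical illustration of a trigger that inspects the DELETE's
-- target table. Under pipelined execution, the count it reports would
-- depend on how many queue rows happened to be deleted already; with
-- one-at-a-time execution it is deterministic.
CREATE FUNCTION log_remaining() RETURNS trigger AS $$
BEGIN
    RAISE NOTICE 'rows left in queue: %', (SELECT count(*) FROM queue);
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER archive_audit
    AFTER INSERT ON archive
    FOR EACH ROW EXECUTE PROCEDURE log_remaining();
```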
Tom Lane wrote:
However, this still doesn't address the problem of what happens when the top-level select fails to read all of the CTE output (because it has a LIMIT, or the client doesn't read all the output of a portal, etc etc). Partially executing an update in such cases is no good.
I've ...

Marko Tiikkaja marko.tiikk...@cs.helsinki.fi writes:
I've previously thought about making the CTE aware of the LIMIT, similarly to a top-N sort, but I don't think it's worth it.
That approach doesn't lead to a solution because then you could *never* optimize it. The protocol-level limit option ...

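The LIMIT problem under discussion can be made concrete. A sketch, assuming a hypothetical table `queue` and the syntax writeable CTEs eventually took (PostgreSQL 9.1):

```sql
-- If execution were pipelined, the outer LIMIT could stop pulling rows
-- from the CTE after 10, leaving the DELETE only partially applied.
-- Running the DELETE to completion first means the LIMIT restricts only
-- what the client sees, never how many rows are deleted.
WITH deleted AS (
    DELETE FROM queue RETURNING id, payload
)
SELECT * FROM deleted LIMIT 10;
```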
On Nov 1, 2009, at 10:12 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
On Sat, Oct 31, 2009 at 5:00 PM, Marko Tiikkaja marko.tiikk...@cs.helsinki.fi wrote:
What I've had in mind is pipelining the execution only when it doesn't have *any* impact on the ...

Greg Stark wrote:
On Thu, Oct 29, 2009 at 7:17 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Pipelined execution would be nice but I really doubt that it's worth what we'd have to give up to have it. The one-at-a-time behavior will be simple to understand and reliable to use. Concurrent execution ...

On Sat, Oct 31, 2009 at 5:00 PM, Marko Tiikkaja marko.tiikk...@cs.helsinki.fi wrote:
Greg Stark wrote:
On Thu, Oct 29, 2009 at 7:17 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Pipelined execution would be nice but I really doubt that it's worth what we'd have to give up to have it. The ...

Robert Haas robertmh...@gmail.com writes:
To be honest, I'm not entirely comfortable with either behavior. Pipelining a delete out of one table into an insert into another table seems VERY useful to me, and I'd like us to have a way to do that. On the other hand, in more complex cases, the ...

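The delete-into-insert pattern Robert mentions looks roughly like this (hypothetical tables; the syntax shown is the form writeable CTEs eventually shipped in, not necessarily the `WITH (... RETURNING ...)` spelling discussed in the thread):

```sql
-- Move completed jobs from one table to another in a single statement.
WITH moved AS (
    DELETE FROM pending_jobs
     WHERE done
     RETURNING job_id, result
)
INSERT INTO finished_jobs (job_id, result)
SELECT job_id, result FROM moved;
```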
On Thu, Oct 29, 2009 at 7:17 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Pipelined execution would be nice but I really doubt that it's worth what we'd have to give up to have it. The one-at-a-time behavior will be simple to understand and reliable to use. Concurrent execution won't be either.
I ...

In http://archives.postgresql.org/message-id/26545.1255140...@sss.pgh.pa.us I suggested that we should push the actual execution (not just queuing) of non-deferred AFTER triggers into the new ModifyTable plan node. The attached patch does that, and seems like a nice improvement since it removes ...

Tom Lane wrote:
So, before I go off and do that work: anybody have an objection to this line of development? The main implication of changing to this approach is that we'll be nailing down the assumption that each WITH (command RETURNING) clause acts very much like a separate statement for ...

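A sketch of what "acts very much like a separate statement" came to mean in practice (hypothetical tables; in the shipped feature each clause runs to completion and all clauses share one snapshot, so a later clause sees another's output only through the CTE reference, not through the modified table):

```sql
WITH d AS (
    DELETE FROM queue WHERE done RETURNING id
)
-- The UPDATE can count the deleted rows via the CTE reference, but a
-- direct subquery on queue here would still see them: every clause
-- runs against the statement's starting snapshot.
UPDATE stats
   SET moved = moved + (SELECT count(*) FROM d);
```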
Marko Tiikkaja marko.tiikk...@cs.helsinki.fi writes:
Like we've discussed before, WITH (.. RETURNING ..) is probably most useful for moving rows from one table to another. When you're moving a lot of rows around, there's some point where I believe this execution strategy will be a lot slower ...

On Wed, Oct 28, 2009 at 8:45 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Marko Tiikkaja marko.tiikk...@cs.helsinki.fi writes:
Like we've discussed before, WITH (.. RETURNING ..) is probably most useful for moving rows from one table to another. When you're moving a lot of rows around, there's some ...

13 matches