Re: [HACKERS] FDW-based dblink (WIP)

2009-08-20 Thread Heikki Linnakangas
Itagaki Takahiro wrote:
 However, automatic transaction management needs help from core. Is it
 acceptable to have two-phase callbacks?

Probably, but it's far too early to decide what the interface should
look like.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] WIP: generalized index constraints

2009-08-20 Thread Heikki Linnakangas
Jeff Davis wrote:
 I'm going to try to get this patch ready for the 9-15 commitfest. Here
 are a few design updates:
 
 (1) Language:
 
 I think that the new language should be a table constraint, and I think
 there's a consensus on that. The specific language I have in mind is:
 
   CREATE TABLE (
 ...,
 INDEX CONSTRAINT (attname op, ...) USING INDEX indexname
   );

That sounds like the constraint is based on an existing index, but there
can't be any existing indexes on a table that hasn't been created yet.
If this creates the index, then the syntax needs to support specifying
index access method and an opclass for all the columns.

   ALTER TABLE ADD INDEX CONSTRAINT (attname op, ...)
 USING INDEX indexname;
 
 Where op is the constraint operator. For example, if all ops are
 = (or whatever the operator for BTEqualStrategy is for that type), that
 would be semantically identical to a UNIQUE constraint.

This makes more sense.

It would be nice to have syntax to create the index and constraint in
one command, so that the constraint can be checked while the index is
being created. Otherwise you need an extra heap scan to check it.
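
(For concreteness, the quoted ALTER TABLE form would allow something like
the following; the table, the gist index on a circle column, and the
choice of && as the constraint operator are purely illustrative:)

   CREATE INDEX foo_c_idx ON foo USING gist (c);
   ALTER TABLE foo ADD INDEX CONSTRAINT (c &&) USING INDEX foo_c_idx;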

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] hot standby - merged up to CVS HEAD

2009-08-20 Thread Robert Haas
On Thu, Aug 20, 2009 at 1:55 AM, Heikki Linnakangas
<heikki.linnakan...@enterprisedb.com> wrote:
 When that is replayed, ProcArrayUpdateTransactions() will zap the
 unobserved xids array with the list that includes XID 123, even though
 we already saw a commit record for it.

 I looked at this a little more.  I'm wondering if we can fix this by
 making ProcArrayUpdateRecoveryTransactions() smarter.  Can we just
 refuse to add an Xid to the UnobservedXids() array if in fact we've
 already observed it?  (Not sure how to check that.)

 There's also the opposite problem: If a transaction starts (and writes a
 WAL record) between LogCurrentRunningXacts() and XLogInsert(), it is not
 present in the RunningXacts record. When the standby replays the
 RunningXacts record, it removes the XID of that transaction from the
 array, even though it's still running.

Yep, Simon appears to have contemplated that problem - see comments in
ProcArrayUpdateRecoveryTransactions().

 Fixing this on the
 master would seem to require acquiring the WALInsertLock before
 calling GetRunningTransactionData() and holding it until we finish
 writing that data to WAL, which I suspect someone's going to complain
 about...

 Yeah, it's hard to get that locking right without risking deadlock. As
 the patch stands, we only write a RunningXacts record once per
 checkpoint, so it's not performance critical, but we must avoid deadlocks.

 If there's a way, I would prefer a solution where the RunningXacts
 snapshot represents the situation the moment it appears in WAL, not some
 moment before it. It makes the logic easier to understand.

I think this is going to be difficult.  At a preliminary look, it
seems to require taking a sledgehammer to the abstraction layer
encapsulated by XLogInsert().  It's also going to require holding both
ProcArrayLock and WALInsertLock simultaneously.  I'm not sure where
the risk of deadlock comes in - we just have to define a rule (or
maintain the existing rule) about which order to acquire those two
locks in. But I'm guessing the existing rule is along the lines of
"Don't do that, or Tom Lane will reject your patch and all you'll get
is this stupid T-shirt."

http://img199.yfrog.com/i/b9w.jpg/

...Robert



Re: [HACKERS] explain root element for auto-explain

2009-08-20 Thread Andrew Dunstan



Andrew Dunstan wrote:

Bruce Momjian wrote:

Are we going to publish an XML DTD for EXPLAIN, or have we already?


Not a DTD, but I am working on an XML Schema (DTDs are a bit yesterday).




Here is a RelaxNG spec which people might find a bit easier to read. It 
has been autocreated by a little tool called trang, that I used on a 
very large body of explain output that I produced by mangling the 
regression tests (and, incidentally, crashing the server in the result - 
I still have to chase that up).


I have a couple of questions, however. First, in that long list of 
alternatives for a Plan node, can any of them occur more than once? 
Second, we are using Item as a child of both Output and Sort-Key nodes. 
Are they really describing the same thing? And in any case, Item is a 
wonderfully non-informative name, as is Output, for that matter.


BTW - I know this requires tweaking - those xsd:NCName values will 
probably just become text, for example.


cheers

andrew


default namespace = "http://www.postgresql.org/2009/explain"

start =
   element explain {
 element Query {
   Plan,
   element Triggers { empty },
   element Total-Runtime { xsd:decimal }
 }
   }
Plan =
 element Plan {
   (element Actual-Loops { xsd:integer }
| element Actual-Rows { xsd:integer }
| element Actual-Startup-Time { xsd:decimal }
| element Actual-Total-Time { xsd:decimal }
| element Alias { text }
| element Filter { text }
| element Function-Name { xsd:NCName }
| element Hash-Cond { text }
| element Index-Name { xsd:NCName }
| element Join-Filter { text }
| element Join-Type { xsd:NCName }
| element Merge-Cond { text }
| element Node-Type { text }
| element One-Time-Filter { text }
| element Output { Item+ }
| element Parent-Relationship { xsd:NCName }
| element Plan-Rows { xsd:integer }
| element Plan-Width { xsd:integer }
| element Plans { Plan* }
| element Recheck-Cond { text }
| element Relation-Name { xsd:NCName }
| element Scan-Direction { xsd:NCName }
| element Schema { xsd:NCName }
| element Sort-Key { Item+ }
| element Sort-Method { text }
| element Sort-Space-Type { xsd:NCName }
| element Sort-Space-Used { xsd:integer }
| element Startup-Cost { xsd:decimal }
| element Strategy { xsd:NCName }
| element Subplan-Name { text }
| element Total-Cost { xsd:decimal })*,
   element Index-Cond { text }?
 }
Item = element Item { text }
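
(For anyone who wants to generate output to check against this, the
corpus came from invocations of the new EXPLAIN options along these
lines; the query itself is just an arbitrary example:)

   EXPLAIN (ANALYZE true, VERBOSE true, FORMAT xml)
     SELECT * FROM pg_class WHERE relname = 'pg_proc';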




Re: [HACKERS] explain root element for auto-explain

2009-08-20 Thread Robert Haas
On Thu, Aug 20, 2009 at 9:30 AM, Andrew Dunstan <and...@dunslane.net> wrote:


 Andrew Dunstan wrote:

 Bruce Momjian wrote:

 Are we going to publish an XML DTD for EXPLAIN, or have we already?

 Not a DTD, but I am working on an XML Schema (DTDs are a bit yesterday).



 Here is a RelaxNG spec which people might find a bit easier to read. It has
 been autocreated by a little tool called trang, that I used on a very large
 body of explain output that I produced by mangling the regression tests

This is definitely easy to read, especially for XML.

 (and, incidentally, crashing the server in the result - I still have to
 chase that up).

Hmm.

 I have a couple of questions, however. First, in that long list of
 alternatives for a Plan node, can any of them occur more than once?

I don't think so.  But I also don't think Index-Cond should be treated
specially, as you have done here.

 Second,
 we are using Item as a child of both Output and Sort-Key nodes. Are they
 really describing the same thing? And in any case, Item is a wonderfully
 non-informative name, as is Output, for that matter.

Well, I can't help Output.  That's what 8.4-EXPLAIN calls it.  I do
think maybe it should be ripped out of EXPLAIN (VERBOSE) and made a
separate option.

Are they really the same thing?  Obviously not.  I just needed a way
to make a list of scalars in XML and I picked that for want of
creativity.

 BTW - I know this requires tweaking - those xsd:NCName values will probably
 just become text, for example.

As far as I'm concerned, you're already way ahead producing something
that fits on the screen.

...Robert



Re: [HACKERS] explain root element for auto-explain

2009-08-20 Thread Tom Lane
Andrew Dunstan and...@dunslane.net writes:
 I have a couple of questions, however. First, in that long list of 
 alternatives for a Plan node, can any of them occur more than once? 

No.

 Second, we are using Item as a child of both Output and Sort-Key nodes. 
 Are they really describing the same thing? And in any case, Item is a 
 wonderfully non-informative name, as is Output, for that matter.

They are both describing expressions.  I wanted to rename Item as Expr,
if you'll recall.  But I think we should have a concrete plan about
all the tweaks we want to make to the output schema before doing
anything, so I haven't pushed to change it immediately.

I don't see anything wrong with Output --- what else would you call the
output expressions of a node?

regards, tom lane



[HACKERS] Duplicated Keys in PITR

2009-08-20 Thread Ygor Degani
When recovering a database using a continuous archive backup, I detected some
duplicated keys. These do not exist in the production database.
I use postgres-8.3.5.
Does anyone know why this happens?

Regards,

-- 
Ygor Degani


Re: [HACKERS] WIP: generalized index constraints

2009-08-20 Thread Jeff Davis
On Thu, 2009-08-20 at 11:47 +0300, Heikki Linnakangas wrote:
 That sounds like the constraint is based on an existing index, but there
 can't be any existing indexes on a table that hasn't been created yet.
 If this creates the index, then the syntax needs to support specifying
 index access method and an opclass for all the columns.

Of course, thanks for pointing that out. To make it work at CREATE TABLE
time, the language would have to specify the index access method, and
the index name should be optional. Do you think it's worthwhile to adjust
the syntax for that, or would it just bloat the CREATE TABLE syntax for
no reason?

I'm leaning toward not allowing it at CREATE TABLE time.

 It would be nice to have syntax to create the index and constraint in
 one command, so that the constraint can be checked while the index is
 being created. Otherwise you need an extra heap scan to check it.

I can leave the old syntax in:

CREATE INDEX indexname ON tablename USING method
  (a CONSTRAINT op, b CONSTRAINT op) ...;

and allow both.
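
For example (with a gist index on a circle column and && as the
constraint operator, purely for illustration):

   CREATE INDEX foo_c_idx ON foo USING gist (c CONSTRAINT &&);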

However, I'm not sure if it's very easy to provide support for
concurrent index building. Should I block it, or is it worth
investigating further?

Regards,
Jeff Davis




[HACKERS] Linux start script updates

2009-08-20 Thread Kevin Grittner
Due to a thread about the neglect of the sample start scripts I took a
look at the current Linux file.  There's certainly room for several
improvements, but some of them might require discussion.  Attached are
a couple small changes which seem to me to be pretty tame.  Hopefully
a small, non-controversial step in the right direction.
 
(1)  It adds an LSB INIT INFO comment block, consistent with the
chkconfig comment block above it.
 
http://refspecs.freestandards.org/LSB_3.1.0/LSB-Core-generic/LSB-Core-generic/initscrcomconv.html
 
(2)  It doesn't exit with zero for a missing executable unless the
request is "stop".  It uses 5, which means "program is not installed".
 
http://refspecs.freestandards.org/LSB_3.1.0/LSB-Core-generic/LSB-Core-generic/iniscrptact.html
 
-Kevin


start-linux-1.diff
Description: Binary data



[HACKERS] Multi-pass planner

2009-08-20 Thread decibel
There have been a number of planner improvement ideas that have been  
thrown out because of the overhead they would add to the planning  
process, specifically for queries that would otherwise be quite fast.  
Other databases seem to have dealt with this by creating plan caches  
(which might be worth doing for Postgres), but what if we could  
determine when we need a fast planning time vs when it won't matter?


What I'm thinking is that on the first pass through the planner, we  
only estimate things that we can do quickly. If the plan that falls  
out of that is below a certain cost/row threshold, we just run with  
that plan. If not, we go back and do a more detailed estimate.

--
Decibel!, aka Jim C. Nasby, Database Architect  deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828







Re: [HACKERS] explain root element for auto-explain

2009-08-20 Thread Andrew Dunstan


I wrote:



Andrew Dunstan wrote:

Bruce Momjian wrote:

Are we going to publish an XML DTD for EXPLAIN, or have we already?


Not a DTD, but I am working on an XML Schema (DTDs are a bit yesterday).




Here is a RelaxNG spec which people might find a bit easier to read. 
It has been autocreated by a little tool called trang, that I used on 
a very large body of explain output that I produced by mangling the 
regression tests (and, incidentally, crashing the server in the result 
- I still have to chase that up).



Updated version with more complete information (regression crash was due 
to my bad script).


cheers

andrew


default namespace = "http://www.postgresql.org/2009/explain"

start =
   element explain {
 element Query {
   Plan,
   Triggers,
   element Total-Runtime { xsd:decimal }
  }+
  }
Plan =
 element Plan {
   (element Actual-Loops { xsd:integer }
| element Actual-Rows { xsd:integer }
| element Actual-Startup-Time { xsd:decimal }
| element Actual-Total-Time { xsd:decimal }
| element Alias { text }
| element CTE-Name { text }
| element Command { text }
| element Filter { text }
| element Function-Name { text }
| element Hash-Cond { text }
| element Index-Cond { text }
| element Index-Name { text }
| element Join-Filter { text }
| element Join-Type { text }
| element Merge-Cond { text }
| element Node-Type { text }
| element One-Time-Filter { text }
| element Output { Item* }
| element Parent-Relationship { text }
| element Plan-Rows { xsd:integer }
| element Plan-Width { xsd:integer }
| element Plans { Plan* }
| element Recheck-Cond { text }
| element Relation-Name { text }
| element Scan-Direction { text }
| element Schema { text }
| element Sort-Key { Item+ }
| element Sort-Method { text }
| element Sort-Space-Type { text }
| element Sort-Space-Used { xsd:integer }
| element Startup-Cost { xsd:decimal }
| element Strategy { text }
| element Subplan-Name { text }
| element Total-Cost { xsd:decimal }
| element TID-Cond { text })*
 }
Triggers =
 element Triggers {
   element Trigger {
 element Trigger-Name { text },
 element Constraint-Name { text }?,
 element Relation { text },
 element Time { xsd:decimal },
 element Calls { xsd:integer }
   }*
 }
Item = element Item { text }





Re: [HACKERS] explain root element for auto-explain

2009-08-20 Thread Tom Lane
Andrew Dunstan and...@dunslane.net writes:
 Updated version with more complete information (regression crash was due 
 to my bad script).

I took a look through the source code to match it against this.  I found
that you missed a couple of possibilities: we have <Notify /> and
<Utility-Statement /> as alternatives to Query just below explain.

Also, it looks to me like Item is simply being used as an element
of lists (cf ExplainPropertyList); I was mistaken to equate it with
Expr.  I don't know XML well enough to understand if we really need
that syntactic detail, or if there's a more idiomatic way to treat
lists.

BTW, I wonder why explain doesn't have an init-cap like every other
node type name ...

regards, tom lane



Re: [HACKERS] Duplicated Keys in PITR

2009-08-20 Thread Greg Stark
On Thu, Aug 20, 2009 at 4:13 PM, Ygor Degani <ygordeg...@gmail.com> wrote:
 When recovering a database using a continuous archive backup, I detected some
 duplicated keys. These do not exist in the production database.
 I use postgres-8.3.5.
 Does anyone know why this happens?

Uhm, well it's not supposed to. Do you have more information?

How are your duplicate keys being generated? Is there a unique index on them?

Are the keys there if you use a sequential scan or only if you use an
index to look them up?

Have you been running 8.3.5 the whole time or were you running older
8.3 releases for some of the time? Why aren't you running 8.3.7 --
there are known bugs in 8.3.5 though none which look to be related.

How long has the archive been running, is it feasible to give someone
the backup and complete archives going back to when it was taken?

Can you dump the pages with the bad keys using the pageinspect contrib
module? Do you need instructions on how to do that?
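
(Roughly, after installing the pageinspect SQL script from contrib, it
would be queries along these lines; the table, index and block numbers
here are only placeholders:)

   SELECT * FROM heap_page_items(get_raw_page('mytable', 0));
   SELECT * FROM bt_page_items('mytable_key_idx', 1);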



-- 
greg
http://mit.edu/~gsstark/resume.pdf



Re: [HACKERS] Multi-pass planner

2009-08-20 Thread Robert Haas
On Thu, Aug 20, 2009 at 11:15 AM, decibel <deci...@decibel.org> wrote:
 There have been a number of planner improvement ideas that have been thrown
 out because of the overhead they would add to the planning process,
 specifically for queries that would otherwise be quite fast. Other databases
 seem to have dealt with this by creating plan caches (which might be worth
 doing for Postgres), but what if we could determine when we need a fast
 planning time vs when it won't matter?

 What I'm thinking is that on the first pass through the planner, we only
 estimate things that we can do quickly. If the plan that falls out of that
 is below a certain cost/row threshold, we just run with that plan. If not,
 we go back and do a more detailed estimate.

It's not a dumb idea, but it might be hard to set that threshold
correctly.  You might have some queries that you know take 4 seconds
(and you don't want to spend another 200 ms planning them every time
for no benefit) and other queries that only take 500 ms (but you know
that they can be squashed down to 150 ms with a bit more planning).

I think one of the problems with the planner is that all decisions are
made on the basis of cost.  Honestly, it works amazingly well in a
wide variety of situations, but it can't handle things like "we might
as well materialize here, because it doesn't cost much and there's a
big upside if our estimates are off".  The estimates are the world,
and you live and die by them.

...Robert



Re: [HACKERS] explain root element for auto-explain

2009-08-20 Thread Andrew Dunstan



Tom Lane wrote:

Andrew Dunstan and...@dunslane.net writes:
  
Updated version with more complete information (regression crash was due 
to my bad script).



I took a look through the source code to match it against this.  I found
 that you missed a couple of possibilities: we have <Notify /> and
 <Utility-Statement /> as alternatives to Query just below explain.
  



What causes those to happen? Here's how I mangled the regression tests 
to produce the output that this analysis was taken from:


   perl -spi.bak -e 
's/^(insert|update|select|delete|declare|execute|create table .* as) 
/explain (analyse true, verbose true, format xml) $1 /i;' *.sql


cheers

andrew





Re: [HACKERS] explain root element for auto-explain

2009-08-20 Thread Tom Lane
Andrew Dunstan and...@dunslane.net writes:
 Tom Lane wrote:
 I took a look through the source code to match it against this.  I found
 that you missed a couple of possibilities: we have <Notify /> and
 <Utility-Statement /> as alternatives to Query just below explain.

 What causes those to happen?

You can get a Notify via explaining a command that's been affected
by a rule like
CREATE RULE foo ... DO ALSO NOTIFY foo;
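
(So, for example, something along these lines should produce one; the
table is made up and the EXPLAIN options follow the invocation used to
build the corpus:)

   CREATE TABLE foo (a int);
   CREATE RULE foo_notify AS ON UPDATE TO foo DO ALSO NOTIFY foo;
   EXPLAIN (VERBOSE true, FORMAT xml) UPDATE foo SET a = 1;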

I think the Utility-Statement case is not actually reachable code at
present.  NOTIFY is the only utility command that's allowed in CREATE
RULE, and auto-explain is hooked in in a place where it can't see
utility statements at all.  I suppose we could make EXPLAIN throw error
there, instead of printing a node type we'd have to document.

regards, tom lane



Re: [HACKERS] explain root element for auto-explain

2009-08-20 Thread Robert Haas
On Thu, Aug 20, 2009 at 12:40 PM, Andrew Dunstan <and...@dunslane.net> wrote:


 Tom Lane wrote:

 Andrew Dunstan and...@dunslane.net writes:


 Updated version with more complete information (regression crash was due
 to my bad script).


 I took a look through the source code to match it against this.  I found
 that you missed a couple of possibilities: we have <Notify /> and
 <Utility-Statement /> as alternatives to Query just below explain.



 What causes those to happen? Here's how I mangled the regression tests to
 produce the output that this analysis was taken from:

   perl -spi.bak -e 's/^(insert|update|select|delete|declare|execute|create
 table .* as) /explain (analyse true, verbose true, format xml) $1 /i;' *.sql

CREATE RULE foo_notify AS ON UPDATE TO foo DO ALSO NOTIFY bob;

I am not sure that there's any way to get any other kind of utility
statement in there; I think that's just a safety valve in case someone
changes the rule mechanism and forgets to update EXPLAIN.

...Robert



Re: [HACKERS] Multi-pass planner

2009-08-20 Thread Kevin Grittner
Robert Haas robertmh...@gmail.com wrote:
 
 I think one of the problems with the planner is that all decisions
 are made on the basis of cost.  Honestly, it works amazingly well in
 a wide variety of situations, but it can't handle things like we
 might as well materialize here, because it doesn't cost much and
 there's a big upside if our estimates are off.  The estimates are
 the world, and you live and die by them.
 
[thinking out loud]
 
If there were some reasonable way to come up with a *range* for cost
at each step, a reasonable heuristic might be to cost the plan using
minimum values and maximum values, and use the root mean square of the
two for comparisons to other plans.  I don't know that we have a good
basis to come up with ranges rather than absolute numbers, though.
 
-Kevin



Re: [HACKERS] Multi-pass planner

2009-08-20 Thread Robert Haas
On Thu, Aug 20, 2009 at 12:55 PM, Kevin Grittner
<kevin.gritt...@wicourts.gov> wrote:
 Robert Haas robertmh...@gmail.com wrote:

 I think one of the problems with the planner is that all decisions
 are made on the basis of cost.  Honestly, it works amazingly well in
 a wide variety of situations, but it can't handle things like we
 might as well materialize here, because it doesn't cost much and
 there's a big upside if our estimates are off.  The estimates are
 the world, and you live and die by them.

 [thinking out loud]

 If there were some reasonable way to come up with a *range* for cost
 at each step, a reasonable heuristic might be to cost the plan using
 minimum values and maximum values, and use the root mean square of the
 two for comparisons to other plans.  I don't know that we have a good
 basis to come up with ranges rather than absolute numbers, though.

Maybe.  The problem is that we have mostly two cases: an estimate that
we think is pretty good based on reasonable statistics (but may be way
off if there are hidden correlations we don't know about), and a wild
guess.  Also, it doesn't tend to matter very much when the estimates
are off by, say, a factor of two.  The real problem is when they are
off by an order of magnitude or more.

...Robert



Re: [HACKERS] WIP: generalized index constraints

2009-08-20 Thread Heikki Linnakangas
Jeff Davis wrote:
 On Thu, 2009-08-20 at 11:47 +0300, Heikki Linnakangas wrote:
 That sounds like the constraint is based on an existing index, but there
 can't be any existing indexes on a table that hasn't been created yet.
 If this creates the index, then the syntax needs to support specifying
 index access method and an opclass for all the columns.
 
 Of course, thanks for pointing that out. To make it work at CREATE TABLE
 time, the language would have to specify the index access method, and
 the index name should be optional. Do you think it's worthwhile to adjust
 the syntax for that, or would it just bloat the CREATE TABLE syntax for
 no reason?
 
 I'm leaning toward not allowing it at CREATE TABLE time.

Seems reasonable to me too.

 However, I'm not sure if it's very easy to provide support for
 concurrent index building. Should I block it, or is it worth
 investigating further?

Dunno. It sure would be nice, but it's not a showstopper.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] Multi-pass planner

2009-08-20 Thread Greg Stark
On Thu, Aug 20, 2009 at 6:10 PM, Robert Haas <robertmh...@gmail.com> wrote:
 Maybe.  The problem is that we have mostly two cases: an estimate that
 we think is pretty good based on reasonable statistics (but may be way
 off if there are hidden correlations we don't know about), and a wild
 guess.  Also, it doesn't tend to matter very much when the estimates
 are off by, say, a factor of two.  The real problem is when they are
 off by an order of magnitude or more.

One problem is that you can't just take a range of row estimates and
calculate a cost for both endpoints of the estimate range to get a
cost estimate. It's quite possible for more rows to generate a lower
cost (think if you have a NOT IN query).

Another problem is that it's not really helpful to have a range of
costs unless you can actually make use of them to make decisions. The
planner doesn't come up with multiple independent complete plans and
then pick the one with the cheapest cost. It has to make some
decisions along the way to avoid exponential growth. Those decisions
might have a tightly constrained cost but cause higher nodes to have
very wide cost ranges (think of deciding not to materialize something
which later gets put on the outer side of a nested loop). But there's
no way to know at the time that they'll be critical to avoiding that
risky plan later.

I don't think it's a bad idea, I just think you have to set your
expectations pretty low. If the estimates are bad there isn't really
any plan that will be guaranteed to run quickly.

-- 
greg
http://mit.edu/~gsstark/resume.pdf



Re: [HACKERS] Multi-pass planner

2009-08-20 Thread Greg Stark
On Thu, Aug 20, 2009 at 6:28 PM, Greg Stark <gsst...@mit.edu> wrote:
 I don't think it's a bad idea, I just think you have to set your
 expectations pretty low. If the estimates are bad there isn't really
 any plan that will be guaranteed to run quickly.

Actually this is usually Tom's point when this topic comes up. Say
you're deciding between an index scan and a sequential scan. The
sequential scan has a total cost of 1000..1000 but the index scan has
an estimated total cost of 1..10000. If you pick the sequential scan
you might be running 1000x slower than the index scan in the worst
case. But if you pick the index scan you might be running 10x slower
than the sequential scan in the worst case. If you don't trust the
estimate where does that leave you? Making a mistake either way is
fatal.


-- 
greg
http://mit.edu/~gsstark/resume.pdf



Re: [HACKERS] Multi-pass planner

2009-08-20 Thread Kevin Grittner
Greg Stark gsst...@mit.edu wrote:
 
 Say you're deciding between an index scan and a sequential scan. The
 sequential scan has a total cost of 1000..1000 but the index scan
 has an estimated total cost of 1..10000.
 
My proposal was to use RMS, which would effectively favor lower worst
case behavior.  Specifically, if the estimated cost range is
1000..1000 you get sqrt((1000*1000+1000*1000)/2) = 1000, while
1..10000 yields sqrt((1*1+10000*10000)/2) = 7071.067847.  So with this
heuristic it would prefer the sequential scan.
 
Of course, for these purposes, you would get the same choices by
leaving off the division and square root calculation, so it could
simplify to choosing the lower of 1000*1000+1000*1000 versus
1*1+10000*10000.
 
-Kevin



[HACKERS] converting between netmask formats

2009-08-20 Thread Robert Haas
I'm writing some code now that uses the inet and cidr types and I find
myself wanting a function that takes an argument of type inet and
returns an integer, where:

255.255.255.255 = 32
255.255.255.254 = 31
255.255.255.252 = 30
255.255.255.248 = 29
255.255.255.240 = 28
...
128.0.0.0 = 1
0.0.0.0 = 0

...and returning NULL for all other IPv4 IPs.  We have the netmask()
function that performs this computation in the other direction, but in
this case I am trying to go the other way.
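
(In the meantime it can be done at the SQL level by brute force; just a
sketch, with a placeholder name, and it assumes the mask is passed
without an explicit /n suffix:)

   CREATE FUNCTION netmask_bits(inet) RETURNS integer AS $$
     SELECT n FROM generate_series(0, 32) AS n
      WHERE netmask(('0.0.0.0/' || n)::inet) = $1
   $$ LANGUAGE sql IMMUTABLE;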

Anyone else think this would be a useful addition?  Anyone have a good
idea for a name?  parse_netmask()?

...Robert



Re: [HACKERS] GRANT ON ALL IN schema

2009-08-20 Thread Petr Jelinek

Tom Lane wrote:

Peter Eisentraut pete...@gmx.net writes:

Well, I don't know if we really need to call it lambda, but I fully
expect to be able to use these ad hoc functions as part of other
expressions.


Why would you expect that?  To be used in an expression, you'd also need
decoration to tell the function argument types, result type, volatility
properties, etc etc (your proposed lambda notation is far too
simplistic).  I think you're moving the goalposts to a point where we'd
need ANOTHER, simpler, mechanism to accomplish the original intent.
And frankly, all of the user demand I've heard is for the latter not
the former.  By the time you get into specifying function properties
you might as well just create a function.



I agree with Tom here; doing it the way Andrew and Tom agreed on will be 
*way* easier and will give us most of the benefit (as Heikki said, "90% 
of the usability with 10% of the trouble").


I volunteer to do this feature too.

The implementation as I see it would create a function in the pg_temp 
namespace, call it and then drop it. Any other implementation would IMHO 
mean rewriting the procedural language API.
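
(Roughly, a DO block would then be expanded into something like this
under the hood; the function name and body below are just placeholders,
and plpgsql is assumed to be installed:)

   CREATE FUNCTION pg_temp.anon_block() RETURNS void AS $$
   BEGIN
     RAISE NOTICE 'anonymous block body goes here';
   END;
   $$ LANGUAGE plpgsql;
   SELECT pg_temp.anon_block();
   DROP FUNCTION pg_temp.anon_block();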


I am unsure if we should try to make the name of the function unique, 
since it should not collide with anything if we allow just one statement 
at a time (transactional DDL wins again), or am I mistaken here ?


Also, do we want the LANGUAGE option to be at the start, at the end, or 
anywhere (like it is in CREATE FUNCTION)? The reason I am asking this is 
that if we let the user put it on either side then the LANGUAGE keyword 
can't be optional (which is what Dimitri Fontaine wanted).


And the last thing I am wondering is whether we want to allow DO to return 
rows (probably by creating the function with SETOF record as the return 
type)? I am guessing not, since if a user wants to run something often 
then he should create a function.


Otherwise this should be quite straightforward (I have working code 
already).



--
Regards
Petr Jelinek (PJMODOS)



Re: [HACKERS] GRANT ON ALL IN schema

2009-08-20 Thread Alvaro Herrera
Petr Jelinek wrote:

 The implementation as I see it would create function in pg_temp
 namespace, call it and then drop it. Any other implementation would
 imho mean rewriting procedure language api.

That's really ugly.  It'll cause catalog bloat with every execution.
I think it would be acceptable to have a new column in pg_language that
pointed to an anonymous block execute function.  Languages that do not
define this function cannot use this new feature.

BTW I think you should start a new thread for this proposal.  It has
diverged a bit from GRANT ON ALL.

-- 
Alvaro Herrera                       http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [HACKERS] Duplicated Keys in PITR

2009-08-20 Thread Ygor Degani
 How are your duplicate keys being generated? Is there a unique index on
them?
They are generated during recovery. Yes, there is a unique index.

 Are the keys there if you use a sequential scan or only if you use an
 index to look them up?
I used a sequential scan.

Have you been running 8.3.5 the whole time or were you running older
8.3 releases for some of the time? Why aren't you running 8.3.7 --
there are known bugs in 8.3.5 though none which look to be related.
I can migrate to version 8.3.7, but will it fix my problem?

How long has the archive been running, is it feasible to give someone
the backup and complete archives going back to when it was taken?
At least 6 months. This database is 270GB, so it is impossible to use
pg_dump, because the restore takes around 22 hours to finish.

Can you dump the pages with the bad keys using the pageinspect contrib
module? Do you need instructions on how to do that?
Yes. I do.


Thanks,


Ygor Degani

2009/8/20 Greg Stark gsst...@mit.edu

 On Thu, Aug 20, 2009 at 4:13 PM, Ygor Degani <ygordeg...@gmail.com> wrote:
  When recovering a database using a continuous archive backup, I detected
  some duplicated keys. These do not exist in the production database.
  I use postgres-8.3.5.
  Does anyone know why this happens?

 Uhm, well it's not supposed to. Do you have more information?

 How are your duplicate keys being generated? Is there a unique index on
 them?

 Are the keys there if you use a sequential scan or only if you use an
 index to look them up?

 Have you been running 8.3.5 the whole time or were you running older
 8.3 releases for some of the time? Why aren't you running 8.3.7 --
 there are known bugs in 8.3.5 though none which look to be related.

 How long has the archive been running, is it feasible to give someone
 the backup and complete archives going back to when it was taken?

 Can you dump the pages with the bad keys using the pageinspect contrib
 module? Do you need instructions on how to do that?



 --
 greg
 http://mit.edu/~gsstark/resume.pdf




-- 
Ygor Degani


[HACKERS] UPDATE ... SET (a, b, c) = (expr)

2009-08-20 Thread Alvaro Herrera
Hi,

I've been having a look around to see what it would take to implement
the syntax in the subject, completing a patch posted by Susanne Ebrecht
two years ago.
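
(For reference, the kind of statement this is meant to allow; table and
column names are made up:)

   UPDATE foo
      SET (a, b, c) = (SELECT x, y, z FROM bar WHERE bar.id = foo.id);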

As far as I can tell, the first problem is in the grammar.  The SQL spec
mandates multiple value expression as a production for the expression
at the right of the equal sign.  This enters a *large* chain of
productions that can end up in a lot of things, and for which I don't
think we have a production in our grammar.  In fact I think we have
special cases for many of the things that can be derived from here, so
maybe some things will have to be reworked in order to support all the
stuff that the standard mandates.  I'll refrain from doing this
initially, and will focus on getting the simpler cases working.

But then there's the matter of how to represent it from the grammar
inwards.  At first I thought of creating a new parse node type, say
MultiResTarget that would replace ResTarget in UpdateStmt, and get it
from there into the optimizer and executor.  However, this seems to
require creating a new node for TargetEntry too, and duplicate logic for
all of that.  As far as transformTargetList is concerned this wouldn't
be much of a problem, but from there onwards it's pretty complex -- it
requires a lot of duplicate logic.  In fact I think it would cause the
current code which is common for UPDATE/INSERT/SELECT/DELETE to split in
two, one for UPDATE and one for the rest, which looks ungood.

So is this the right way to approach the problem, or am I missing some
simpler way?

Perhaps, for example, I should be treating normal updates as currently,
and add a new member to UpdateStmt that would hold multi-column
expressions.  But I'm not sure if that would work at all.

-- 
Alvaro Herrera                       http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [HACKERS] UPDATE ... SET (a, b, c) = (expr)

2009-08-20 Thread Alvaro Herrera
Alvaro Herrera wrote:

 But then there's the matter of how to represent it from the grammar
 inwards.  At first I thought of creating a new parse node type, say
 MultiResTarget that would replace ResTarget in UpdateStmt, and get it
 from there into the optimizer and executor.

Hmm, I should be passing it as a Query node, right?

-- 
Alvaro Herrera                       http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] converting between netmask formats

2009-08-20 Thread Josh Berkus

 Anyone else think this would be a useful addition?  Anyone have a good
 idea for a name?  parse_netmask()?

netmask_bits()


-- 
Josh Berkus
PostgreSQL Experts Inc.
www.pgexperts.com



Re: [HACKERS] Multi-pass planner

2009-08-20 Thread Mischa Sandberg
In a federated database engine I built in the mid-90's,
it more or less ran both plans in parallel, to implement fast-first and 
min-total cost.
The index query in general started returning rows whose oids went into a filter
that discarded them from the serial query once it started to crank things out.

'index' and 'serial' is not exactly what was going on;
the federated engine was joining multiple tables across multiple (sometimes 
hundreds)
of databases, with really variable transmission times, and tolerance for 
timed-out db servers.
It had no reliable way to get cost estimates in advance,
since it didn't have access to statistical metadata (no, Postgres wasn't one
of the databases I had to take as input, more's the pity).

'Parallel' is also not quite accurate. It retained plans (queries were heavily 
repeated)
and did mickey-mouse simulated annealing of past performance, so that if one of 
two plans
was the faster 90% of the time, then there was a 10% probability it would run 
both plans in parallel,
just to check whether something had changed.

The code was part of a proprietary product, and was used by Acxiom and Cisco.
But the concept was pretty simple and as described above.

 -Original Message-
 From: pgsql-hackers-ow...@postgresql.org
 [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Greg Stark
 Sent: Thursday, August 20, 2009 10:32 AM
 To: Robert Haas
 Cc: Kevin Grittner; decibel; Pg Hackers
 Subject: Re: [HACKERS] Multi-pass planner

 On Thu, Aug 20, 2009 at 6:28 PM, Greg Starkgsst...@mit.edu wrote:
  I don't think it's a bad idea, I just think you have to set your
  expectations pretty low. If the estimates are bad there
 isn't really
  any plan that will be guaranteed to run quickly.

 Actually this is usually Tom's point when this topic comes
 up. Say you're deciding between an index scan and a
 sequential scan. The sequential scan has a total cost of
 1000..1000 but the index scan has an estimated total cost of
 1..10000. If you pick the sequential scan you might be
 running 1000x slower than the index scan in the worst case.
 But if you pick the index scan you might be running 10x
 slower than the sequential scan in the worst case. If you
 don't trust the estimate where does that leave you? Making a
 mistake either way is fatal.


 --
 greg
 http://mit.edu/~gsstark/resume.pdf





Re: [HACKERS] WIP: generalized index constraints

2009-08-20 Thread Brendan Jurd
2009/8/21 Heikki Linnakangas heikki.linnakan...@enterprisedb.com:
 Jeff Davis wrote:
 I'm leaning toward not allowing it at CREATE TABLE time.

 Seems reasonable to me too.


+1

There are plenty of other things to do with tables that you can't mix
directly into a CREATE TABLE statement (grant permissions, create
triggers, change owner, to name a few) so this would not be a surprise
-- or a hardship -- for users IMO.

As an aside, Jeff, have you considered how this feature would interact
with CREATE TABLE ... LIKE parent_table [ { INCLUDING | EXCLUDING } {
DEFAULTS | CONSTRAINTS | INDEXES } ] ... }?  What if someone asks to
include indexes but not constraints?  Vice-versa?  Will these cases be
handled gracefully?

Cheers,
BJ



Re: [HACKERS] WIP: generalized index constraints

2009-08-20 Thread Jeff Davis
On Fri, 2009-08-21 at 11:14 +1000, Brendan Jurd wrote:
 As an aside, Jeff, have you considered how this feature would interact
 with CREATE TABLE ... LIKE parent_table [ { INCLUDING | EXCLUDING } {
 DEFAULTS | CONSTRAINTS | INDEXES } ] ... }?  What if someone asks to
 include indexes but not constraints?  Vice-versa?  Will these cases be
 handled gracefully?

I hadn't considered that yet, thanks for bringing it to my attention.

From the docs on CREATE TABLE (... LIKE ...):

Not-null constraints are always copied to the new table. CHECK
constraints will only be copied if INCLUDING CONSTRAINTS is specified;
other types of constraints will never be copied.

If they include constraints and not indexes, nothing special.

If they include indexes and not constraints, I think we should follow
the same policy as unique constraints, and create the index and the
constraint.

The behavior seems a little strange to me, but that's the current
behavior for unique indexes.

Regards,
Jeff Davis




Re: [HACKERS] WIP: generalized index constraints

2009-08-20 Thread Brendan Jurd
2009/8/21 Jeff Davis pg...@j-davis.com:
 If they include indexes and not constraints, I think we should follow
 the same policy as unique constraints, and create the index and the
 constraint.

 The behavior seems a little strange to me, but that's the current
 behavior for unique indexes.

This may be an opportunity to fix it.

The current behaviour seems to be predicated on the unique constraint
being an integral part of the index itself.  While this might be true
from a system catalog point of view (pg_index.indisunique), if a user
says that they want to copy a table's structure INCLUDING INDEXES
EXCLUDING CONSTRAINTS then IMO they've made their intention perfectly
clear.  They'd expect it to create an index sans the unique
constraint.  Ignoring the user's intention and copying the index as-is
(including the unique constraint) would be unfriendly.

Unless the SQL spec demands that we do so?

Cheers,
BJ



Re: [HACKERS] WIP: generalized index constraints

2009-08-20 Thread Greg Stark
On Fri, Aug 21, 2009 at 3:23 AM, Brendan Jurddire...@gmail.com wrote:
 They'd expect it to create an index sans the unique
 constraint.  Ignoring the user's intention and copying the index as-is
 (including the unique constraint) would be unfriendly.

 Unless the SQL spec demands that we do so?

There are no indexes in the SQL spec.

-- 
greg
http://mit.edu/~gsstark/resume.pdf



Re: [HACKERS] WIP: generalized index constraints

2009-08-20 Thread Jeff Davis
On Fri, 2009-08-21 at 12:23 +1000, Brendan Jurd wrote:
 This may be an opportunity to fix it.
 
 The current behaviour seems to be predicated on the unique constraint
 being an integral part of the index itself.  While this might be true
 from a system catalog point of view (pg_index.indisunique), if a user
 says that they want to copy a table's structure INCLUDING INDEXES
 EXCLUDING CONSTRAINTS then IMO they've made their intention perfectly
 clear.  They'd expect it to create an index sans the unique
 constraint.  Ignoring the user's intention and copying the index as-is
 (including the unique constraint) would be unfriendly.

I don't have strong feelings either way. I think that's probably a
separate patch, and a fairly small patch.

Using the principle of least surprise, if a user specified one of
INDEXES or CONSTRAINTS, but not both, and there is a unique index, we
should raise an ERROR (or at least a WARNING).

There is not much of a problem with backwards compatibility. LIKE is
shorthand (not stored in catalogs), so it doesn't affect
pg_dump/restore. And hopefully there aren't a lot of apps out there
creating tables dynamically using the LIKE syntax.

Regards,
Jeff Davis




Re: [HACKERS] Multi-pass planner

2009-08-20 Thread Josh Berkus

 I don't think it's a bad idea, I just think you have to set your
 expectations pretty low. If the estimates are bad there isn't really
 any plan that will be guaranteed to run quickly.

Well, the way to do this is via a risk-confidence system.  That is, each
operation has a level of risk assigned to it (the cost multiplier if the
estimates are wrong), and each estimate has a level of
confidence attached.  Then you can divide the risk by the confidence,
and if it exceeds a certain level, you pick another plan which has a
lower risk/confidence level.

However, the amount of extra calculation required for even a simple
query is kind of frightening.

-- 
Josh Berkus
PostgreSQL Experts Inc.
www.pgexperts.com



Re: [HACKERS] WIP: generalized index constraints

2009-08-20 Thread Brendan Jurd
2009/8/21 Jeff Davis pg...@j-davis.com:
 On Fri, 2009-08-21 at 12:23 +1000, Brendan Jurd wrote:
 The current behaviour seems to be predicated on the unique constraint
 being an integral part of the index itself.  While this might be true
 from a system catalog point of view (pg_index.indisunique), if a user
 says that they want to copy a table's structure INCLUDING INDEXES
 EXCLUDING CONSTRAINTS then IMO they've made their intention perfectly
 clear.  They'd expect it to create an index sans the unique
 constraint.  Ignoring the user's intention and copying the index as-is
 (including the unique constraint) would be unfriendly.

 I don't have strong feelings either way. I think that's probably a
 separate patch, and a fairly small patch.


Yeah, as I was writing the above I was thinking that it might end up a
separate patch.  However I was also concerned that it might be less
disruptive if we implement your patch with the less-astonishing
behaviour and fix the unique index case in passing, than to commit
your patch with the bad behavior and then fix both.

Up to you.

 Using the principle of least surprise, if a user specified one of
 INDEXES or CONSTRAINTS, but not both, and there is a unique index, we
 should raise an ERROR (or at least a WARNING).

Actually for what it's worth I would expect INCLUDING INDEXES (with no
mention of what to do about constraints) to go ahead and include
constraints on indexes.  I only have strong feelings where the user
has gone to the trouble of explicitly stating that they want indexes
but *not* constraints and we include the constraints anyway.

I would be fine with a NOTICE in the former case, so something like
this would be cool

# CREATE TABLE foo (LIKE bar INCLUDING INDEXES);
NOTICE: INCLUDING INDEXES will also include any constraints on those indexes.
HINT: Specify EXCLUDING CONSTRAINTS to omit them.

To my mind the severity is similar to such notices as NOTICE: "CREATE
TABLE / UNIQUE will create implicit index ..."

i.e., this is probably what you wanted us to do, but just in case you
weren't expecting this side-effect, we're letting you know about it.


 There is not much of a problem with backwards compatibility. LIKE is
 shorthand (not stored in catalogs), so it doesn't affect
 pg_dump/restore. And hopefully there aren't a lot of apps out there
 creating tables dynamically using the LIKE syntax.


Well, it wouldn't surprise me if people are using LIKE to produce
tables for partitioning arrangements.

Cheers,
BJ



Re: [HACKERS] UPDATE ... SET (a, b, c) = (expr)

2009-08-20 Thread Nikhil Sontakke
Hi Alvaro,


 But then there's the matter of how to represent it from the grammar
 inwards.  At first I thought of creating a new parse node type, say
 MultiResTarget that would replace ResTarget in UpdateStmt, and get it
 from there into the optimizer and executor.

 Hmm, I should be passing it as a Query node, right?


I had (rather unsuccessfully) tried my hand at this same TODO a while
back. Here is what Tom had to say about my approach:

http://www.nabble.com/UPDATE-using-sub-selects-td9935249.html#a16108043

Also attached is the discussion on hackers to provide some more context:
http://www.nabble.com/UPDATE-using-sub-selects-td9489091.html

It might not be a big help, but I just wanted to mention it here.

Regards,
Nikhils
-- 
http://www.enterprisedb.com
