Re: [HACKERS] pg_audit documentation fixes

2015-05-17 Thread Magnus Hagander
On Sun, May 17, 2015 at 6:30 AM, Peter Geoghegan p...@heroku.com wrote:

 Attached patch makes minor tweaks to pg_audit docs.


Applied, thanks.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] jsonb concatenate operator's semantics seem questionable

2015-05-17 Thread Petr Jelinek

On 17/05/15 16:04, Andrew Dunstan wrote:


On 05/16/2015 10:56 PM, Peter Geoghegan wrote:

Another thing that I noticed about the new jsonb stuff is that the
concatenate operator is based on the hstore one. This works as
expected:

postgres=# select '{"a":1}'::jsonb || '{"a":2}';
 ?column?
----------
 {"a": 2}
(1 row)

However, the nesting doesn't match up -- containers are not merged
beyond the least-nested level:

postgres=# select '{"a":{"nested":1}}'::jsonb || '{"a":{"also
nested":2}}';
         ?column?
---------------------------
 {"a": {"also nested": 2}}
(1 row)

This feels wrong to me. When jsonb was initially introduced, we took
inspiration for the *containment* (jsonb @> jsonb) semantics from
hstore, but since jsonb is nested it worked in a nested fashion. At
the top level and with no nested containers there was no real
difference, but we had to consider the behavior of more nested levels
carefully (the containment operator is clearly the most important
jsonb operator). I had envisaged that with the concatenation of jsonb,
concatenation would similarly behave in a nested fashion. Under this
scheme, the above query would perform nested concatenation as follows:

postgres=# select '{"a":{"nested":1}}'::jsonb || '{"a":{"also
nested":2}}'; -- does not match actual current behavior
              ?column?
-------------------------------------
 {"a": {"nested": 1, "also nested": 2}}
(1 row)

Now, I think it's good that the minus operator (jsonb - text and
friends) discussed on the nearby thread accepts a text (or int)
argument and removes string elements/pairs at the top level only. This
works exactly the same as existence (I happen to think that removing
elements/pairs at a nested level is likely to be more trouble than
it's worth, and so I don't really like the new jsonb - text[]
operator much, because it accepts a Postgres (not JSON) array of texts
that constitute a path, which feels odd). So I have no issue with at
least the plain minus operators' semantics. But I think that the
concatenate operator's current semantics are significantly less useful
than they could be, and are not consistent with the overall design of
jsonb.



Historical note: I think it's based on the nested hstore work, not on
current hstore, but Dmitry can answer on that.

I didn't dismiss this because it was a bad idea, but because it was too
late in the process. If there is a consensus that we need to address
this now then I'm happy to reopen that, but given the recent amount of
angst about process I'm certainly not going to make such a decision
unilaterally.

Personally, I think there is plenty of room for both operations, and I
can see use cases for both. If I were designing I'd leave || as it is
now and add a + operation to do a recursive merge. I'm not sure how much
work that would be. Not huge but not trivial either.



Agreed. If you look at jQuery for example, the extend() method by
default behaves like our current || and you have to specify that you
want a deep merge if you want the behavior described by Peter. So there
is definitely a point for both; at this time we just support only one
of them, that's all.
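For illustration, the two merge semantics under discussion can be sketched in Python, modeling jsonb objects as plain dicts (hypothetical helper names; a sketch of the semantics, not the server code):

```python
def shallow_merge(a, b):
    """Current || semantics: keys in b replace keys in a wholesale."""
    merged = dict(a)
    merged.update(b)
    return merged

def deep_merge(a, b):
    """The envisaged nested semantics: recurse into objects that
    appear under the same key on both sides."""
    merged = dict(a)
    for key, value in b.items():
        if key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

a = {"a": {"nested": 1}}
b = {"a": {"also nested": 2}}
print(shallow_merge(a, b))  # {'a': {'also nested': 2}}
print(deep_merge(a, b))     # {'a': {'nested': 1, 'also nested': 2}}
```

A hypothetical "+" operator doing the recursive merge, as Andrew suggests, would correspond to deep_merge here, with || keeping the shallow_merge behavior.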



--
 Petr Jelinek  http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] upper planner path-ification

2015-05-17 Thread Andrew Gierth
 Robert == Robert Haas robertmh...@gmail.com writes:

 Robert I think grouping_planner() is badly in need of some refactoring
 Robert just to make it shorter.  It's over 1000 lines of code, which
 Robert IMHO is a fairly ridiculous length for a single function.

If there's interest, we could do that specific task as part of adding
hashagg support for grouping sets (which would otherwise make it even
longer), or as preparatory work for that.

-- 
Andrew (irc:RhodiumToad)




Re: [HACKERS] upper planner path-ification

2015-05-17 Thread Tom Lane
Andrew Gierth and...@tao11.riddles.org.uk writes:
 Robert == Robert Haas robertmh...@gmail.com writes:
  Robert I think grouping_planner() is badly in need of some refactoring
  Robert just to make it shorter.  It's over 1000 lines of code, which
  Robert IMHO is a fairly ridiculous length for a single function.

 If there's interest, we could do that specific task as part of adding
 hashagg support for grouping sets (which would otherwise make it even
 longer), or as preparatory work for that.

I think that refactoring without changing anything about the way it works
will be painful and probably ultimately a dead end.  As an example, the
current_pathkeys local variable is state that's needed throughout that
process, so you'd need some messy notation to pass it around (unless you
stuck it into PlannerRoot, which would be ugly too).  But that would
go away in a path-ified universe, because each Path would be marked as
to its output sort order.  More, a lot of the code that you'd be
relocating is code that we should be trying to get rid of altogether,
not just relocate (to wit all the stuff that does cost-based comparisons
of alternatives).

So I'm all for refactoring, but I think it will happen as a natural
byproduct of path-ification, and otherwise would be rather forced.

regards, tom lane




Re: [HACKERS] upper planner path-ification

2015-05-17 Thread Andrew Gierth
 Tom == Tom Lane t...@sss.pgh.pa.us writes:

  If there's interest, we could do that specific task as part of
  adding hashagg support for grouping sets (which would otherwise make
  it even longer), or as preparatory work for that.

 Tom I think that refactoring without changing anything about the way
 Tom it works will be painful and probably ultimately a dead end.  As
 Tom an example, the current_pathkeys local variable is state that's
 Tom needed throughout that process, so you'd need some messy notation
 Tom to pass it around (unless you stuck it into PlannerRoot, which
 Tom would be ugly too).  But that would go away in a path-ified
 Tom universe, because each Path would be marked as to its output sort
 Tom order.  More, a lot of the code that you'd be relocating is code
 Tom that we should be trying to get rid of altogether, not just
 Tom relocate (to wit all the stuff that does cost-based comparisons of
 Tom alternatives).

 Tom So I'm all for refactoring, but I think it will happen as a natural
 Tom byproduct of path-ification, and otherwise would be rather forced.

Hrm, ok. So for the near future, we should leave it more or less as-is?
We don't have a timescale yet, but it's our intention to submit a
hashagg support patch for grouping sets as soon as time permits.

-- 
Andrew (irc:RhodiumToad)




Re: [HACKERS] RFC: Non-user-resettable SET SESSION AUTHORISATION

2015-05-17 Thread Tom Lane
José Luis Tallón jltal...@adv-solutions.net writes:
 On the other hand, ISTM that what we all intend to achieve is some 
 Postgres equivalent of the SUID bit... so why not just do something 
 equivalent?
 ---
  LOGIN    -- as user with the appropriate role membership / privilege?
  ...
  SET ROLE / SET SESSION AUTHORIZATION WITH COOKIE / IMPERSONATE

  ... do whatever ...    -- unprivileged user can NOT do the 
 impersonate thing

  DISCARD ALL    -- implicitly restore previous authz
 ---

Oh?  What stops the unprivileged user from doing DISCARD ALL?

I think if we have something like this, it has to be non-resettable
period: you can't get back the old session ID except by reconnecting
and re-authorizing.  Otherwise there's just too much risk of security
holes.
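The cookie idea, and the requirement that reset be impossible without it, can be sketched as a toy Python model (all names hypothetical; neither IMPERSONATE nor WITH COOKIE exists in Postgres today):

```python
import secrets

class Session:
    """Toy model of a non-resettable SET SESSION AUTHORIZATION WITH COOKIE."""
    def __init__(self, user):
        self.user = user
        self._saved = None
        self._cookie = None

    def impersonate(self, user):
        # The privileged caller drops to a less-trusted role and is the
        # only party that ever sees the cookie.
        cookie = secrets.token_hex(16)
        self._saved, self._cookie = self.user, cookie
        self.user = user
        return cookie

    def reset(self, cookie=None):
        # DISCARD ALL / RESET without the cookie must fail, otherwise the
        # untrusted code can trivially regain the original authorization.
        if self._saved is None or cookie != self._cookie:
            raise PermissionError("cannot reset session authorization")
        self.user, self._saved, self._cookie = self._saved, None, None

s = Session("admin")
cookie = s.impersonate("app_user")
# Untrusted code calling s.reset() raises PermissionError; only the
# holder of 'cookie' can restore the original authorization.
```

Tom's stronger position corresponds to a model with no reset method at all: the only way back is a new connection and re-authentication.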

regards, tom lane




Re: [HACKERS] upper planner path-ification

2015-05-17 Thread Tom Lane
Andrew Gierth and...@tao11.riddles.org.uk writes:
 Tom == Tom Lane t...@sss.pgh.pa.us writes:
  Tom So I'm all for refactoring, but I think it will happen as a natural
  Tom byproduct of path-ification, and otherwise would be rather forced.

 Hrm, ok. So for the near future, we should leave it more or less as-is?
 We don't have a timescale yet, but it's our intention to submit a
 hashagg support patch for grouping sets as soon as time permits.

Well, mumble.  I keep saying that I want to tackle path-ification in
that area, and I keep not finding the time to actually do it.  So I'm
hesitant to tell you that you should wait on it.  But certainly I think
that it'll be a lot easier to get hashagg costing done in that framework
than in what currently exists.

regards, tom lane




Re: [HACKERS] Problems with question marks in operators (JDBC, ECPG, ...)

2015-05-17 Thread Greg Sabino Mullane


Andrew Dunstan wrote:
 FTR, Perl's DBD::Pg lets you do this:
 $dbh->{pg_placeholder_dollaronly} = 1; # disable ? placeholders

You can also simply escape placeholders in DBD::Pg with a backslash:

$dbh->prepare(q{SELECT * FROM mytable WHERE lseg1 \?# lseg2 AND name = ?});

Dave Cramer wrote:
 Well our solution was to use ?? but that does mean we have to do some
 extra parsing which in a perfect world wouldn't be necessary.

That's not a good solution as '??' is a perfectly valid operator. ISTR 
seeing it used somewhere in the wild, but I could be wrong.

 In that case my vote is new operators. This has been a sore point for the
 JDBC driver

Um, no, new operators is a bad idea. Question marks are used by hstore, 
json, geometry, and who knows what else. I think the onus is solely on 
JDBC to solve this problem. DBD::Pg solved it in 2008 with 
the pg_placeholder_dollaronly solution, and earlier this year by allowing 
backslashes before the question mark (because other parts of the stack were 
not able to smoothly implement pg_placeholder_dollaronly.) I recommend 
all drivers implement \? as a semi-standard workaround.

See also:
http://blog.endpoint.com/2015/01/dbdpg-escaping-placeholders-with.html
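The backslash-escape workaround amounts to a small driver-side scan. A minimal sketch in Python (hypothetical function name; it deliberately ignores string literals, comments, and dollar quoting, which a real driver must also skip over):

```python
def rewrite_placeholders(sql):
    """Turn each bare '?' into $1, $2, ... and each '\\?' into a
    literal '?' so it can be part of an operator name."""
    out, n, i = [], 0, 0
    while i < len(sql):
        ch = sql[i]
        if ch == "\\" and i + 1 < len(sql) and sql[i + 1] == "?":
            out.append("?")       # escaped: keep as an operator character
            i += 2
        elif ch == "?":
            n += 1
            out.append(f"${n}")   # placeholder: number it
            i += 1
        else:
            out.append(ch)
            i += 1
    return "".join(out)

print(rewrite_placeholders(r"SELECT * FROM t WHERE lseg1 \?# lseg2 AND name = ?"))
# SELECT * FROM t WHERE lseg1 ?# lseg2 AND name = $1
```

This is the shape of what DBD::Pg does; a JDBC-side implementation of the same `\?` convention would let question-mark operators and JDBC placeholders coexist without inventing new operators.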

- -- 
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201505171212
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8






Re: [HACKERS] RFC: Non-user-resettable SET SESSION AUTHORISATION

2015-05-17 Thread José Luis Tallón

On 05/13/2015 06:03 AM, Alvaro Herrera wrote:

Craig Ringer wrote:

For some time I've wanted a way to SET SESSION AUTHORISATION or SET
ROLE in a way that cannot simply be RESET, so that a connection may be
handed to a less-trusted service or application to do some work with.

Some years back, I checked the SQL standard for insight on how they
handle this stuff (courtesy of Jim Nasby IIRC).  It took me a while to
figure out that the way they do it is not to have a RESET command in the
first place!  In their model, you enter a secure execution context (for
example, an SQL function) by calling SET SESSION AUTHORIZATION; and once
there, the only way to revert to the original session authorization is
to exit the execution context -- and once that happens, the attacker
no longer has control.  Since they have reduced privileges, they can't
call SET SESSION AUTHORIZATION themselves to elevate their access.  In
this model, you're automatically protected.


I did address this same concern some four months ago, by suggesting to 
implement an IMPERSONATE command as part of the roles & attributes rework.
That thought was *precisely* oriented towards the same goal as Craig's 
suggestion.


Please keep in mind that SET ROLE and/or IMPERSONATE and/or SET SESSION 
AUTHORIZATION [WITH COOKIE] should be doable SQL-only, so no protocol 
changes whatsoever would be needed.


On the other hand, ISTM that what we all intend to achieve is some 
Postgres equivalent of the SUID bit... so why not just do something 
equivalent?

---
LOGIN    -- as user with the appropriate role membership / privilege?
...
SET ROLE / SET SESSION AUTHORIZATION WITH COOKIE / IMPERSONATE

... do whatever ...    -- unprivileged user can NOT do the 
impersonate thing


DISCARD ALL    -- implicitly restore previous authz
---

I mentioned this in some developer meeting; got blank stares back, IIRC.


Let's hope something goes through this time. It seems to be a more 
pressing need now than it was then  :)




Thanks,

/ J.L.





Re: [HACKERS] Bug in jsonb minus operator

2015-05-17 Thread Andrew Dunstan


On 05/16/2015 08:04 PM, Peter Geoghegan wrote:

I'm seeing the following problem on the master branch:

postgres=# select '{"foo":5}'::jsonb - 'bar'; -- okay
  ?column?
------------
 {"foo": 5}
(1 row)

postgres=# select '{"foo":{"bar":5}}'::jsonb - 'foo'; -- okay
 ?column?
----------
 {}
(1 row)

postgres=# select '{"foo":{"bar":5}}'::jsonb - 'rion'; -- spurious error
ERROR:  XX000: unknown type of jsonb container to convert
LOCATION:  convertJsonbValue, jsonb_util.c:1430

This is an elog() - a "can't happen" error - due to this restriction
within convertJsonbValue():

/*
  * A JsonbValue passed as val should never have a type of jbvBinary, and
  * neither should any of its sub-components. Those values will be produced
  * by convertJsonbArray and convertJsonbObject, the results of which will
  * not be passed back to this function as an argument.
  */

This call to convertJsonbValue() actually originates from the final
line of the new jsonb_delete() function, here:

#5  0x00877e10 in jsonb_delete (fcinfo=0x160e060) at jsonfuncs.c:3389
3389 PG_RETURN_JSONB(JsonbValueToJsonb(res));

I explored writing a fix for this bug. I went back and forth on it,
but I think that the most promising approach might be to simply teach
convertJsonbValue() to care about jbvBinary-typed JsonbValue
variables. That way, the jsonb_delete() function could actually expect
this to work. I can't remember why I thought it was a good idea to
have convertJsonbValue() reject jbvBinary values, but I believe the
reason was that it simply wasn't necessary.



Sure. I thought we'd covered this but it's possible that we didn't, or 
that it got rebroken. There have been complaints about the limitation on 
values containing jbvBinary, so let's just remove it if that can be done 
simply, since this seems to be a fairly commonly encountered problem.


Are you going to submit a patch for that?



jsonb_delete() is unusual in that it converts from a JsonbValue back
to the on-disk Jsonb varlena format, but with a nested jbvBinary-typed
value (in the presence of a nested object or array) -- it seems like
it wouldn't be too hard to just roll with that. jsonb_delete() makes
the mistake of not expecting to see jbvBinary values (since it doesn't
always recurse into the json structure). We shouldn't really deal with
jbvBinary-typed values specially from outside specialized utility
routines like JsonbDeepContains() as noted in the above comment,
though (so offhand I don't think we should teach jsonb_delete()
anything new, as that would be a modularity violation). Thoughts?



Assuming the above fix we won't have to teach it anything new, right?



Note that the existence related operators (that, like the minus
operator should only work at the top level) don't go through
JsonbValue conversion of the lhs value as part of their search at all.
I don't think that their findJsonbValueFromContainer() routine (or a
similar routine) is the right way of approaching this, though - that's
a specialized routine, that doesn't care if an array value (which is,
you'll recall, a key for the purposes of existence checking) appears
once or multiple times.


Seems reasonable.


On that topic, I think it's sloppy that "Table
9-41. Additional jsonb Operators" isn't clear about the fact that the
"jsonb - text" op matches things on the same basis as the existence
operators -- notice how the existence operator notes with emphasis
that it cares about array *string* values only.
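The top-level matching behavior described here can be sketched in Python, with jsonb containers modeled as dicts and lists (hypothetical helper, not the server implementation):

```python
def jsonb_minus(container, key):
    """Sketch of top-level 'jsonb - text' semantics: for objects, drop
    the pair with that key; for arrays, drop *string* elements equal to
    the text argument, the same basis as the '?' existence operator.
    Nested containers are never descended into."""
    if isinstance(container, dict):
        return {k: v for k, v in container.items() if k != key}
    if isinstance(container, list):
        return [v for v in container if not (isinstance(v, str) and v == key)]
    raise TypeError("cannot delete from a scalar")

print(jsonb_minus({"foo": {"bar": 5}}, "foo"))  # {}
print(jsonb_minus(["a", "b", 1], "a"))          # ['b', 1]
```

Note how the numeric element 1 would survive even a hypothetical `- '1'`: only string values participate in the match, which is the emphasis the documentation table should carry.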


OK, please submit a patch that you think clears it up.



Finally, I don't think an operator implementation function that is
jsonb-only belongs anywhere other than jsonb_ops.c (for the same
reason, replacePath() should live in jsonb_util.c).



The heading on jsonb_op.c (note spelling) suggests that it's for indexed 
operations. I don't mind rearranging the code layout to something people 
think more logical, but I also don't want to engage in unnecessary code 
churn. I'm not that religious about the organization.




And I'm
disappointed that the jsonb tests can no longer be run atomically with
'make installcheck TESTS=jsonb' - I liked being able to do that.


It was always known that some cases would not work with TESTS - that was 
part of Tom's reservation about the whole thing. You can say


   make check-tests TESTS=create_table copy jsonb


and while create_table will fail on an irrelevant test, copy and jsonb 
will succeed.


cheers

andrew




Re: [HACKERS] WALWriteLock contention

2015-05-17 Thread Amit Kapila
On Sun, May 17, 2015 at 7:45 AM, Robert Haas robertmh...@gmail.com wrote:

 <crazy-idea>I wonder if we could write WAL to two different files in
 alternation, so that we could be writing to one file while fsync-ing
 the other.</crazy-idea>


Won't the order of transaction replay during recovery cause
problems if we alternate between files while writing?  I think this is
one of the reasons WAL is written sequentially.  Another thing is that
during recovery, currently whenever we encounter a mismatch between the
stored CRC and the actual record CRC, we call it the end of recovery;
with writing to 2 files simultaneously we might need to rethink that rule.

I think the first point in your mail, related to the rewrite of an 8K
block each time, needs more thought and maybe some experimentation to
check whether writing in smaller units based on OS page size or
sector size leads to any meaningful gains.  Another thing is that
if there is high write activity, then group commits should help in
reducing IO for repeated writes, and in the tests we can try changing
commit_delay to see if that can help (if the tests are already tuned
with respect to commit_delay, then ignore this point).
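For reference, the alternation idea itself can be sketched as a toy Python model (all names hypothetical; as noted above, real WAL would also need a way to reconstruct record order across the two files during recovery):

```python
import os
import tempfile
import threading

class AlternatingWAL:
    """Toy model: append records to one file while the previous batch
    is being fsync'ed on the other, then swap roles."""
    def __init__(self, path_a, path_b):
        self.files = [open(path_a, "ab"), open(path_b, "ab")]
        self.active = 0          # index of the file currently being written
        self.pending = None      # in-flight fsync of the other file

    def append(self, record):
        self.files[self.active].write(record)

    def switch_and_sync(self):
        # Kick off an fsync of the file we just filled and redirect
        # subsequent writes to the other file.
        f = self.files[self.active]
        f.flush()
        if self.pending is not None:
            self.pending.join()  # the previous fsync must have finished
        self.pending = threading.Thread(target=os.fsync, args=(f.fileno(),))
        self.pending.start()
        self.active = 1 - self.active

d = tempfile.mkdtemp()
wal = AlternatingWAL(os.path.join(d, "0"), os.path.join(d, "1"))
wal.append(b"commit-1;")
wal.switch_and_sync()            # fsync of file 0 overlaps the next write
wal.append(b"commit-2;")
wal.switch_and_sync()
```

The point of the sketch is only the overlap: the fsync of one file proceeds while writes land in the other, which is exactly what a single sequential WAL file cannot do if the kernel serializes writes against an in-progress fsync on the same inode.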


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] upper planner path-ification

2015-05-17 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 So, getting back to this part, what's the value of returning a list of
 Paths rather than a list of Plans?

(1) less work, since we don't have to fill in details not needed for
costing purposes;
(2) paths carry info that the planner wants but the executor doesn't,
notably sort-order annotations.

 target lists are normally computed when paths are converted to plans,
 but for the higher-level plan nodes added by grouping_planner, the
 path list is typically just passed in.  So would the new path types be
 expected to carry target lists of their own, or would they need to
 figure out the target list on the fly at plan generation time?

Yeah, that is something I've been struggling with while thinking about
this.  I don't really want to add tlists as such to Paths, but it's
not very clear how else to annotate a Path as to what it computes,
and that seems like an annotation we have to have in some form in order
to convert these planning steps into a Path universe.

There are other cases where it would be useful to have some notion of
this kind.  An example is that right now, if you have an expression index
on an expensive function and a query that wants the value of that function,
the planner is able to extract the value from the index --- but there is
nothing that gives any cost benefit to doing so, so it's just as likely
to choose some other index and eat the cost of recalculating the function.
It seems like the only way to fix that in a principled fashion is to have
some concept that the index-scan Path can produce the function value,
and then when we come to some appropriate costing step, penalize the other
paths for having to compute the value that's available for free from this
one.

Rather than adding tlists per se to Paths, I've been vaguely toying with
a notion of identifying all the interesting subexpressions in a query
(expensive functions, aggregates, etc), giving them indexes 1..n, and then
marking Paths with bitmapsets showing which interesting subexpressions
they can produce values for.  This would make things like "does this Path
compute all the needed aggregates" much cheaper to deal with than a raw
tlist representation would do.  But maybe that's still not the best way.
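A toy model of that bitmapset idea, using Python ints as bitmapsets (all names hypothetical):

```python
# Each "interesting" subexpression in the query gets an index, once.
subexprs = {"sum(x)": 0, "avg(y)": 1, "expensive_fn(z)": 2}

def bitmap(*names):
    """Bitmapset of subexpression indexes, modeled as a Python int."""
    bits = 0
    for name in names:
        bits |= 1 << subexprs[name]
    return bits

needed = bitmap("sum(x)", "avg(y)")

path_a = bitmap("sum(x)")                               # partial coverage
path_b = bitmap("sum(x)", "avg(y)", "expensive_fn(z)")  # index also yields the fn

def computes_all(path_bits, needed_bits):
    # "does this Path compute all the needed aggregates" reduces to a
    # cheap subset test on the bitmapsets, with no tlist walk at all
    return path_bits & needed_bits == needed_bits

print(computes_all(path_a, needed))  # False
print(computes_all(path_b, needed))  # True
```

The expression-index example fits the same scheme: a path whose bitmap already includes expensive_fn(z) can be credited for it during costing, while competitors are penalized for having to recompute it.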

Another point is that a Path that computes aggregates is fundamentally
different from a Path that doesn't, because it doesn't even produce the
same number of rows.  So I'm not at all sure how to visualize the idea
of a Path that computes only some aggregates, or whether it's even a
sensible thing to worry about supporting.

 One thing that seems like it might complicate things here is that a
 lot of planner functions take PlannerInfo *root as an argument.  But
 if we generate only paths in grouping_planner() and path-ify them
 later, the subquery's root will not be available when we're trying to
 do the Path - Plan transformation.

Ah, you're wrong there, because we hang onto the subquery's root already
(I forget why exactly, but see PlannerGlobal.subroots for SubPlans, and
RelOptInfo.subroot for subquery-in-FROM).  So it would not be a
fundamental problem to postpone create_plan() for a subquery.

 I think grouping_planner() is badly in need of some refactoring just
 to make it shorter.  It's over 1000 lines of code, which IMHO is a
 fairly ridiculous length for a single function.

Amen to that.  But as I said to Andrew, I think this will be a side-effect
of path-ification in this area, and is probably not something to set out
to do first.

regards, tom lane




Re: [HACKERS] 9.5 open items

2015-05-17 Thread Josh Berkus
On 05/15/2015 05:58 AM, Bruce Momjian wrote:
 I have processed all the open email items I can through mid-March,
 though I do have two pg_upgrade fixes pending application today.  I will
 continue processing doc fixes and major bug fixes for 9.5, but
 everything else I do will be for 9.6.
 

Did it ever occur to you, Bruce, that you've turned into the GC daemon
for the project?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Parallel Seq Scan

2015-05-17 Thread Amit Kapila
On Mon, May 18, 2015 at 6:28 AM, Haribabu Kommi kommi.harib...@gmail.com
wrote:

 On Wed, Apr 22, 2015 at 10:48 PM, Amit Kapila amit.kapil...@gmail.com
wrote:
  parallel_seqscan_v14.patch (Attached with this mail)

 This patch is not applying/working with the latest head after parallel
 mode patch got committed.
 can you please rebase the patch.


Thanks for reminding, I am planing to work on remaining review
comments in this week and will post a new version.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Disabling trust/ident authentication configure option

2015-05-17 Thread Josh Berkus

 On Wed, May 13, 2015 at 2:18 PM, Robert Haas robertmh...@gmail.com wrote:
 All of this is fairly far afield from the original topic of this
 thread, which was whether a configure option disabling trust + ident
 authentication would be a good idea.  I said no.  Then we had a bunch
 of counter-proposals:
 
 Alvaro: Support a configure switch whose value is a comma-separated
 list of authentication methods to disable.

So, I'm going to throw in why a configure option to disable "trust"
and "peer" is an unworkable idea.

The stated goal here is to prevent authentication misconfiguration
by shortsighted admins who have superuser access and the ability to
change pg_hba.conf.  This is tantamount to giving someone a gun and
bullets, but expecting duct tape across the cartridge slot to prevent
them from loading or using the gun.

Let's say we offered a compile-time option, and then someone built a
package postgresql-9.6-secureauth.deb.  So, your lazy admin is having
trouble debugging an auth problem and wants to set trust.  But they
can't.  So they search on Google and figure out how to download and
install postgresql-9.6-normalauth.deb.  Or, alternately, they set all
passwords to "password" or to "".  Or they put .pgpass files on all
machines.  Or they put the password in pgbouncer and set pgbouncer to
trust.

You've added exactly one additional step in their way, and not a
particularly difficult one.  It simply doesn't solve the problem you're
trying to solve, which is unsurprising, because technology has never
been able to solve the problem of untrustworthy humans in positions of
responsibility.

Now, if you wanted to add an audit log every time someone changes an
auth method in pg_hba.conf?  I'd be all for that, I can see all kinds of
uses for that, and it might actually accomplish something effective.

If you disagree with me, well, it would be very easy to hack out the
auth methods you don't like and compile your own.  It *is* open source.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] 9.5 open items

2015-05-17 Thread Michael Paquier
On Mon, May 18, 2015 at 12:35 PM, Josh Berkus j...@agliodbs.com wrote:
 Did it ever occur to you, Bruce, that you've turned into the GC daemon
 for the project?

GC = global coordinator?
-- 
Michael




Re: [HACKERS] 9.5 open items

2015-05-17 Thread Abbas Butt
GC=Garbage Collector

On Mon, May 18, 2015 at 9:24 AM, Michael Paquier michael.paqu...@gmail.com
wrote:

 On Mon, May 18, 2015 at 12:35 PM, Josh Berkus j...@agliodbs.com wrote:
  Did it ever occur to you, Bruce, that you've turned into the GC daemon
  for the project?

 GC = global coordinator?
 --
 Michael






-- 
*Abbas*
Architect

Ph: 92.334.5100153
Skype ID: gabbasb
www.enterprisedb.com

*Follow us on Twitter*
@EnterpriseDB

Visit EnterpriseDB for tutorials, webinars, whitepapers and more:
http://www.enterprisedb.com/resources-community


Re: [HACKERS] jsonb concatenate operator's semantics seem questionable

2015-05-17 Thread Robert Haas
On May 17, 2015, at 8:38 PM, Peter Geoghegan p...@heroku.com wrote:
 The current behavior does not seem acceptable for the concatenate
 operator (jsonb || jsonb).

I don't agree.  It seems pretty clear to me after reading the new posts that 
the behavior is not an oversight, and that's enough for me to say that we 
should leave this alone. 



Re: [HACKERS] jsonb concatenate operator's semantics seem questionable

2015-05-17 Thread Peter Geoghegan
On Sun, May 17, 2015 at 5:46 PM, Robert Haas robertmh...@gmail.com wrote:
 On May 17, 2015, at 8:38 PM, Peter Geoghegan p...@heroku.com wrote:
 The current behavior does not seem acceptable for the concatenate
 operator (jsonb || jsonb).

 I don't agree.  It seems pretty clear to me after reading the new posts that 
 the behavior is not an oversight, and that's enough for me to say that we 
 should leave this alone.

I've said what I wanted to say. As long as the community is
comfortable with the reality that this concatenate operator really
isn't useful for assignment within UPDATEs for most jsonb users, then
I can leave it at that. I think that this concatenate operator may
have been informally promoted as the thing that made it possible to do
jsonb UPDATEs in a declarative fashion, but as things stand that
really isn't true.

If nothing changes, let's not make the mistake of going on to
*formally* promote this concatenate operator as offering the ability
to do jsonb UPDATEs in a declarative fashion, because that would be
quite misleading.

-- 
Peter Geoghegan




Re: [HACKERS] Bug in jsonb minus operator

2015-05-17 Thread Peter Geoghegan
On Sun, May 17, 2015 at 8:04 AM, Andrew Dunstan and...@dunslane.net wrote:
 Sure. I thought we'd covered this but it's possible that we didn't, or that
 it got rebroken. There have been complaints about the limitation on values
 containing jbvBinary, so let's just remove it if that can be done simply,
 since this seems to be a fairly commonly encountered problem.

 Are you going to submit a patch for that?

I'll try and come up with something. It's not a trivial fix.

 Assuming the above fix we won't have to teach it anything new, right?

I'm not 100% sure, but I think not.

 On that topic, I think it's sloppy that "Table
 9-41. Additional jsonb Operators" isn't clear about the fact that the
 "jsonb - text" op matches things on the same basis as the existence
 operators -- notice how the existence operator notes with emphasis
 that it cares about array *string* values only.


 OK, please submit a patch that you think clears it up.

I was going to better document the new operators anyway, so I'll get
this in passing.

 The heading on jsonb_op.c (note spelling) suggests that it's for indexed
 operations. I don't mind rearranging the code layout to something people
 think more logical, but I also don't want to engage in unnecessary code
 churn. I'm not that religious about the organization.

I see what you mean.

 It was always known that some cases would not work with TESTS - that was
 part of Tom's reservation about the whole thing. You can say

Sure, but you added it, and you mentioned jsonb in the commit message,
before jsonb was even committed to the master branch. :-)

-- 
Peter Geoghegan




[HACKERS] DROP TRANSFORM parsetree representation is no good

2015-05-17 Thread Tom Lane
Running the regression tests with -DCOPY_PARSE_PLAN_TREES reveals that
DROP TRANSFORM parse trees fail to be copiable:

regression=# DROP TRANSFORM IF EXISTS FOR fake_type LANGUAGE plperl;
ERROR:  unrecognized node type: 1701866608

I've not tracked down the exact reason, but it kinda looks like the
argument field has been filled with a list of bare C strings, not
a list of String nodes which is what it'd have to be to work here.
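The failure mode can be modeled abstractly (a Python toy, not the backend code): a copy routine that dispatches on a node tag chokes when handed a bare string, because whatever bytes sit where the tag should be get reported as an unrecognized node type.

```python
# Toy model of copyObject-style dispatch: it switches on a node's tag,
# so a bare string smuggled into a List (instead of a String node)
# surfaces as "unrecognized node type".
class String:
    tag = "String"
    def __init__(self, val):
        self.val = val

def copy_node(node):
    # Only tagged nodes are copiable; anything else is an error.
    if getattr(node, "tag", None) == "String":
        return String(node.val)
    raise ValueError("unrecognized node type: %r" % (node,))

ok = [copy_node(String("fake_type"))]   # list of String nodes: fine

try:
    copy_node("fake_type")              # bare string: no tag to dispatch on
    failed = False
except ValueError:
    failed = True
```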

regards, tom lane




Re: [HACKERS] WALWriteLock contention

2015-05-17 Thread Robert Haas
On May 17, 2015, at 5:57 PM, Thomas Munro thomas.mu...@enterprisedb.com wrote:
 On Sun, May 17, 2015 at 2:15 PM, Robert Haas robertmh...@gmail.com wrote:
 http://oldblog.antirez.com/post/fsync-different-thread-useless.html
 
 It suggests that an fsync in progress blocks out not only other
 fsyncs, but other writes to the same file, which for our purposes is
 just awful.  More Googling around reveals that this is apparently
 well-known to Linux kernel developers and that they don't seem excited
 about fixing it.  :-(
 
 He doesn't say, but I wonder if that is really Linux, or if it is the
 ext2, 3 and maybe 4 filesystems specifically.  This blog post talks
 about the per-inode mutex that is held while writing with direct IO.

Good point. We should probably test ext4 and xfs on a newish kernel.

...Robert




Re: [HACKERS] Parallel Seq Scan

2015-05-17 Thread Haribabu Kommi
On Wed, Apr 22, 2015 at 10:48 PM, Amit Kapila amit.kapil...@gmail.com wrote:
 parallel_seqscan_v14.patch (Attached with this mail)

This patch is no longer applying/working with the latest head after the
parallel mode patch got committed.
Can you please rebase the patch?

Regards,
Hari Babu
Fujitsu Australia




[HACKERS] dromedary is now using -DCOPY_PARSE_PLAN_TREES

2015-05-17 Thread Tom Lane
Back in March we discussed the advisability of having a buildfarm critter
or two running with -DCOPY_PARSE_PLAN_TREES:
http://www.postgresql.org/message-id/14670.1427727...@sss.pgh.pa.us
but evidently nobody acted on the idea.  I've now turned on that option
on dromedary.  It will fail its next HEAD run, unless Peter fixes
DROP TRANSFORM first ... but I await the results for back branches with
some trepidation.

regards, tom lane




Re: [HACKERS] jsonb concatenate operator's semantics seem questionable

2015-05-17 Thread Peter Geoghegan
On Sun, May 17, 2015 at 3:22 PM, Andrew Dunstan and...@dunslane.net wrote:
 So what exactly do you want me or anybody else to do now, two days *after*
 we declared (not without pain) feature freeze?

As much as I'd like to just fix the concatenate operator, I really
don't want to be the person that adds additional delay to stabilizing
9.5.

 If there is a consensus that what you want is so important that we need to
 implement the new behaviour at this late stage, I'm happy to spend time on
 it if there's a patch forthcoming. I might add that this should be an
 additional behaviour, since as Petr points out there is some validity to the
 current behaviour.

The current behavior does not seem acceptable for the concatenate
operator (operator || jsonb). If we can't fix it as the concatenate
operator, I think we should change it to be operator + jsonb or
something. It should be explained and understood as an operator that
(like the original existence operator operator ? text, but unlike
the original containment operator operator @> jsonb) exists only to
support the less common hstore-style use case. It should also not be
referred to as offering concatenation, but something more specialized
than that. I'm sorry, but as things stand I don't think that the
concatenation behavior makes sense as the general purpose jsonb
concatenate operator.

I'll go with the consensus, but obviously I feel pretty strongly that
we have the behavior of operator || jsonb wrong. We should at the
very least work towards a future version where there is a operator ||
jsonb that does the right thing. I don't even like the idea of having
this as an operator + jsonb operator, though, because it introduces
an operator that is concerned with the hstore-style use case that I
described before, and yet accepts a jsonb datum on the rhs.

Again, I'm sorry that I brought this up late, and I hope that this
doesn't seem capricious. I just happen to sincerely feel that the
current state of operator || jsonb leads us in the wrong long-term
direction. Let's hear what other people think, though.

-- 
Peter Geoghegan




Re: [HACKERS] jsonb concatenate operator's semantics seem questionable

2015-05-17 Thread Peter Geoghegan
On Sun, May 17, 2015 at 7:16 AM, Petr Jelinek p...@2ndquadrant.com wrote:
 Agreed, if you look at jquery for example, the extend() method by default
 behaves like our current || and you have to specify that you want deep merge
 if you want the behavior described by Peter. So there is definitely point
 for both, at this time we just support only one of them, that's all.

The difference is that with jquery's extend(), you can easily
subscript the JSON document you're extending so that you do so for the
right nested object (without merging any containers the target
contains, but rather overwriting them with any of the source's
containers -- makes sense when you're explicit about the nesting level
like that). With jsonb, however, we're usually stuck with having to
write an SQL expression that evaluates to the final jsonb document
that we want (for UPDATE targetlist assignment, typically).

This is an enormous difference. We still don't have a way to update a
nested object's single field that I can see, and nested objects are
absolutely commonplace for the document database use case. So I
don't accept that this is a matter of individual preference or taste.
It's a big, practical distinction.
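The semantic gap can be sketched outside Postgres entirely. Here is a minimal Python illustration (not PostgreSQL code; just the two merge behaviors being discussed):

```python
def shallow_merge(a, b):
    # hstore-style concatenation: top-level keys are simply overwritten,
    # which is how jsonb's || behaves as committed
    return {**a, **b}

def deep_merge(a, b):
    # recursive merge: nested objects are combined key by key, which is
    # the behavior argued for here
    out = dict(a)
    for key, val in b.items():
        if key in out and isinstance(out[key], dict) and isinstance(val, dict):
            out[key] = deep_merge(out[key], val)
        else:
            out[key] = val
    return out

a = {"a": {"nested": 1}}
b = {"a": {"also nested": 2}}
print(shallow_merge(a, b))  # {'a': {'also nested': 2}} -- sibling key lost
print(deep_merge(a, b))     # {'a': {'nested': 1, 'also nested': 2}}
```

With the shallow version there is no way to update a single nested field without re-supplying the whole nested object, which is the practical problem described above.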

-- 
Peter Geoghegan




Re: [HACKERS] WALWriteLock contention

2015-05-17 Thread Thomas Munro
On Sun, May 17, 2015 at 2:15 PM, Robert Haas robertmh...@gmail.com wrote:
 http://oldblog.antirez.com/post/fsync-different-thread-useless.html

 It suggests that an fsync in progress blocks out not only other
 fsyncs, but other writes to the same file, which for our purposes is
 just awful.  More Googling around reveals that this is apparently
 well-known to Linux kernel developers and that they don't seem excited
 about fixing it.  :-(

He doesn't say, but I wonder if that is really Linux, or if it is the
ext2, 3 and maybe 4 filesystems specifically.  This blog post talks
about the per-inode mutex that is held while writing with direct IO.
Maybe fsyncing buffered IO is similarly constrained in those
filesystems.

https://www.facebook.com/notes/mysql-at-facebook/xfs-ext-and-per-inode-mutexes/10150210901610933

 <crazy-idea>I wonder if we could write WAL to two different files in
 alternation, so that we could be writing to one file while fsync-ing
 the other.</crazy-idea>

If that is an ext3-specific problem, using multiple files might not
help you anyway because ext3 famously fsyncs *all* files when you
asked for one file to be fsynced, as discussed in Greg Smith's
PostgreSQL 9.0 High Performance in chapter 4 (page 79).
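For anyone who wants to reproduce the antirez experiment, the structure of the test is roughly this (a hedged Python sketch; the interesting part is the elapsed time, which varies by filesystem and kernel, so nothing about timing is claimed here):

```python
import os
import tempfile
import threading
import time

# One thread fsyncs the file in a loop while the main thread keeps
# issuing 8 kB appends (roughly WAL-block-sized).  On ext3 the writes
# reportedly stall behind the in-progress fsync; other filesystems may
# not block.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
fd = os.open(tmp.name, os.O_WRONLY)

stop = threading.Event()

def fsync_loop():
    while not stop.is_set():
        os.fsync(fd)

syncer = threading.Thread(target=fsync_loop)
syncer.start()

start = time.monotonic()
for _ in range(1000):
    os.write(fd, b"x" * 8192)
elapsed = time.monotonic() - start

stop.set()
syncer.join()
os.close(fd)
size = os.path.getsize(tmp.name)
os.remove(tmp.name)
print("wrote %d bytes in %.3fs with concurrent fsyncs" % (size, elapsed))
```

Comparing `elapsed` with and without the fsync thread, across ext3/ext4/xfs, would answer the question above.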

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] jsonb concatenate operator's semantics seem questionable

2015-05-17 Thread Peter Geoghegan
On Sun, May 17, 2015 at 8:37 AM, Dmitry Dolgov 9erthali...@gmail.com wrote:
 And I agree with thoughts above, that both concatenation modes (simple and
 deep) definitely can be useful. I can try to figure out how much work that
 would be to modify the IteratorConcat function (or adapt Ilya's solution)

I tend to think of it like this: jsonb more or less supports two use cases.

Firstly, it supports the hstore use case, with a heterogeneous object
stored in each row -- a hodge-podge of different attributes, which can
be used to do something EAV-like, where there are application end user
defined attributes, say. This is unlikely to involve any nesting,
because we're only storing attributes of one entity (the row). This
isn't the use of jsonb that people got excited about, and I think it's
less important, although it does matter. The existence operator (which
operates at the least nested level) is firmly about this use case. And
for that matter, so is the new remove capability/minus operator thing
(which also operates at the least nested level). Fine.

The second use case is the document database use case, which is
where jsonb is really compelling. Think of storing more or less fixed
structure documents from a third party web API. Containment works in a
nested fashion in support of that. And as I pointed out, not having
the concatenate operator work in a nested fashion hobbles this use
case. How are users supposed to write an SQL query that updates a
single field in a nested object? That's obviously what they expect
from this.

I think it's misguided to make the concatenate operator target the
hstore use case - if you have that use case, you're unlikely to have
any nesting by convention anyway, and so it doesn't matter to you.
Besides which, as I said, the document database use case is the one
most users actually care about these days.

-- 
Peter Geoghegan




Re: [HACKERS] jsonb concatenate operator's semantics seem questionable

2015-05-17 Thread Andrew Dunstan


On 05/17/2015 05:56 PM, Peter Geoghegan wrote:

On Sun, May 17, 2015 at 8:37 AM, Dmitry Dolgov 9erthali...@gmail.com wrote:

And I agree with thoughts above, that both concatenation modes (simple and
deep) definitely can be useful. I can try to figure out how much work that
would be to modify the IteratorConcat function (or adapt Ilya's solution)

I tend to think of it like this: jsonb more or less supports two use cases.

Firstly, it supports the hstore use case, with a heterogeneous object
stored in each row -- a hodge-podge of different attributes, which can
be used to do something EAV-like, where there are application end user
defined attributes, say. This is unlikely to involve any nesting,
because we're only storing attributes of one entity (the row). This
isn't the use of jsonb that people got excited about, and I think it's
less important, although it does matter. The existence operator (which
operates at the least nested level) is firmly about this use case. And
for that matter, so is the new remove capability/minus operator thing
(which also operates at the least nested level). Fine.

The second use case is the document database use case, which is
where jsonb is really compelling. Think of storing more or less fixed
structure documents from a third party web API. Containment works in a
nested fashion in support of that. And as I pointed out, not having
the concatenate operator work in a nested fashion hobbles this use
case. How are users supposed to write an SQL query that updates a
single field in a nested object? That's obviously what they expect
from this.

I think it's misguided to make the concatenate operator target the
hstore use case - if you have that use case, you're unlikely to have
any nesting by convention anyway, and so it doesn't matter to you.
Besides which, as I said, the document database use case is the one
most users actually care about these days.



Peter,

Nobody is arguing that what you want isn't desirable. It just happens to 
be what we don't have. I get your disappointment, but you actually had a 
long time after the original patch was published to make your case. When 
I suggested 2 months ago to someone else that it was really too late to 
be adding this feature, nobody, including you, disagreed.


So what exactly do you want me or anybody else to do now, two days 
*after* we declared (not without pain) feature freeze?


If there is a consensus that what you want is so important that we need 
to implement the new behaviour at this late stage, I'm happy to spend 
time on it if there's a patch forthcoming. I might add that this should 
be an additional behaviour, since as Petr points out there is some 
validity to the current behaviour.


If not, a function and operator can almost certainly be created with 
this behaviour as an extension for those who really need it in 9.5. I'm 
sure Dmitry will be happy to work on that.


cheers

andrew




Re: [HACKERS] jsonb concatenate operator's semantics seem questionable

2015-05-17 Thread Josh Berkus
On 05/17/2015 05:46 PM, Robert Haas wrote:
 On May 17, 2015, at 8:38 PM, Peter Geoghegan p...@heroku.com wrote:
 The current behavior does not seem acceptable for the concatenate
 operator (operator || jsonb).
 
 I don't agree.  It seems pretty clear to me after reading the new posts that 
 the behavior is not an oversight, and that's enough for me to say that we 
 should leave this alone. 

Is there a particular reason why + makes more sense as shallow
concatenation and || makes more sense as deep concatenation?  Like,
something in JS or other client languages which would make that
preference make more sense to users?

While I hate last-minute changes in general, once we have this
functionality as || we won't be able to swap operators later if we
decide that deep concatenation should have been ||.  So I want to be
clear on why users will prefer that to + .

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] jsonb concatenate operator's semantics seem questionable

2015-05-17 Thread Peter Geoghegan
On Sun, May 17, 2015 at 8:41 PM, Josh Berkus j...@agliodbs.com wrote:
 Is there a particular reason why + makes more sense as shallow
 concatenation and || makes more sense as deep concatenation?  Like,
 something in JS or other client languages which would make that
 preference make more sense to users?

 While I hate last-minute changes in general, once we have this
 functionality as || we won't be able to swap operators later if we
 decide that deep concatenation should have been ||.  So I want to be
 clear on why users will prefer that to + .

This guy is talking about the concatenation operator in hstore:
https://twitter.com/toolmantim/status/589348855302344705

I don't want to present this operator as being the equivalent for
jsonb (the thing you use for assignment). I wish it was, but it just
isn't, as long as it treats the first nesting level as special. jsonb
is all about nesting in general, and its concatenate operator must
reflect this. It wouldn't be much use if operator @> jsonb didn't
care about nesting either, but it does (unlike hstore's equivalent,
because hstore doesn't nest).

I don't think that (say) an operator + jsonb ought to be called a
concatenation operator at all. The idea is to distance what we have
here from the idea of an hstore concatenate operator, and to encourage
the understanding that it has only a specialized use. I think that the
danger of someone making the wrong assumption about the new operator
|| jsonb is very real (I think that the reverse wouldn't be true,
though, if concatenation worked in a nested fashion -- that wouldn't
bother users that had non-nested jsonb documents).

As I said, I don't think that my preference for deep concatenation is
a matter of taste. I think that shallow concatenation is fundamentally
and objectively at odds with what jsonb is supposed to be (as long as
concatenation is the way nested assignment works, which is what
users have been taught to think).
-- 
Peter Geoghegan




Re: [HACKERS] postgres_fdw join pushdown (was Re: Custom/Foreign-Join-APIs)

2015-05-17 Thread Ashutosh Bapat
On Sat, May 16, 2015 at 6:34 PM, Shigeru Hanada shigeru.han...@gmail.com
wrote:

 Attached is the v15 patch of foreign join support for postgres_fdw.

 This patch is based on current master, and some hunks which are not
 essential have been removed.

 And I wrote a description of the changes done by the patch.  It is a
 little bit long, but I hope it helps in understanding what the patch does.



 The total LOC of the patch is 3.7k, 1.8k for code and 2.0k for
 regression tests.  This is not a small patch, as Robert says, so I'd
 like to summarize the changes done by this patch and explain why they are
 necessary.

 Outline of join push-down support for postgres_fdw
 ==

 This patch provides a new capability to join foreign tables
 managed by the same foreign server on the remote side, by constructing a
 remote query containing the join clause, and executing it as the source of a
 pseudo foreign scan.  This patch is based on the Custom/Foreign join patch
 written by Kohei KaiGai.

 PostgreSQL's planning for a query containing a join is done with these steps:

 1. generate possible scan paths for each base relation
 2. generate join paths with bottom-up approach
 3. generate plan nodes required for the cheapest path
 4. execute the plan nodes to obtain result tuples

 Generating path node
 
 As of now, postgres_fdw generates a ForeignPath which represents a
 result of a join for each RelOptInfo, and the planner can determine which
 path is cheapest from its cost values.

 GetForeignJoinPaths is called once for each join combination, i.e. A
 JOIN B and B JOIN A are considered separately.  So GetForeignJoinPaths
 should return immediately, skipping its work, when the call is for the
 reversed combination of an already-considered one.  For this purpose, I
 added an update_safe flag to PgFdwRelationInfo.  This flag is always set
 for simple foreign scans, but for a join relation it is set only when
 the join can be pushed down.  The reason for adding this flag is that
 checking whether RelOptInfo#fdw_private is NULL can't prevent useless
 processing for a join combination which is the reverse of an
 already-considered join that can't be pushed down.


This is just a suggestion, but you may actually be able to get rid of the
flag by restricting path generation to only when, say, the outer relation's
pointer, OID, or relid is less than the inner relation's corresponding
property.
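A hedged sketch of that suggestion (plain Python, just to show the dedup logic; relids stand in for whichever property ends up being compared):

```python
from itertools import permutations

considered = []

def get_foreign_join_paths(outer_relid, inner_relid):
    # Only build a path when outer < inner; the planner's call for the
    # reversed combination (B JOIN A) then falls through immediately,
    # with no need for an update_safe-style flag.
    if outer_relid > inner_relid:
        return
    considered.append((outer_relid, inner_relid))

# The planner considers both orderings of every pair.
for outer, inner in permutations([1, 2, 3], 2):
    get_foreign_join_paths(outer, inner)

print(considered)  # each unordered pair is handled exactly once
```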



 postgres_fdw's GetForeignJoinPaths() does various checks, to ensure
 that the result has the same semantics as local joins.  Now postgres_fdw
 has these criteria:

 a) join type must be one of INNER/LEFT OUTER/RIGHT OUTER/FULL OUTER join
 This check is done with the given jointype argument.  IOW, CROSS joins and
 SEMI/ANTI joins are not pushed down.  This is because 1) CROSS joins
 would produce more results than the separate join sources,


We might lose some optimizations in aggregate push-down by not creating
paths for CROSS joins at all. If there is a count(*) on the CROSS join
result, we won't be able to push the count(*) down, since no foreign path
exists for the join. Or it might be possible that pushing down A INNER JOIN
B CROSS JOIN C is cheaper than performing some or all of the joins locally.
I think we should create a path and let it stay in the paths list. If there
is no path which can use the CROSS join path, it will be discarded
eventually. Sorry for bringing this up so late in the discussion.


 and 2) ANTI/SEMI
 joins need to be deparsed as sub-queries, and it seems it would take some
 more time to implement.
 b) Both outer and inner must have RelOptInfo#fdw_private
 Having fdw_private means that the RelOptInfo is safe to push down, so
 having no fdw_private means that portion is not safe to push down and
 thus the whole join is not safe to push down.
 c) All relations in the join must belong to the same server
 This check is done with the serverid stored in RelOptInfo#fdw_private as
 PgFdwRelationInfo.  Joining relations that belong to different servers is
 not legal.  Even if they end up having exactly the same connection
 information, they should be accessed via different libpq sessions.
 Responsibility for checking server matching is under discussion in the
 Custom/Foreign join thread, and I'd vote for checking it in core.  If
 that is decided, I will remove this criterion soon.
 d) All relations must have the same effective user id
 This check is done with the userid stored in PgFdwRelationInfo, which is
 valid only when the underlying relations have the same effective user id.
 Here effective user id means the id of the user executing the query, or
 the owner of the view when the foreign table is accessed via a view.
 Tom suggested that it is necessary to check that the user mapping matches
 for security reasons, and now I think that's better than checking the
 effective user as current postgres_fdw does.
 e) Each source relation must not have any local filter
 Evaluating conditions on a join source table potentially produces
 different results in OUTER join cases.  This can be relaxed for 

Re: [HACKERS] [COMMITTERS] pgsql: Add pg_audit, an auditing extension

2015-05-17 Thread Fujii Masao
On Sun, May 17, 2015 at 11:00 PM, Stephen Frost sfr...@snowman.net wrote:
 Fujii, Michael,

 * Fujii Masao (masao.fu...@gmail.com) wrote:
 pg_audit uses 1.0.0 as its version number. But, is the third digit really
 required? Why? We usually use the version number with two digits in
 contrib modules.

 [...]

 In Makefile, PGFILEDESC should be added.

 +# pg_audit/Makefile

 should be # contrib/pg_audit/Makefile for the consistency.

 The categories of some SQL commands are different between log_statement and
 pg_audit. For example, REINDEX is treated as DDL in pg_audit, but not in
 log_statement. That's confusing. We should use the same category-mapping
 rule as much as possible.

 [...]

 * Michael Paquier (michael.paqu...@gmail.com) wrote:
 And on top of that the following things should be changed:
 - Removal of REGRESS_OPTS which is empty
 - Removal of MODULE, which overlaps with MODULE_big
 - $(WIN32RES) needs to be added to OBJS for Windows versioning

 I've pushed these changes.

Thanks a lot!

Here are other random comments on pg_audit.
# Move to pgsql-hackers

Isn't it better to suppress the STATEMENT message in the audit log?
Otherwise, we always get two duplicate statements as follows, which would
increase the audit log volume very much, I'm afraid.

LOG:  AUDIT: SESSION,1,1,READ,SELECT,,,SELECT 1;,not logged
STATEMENT:  SELECT 1;

Also, a CONTEXT message containing the same statement can sometimes be logged
at the same time.

When I tried the object audit feature, I got the following log messages.
The class and command type of this SELECT query are WRITE and UPDATE in the log
message. Is this intentional? Why? Maybe pg_audit treats something like
SELECT FOR SHARE/UPDATE as an UPDATE command? But in session logging,
the class and command type of this query were READ and SELECT. So confusing.

LOG:  AUDIT: OBJECT,2,1,WRITE,UPDATE,TABLE,public.aaa,SELECT 1
FROM ONLY public.aaa x WHERE id OPERATOR(pg_catalog.=) $1
FOR KEY SHARE OF x,not logged
CONTEXT:  SQL statement SELECT 1 FROM ONLY public.aaa x WHERE
id OPERATOR(pg_catalog.=) $1 FOR KEY SHARE OF x
STATEMENT:  INSERT INTO bbb VALUES (1);

The queries to reproduce the above message are here.

ALTER SYSTEM SET pg_audit.role TO 'foo';
SELECT pg_reload_conf();
CREATE TABLE aaa (id int primary key);
CREATE TABLE bbb (id int references aaa(id));
INSERT INTO aaa VALUES(generate_series(1,100));
GRANT SELECT ON aaa TO foo;
INSERT INTO bbb VALUES (1);

Now the class of the FETCH command is MISC, but isn't it better to classify
it as READ?

When a query contains a double quote or comma, it's quoted in the audit log
as follows. Doesn't this make the analysis of the queries in the log files a bit
difficult because other queries are not quoted?

LOG:  AUDIT: SESSION,2,1,READ,SELECT,,,SELECT '';,not logged
STATEMENT:  SELECT '';

When log_destination is csvlog, the above quoted query is quoted again.

2015-05-18 14:44:38.313
JST,postgres,postgres,17725,[local],55597c45.453d,1,SELECT,2015-05-18
14:44:37 JST,2/2,0,LOG,0,AUDIT:
SESSION,1,1,READ,SELECT,,,SELECT '';,not
logged,,SELECT '';,,,psql
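The double quoting is easy to see with any CSV writer. Here is a hedged Python sketch of the two layers (an audit message built as CSV, then wrapped again by csvlog; field layout is simplified, not pg_audit's exact format):

```python
import csv
import io

def csv_line(fields):
    # Render one CSV record, quoting only fields that need it.
    buf = io.StringIO()
    csv.writer(buf, quoting=csv.QUOTE_MINIMAL).writerow(fields)
    return buf.getvalue().rstrip("\r\n")

# A query containing a double quote and a comma, as in the example above.
query = "SELECT '\",';"

# Layer 1: pg_audit's own CSV message -- the query field gets quoted,
# and embedded quotes are doubled.
audit = csv_line(["SESSION", "2", "1", "READ", "SELECT", "", "", query])

# Layer 2: csvlog quotes the whole audit message again, doubling the
# already-doubled quotes.
logline = csv_line(["2015-05-18 14:44:38", "postgres", audit])

print(audit)
print(logline)
```

The consumer therefore has to strip two layers of CSV quoting to get the original query back, which is the analysis difficulty described above.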

Regards,

-- 
Fujii Masao




Re: [HACKERS] jsonb concatenate operator's semantics seem questionable

2015-05-17 Thread Dmitry Dolgov
 Historical note: I think it's based on the nested hstore work, not on
current hstore, but Dmitry can answer on that.

Yes, you're right.
And I agree with thoughts above, that both concatenation modes (simple
and deep) definitely can be useful. I can try to figure out how much work
that would be to modify the IteratorConcat function (or adapt Ilya's
solution)

On 17 May 2015 at 21:16, Petr Jelinek p...@2ndquadrant.com wrote:

 On 17/05/15 16:04, Andrew Dunstan wrote:


 On 05/16/2015 10:56 PM, Peter Geoghegan wrote:

 Another thing that I noticed about the new jsonb stuff is that the
 concatenate operator is based on the hstore one. This works as
 expected:

 postgres=# select '{"a":1}'::jsonb || '{"a":2}';
 ?column?
 --
   {"a": 2}
 (1 row)

 However, the nesting doesn't match up -- containers are not merged
 beyond the least-nested level:

 postgres=# select '{"a":{"nested":1}}'::jsonb || '{"a":{"also nested":2}}';
   ?column?
 ---
   {"a": {"also nested": 2}}
 (1 row)

 This feels wrong to me. When jsonb was initially introduced, we took
 inspiration for the *containment* (operator @> jsonb) semantics from
 hstore, but since jsonb is nested it worked in a nested fashion. At
 the top level and with no nested containers there was no real
 difference, but we had to consider the behavior of more nested levels
 carefully (the containment operator is clearly the most important
 jsonb operator). I had envisaged that with the concatenation of jsonb,
 concatenation would similarly behave in a nested fashion. Under this
 scheme, the above query would perform nested concatenation as follows:

 postgres=# select '{"a":{"nested":1}}'::jsonb || '{"a":{"also nested":2}}'; -- does not match actual current behavior
   ?column?
 ---
   {"a": {"nested": 1, "also nested": 2}}
 (1 row)

 Now, I think it's good that the minus operator (operator - text and
 friends) discussed on the nearby thread accepts a text (or int)
 argument and removes string elements/pairs at the top level only. This
 works exactly the same as existence (I happen to think that removing
 elements/pairs at a nested level is likely to be more trouble than
 it's worth, and so I don't really like the new jsonb - text[]
 operator much, because it accepts a Postgres (not JSON) array of texts
 that constitute a path, which feels odd). So I have no issue with at
 least the plain minus operators' semantics. But I think that the
 concatenate operator's current semantics are significantly less useful
 than they could be, and are not consistent with the overall design of
 jsonb.


 Historical note: I think it's based on the nested hstore work, not on
 current hstore, but Dmitry can answer on that.

 I didn't dismiss this because it was a bad idea, but because it was too
 late in the process. If there is a consensus that we need to address
 this now then I'm happy to reopen that, but given the recent amount of
 angst about process I'm certainly not going to make such a decision
 unilaterally.

 Personally, I think there is plenty of room for both operations, and I
 can see use cases for both. If I were designing I'd leave || as it is
 now and add a + operation to do a recursive merge. I'm not sure how much
 work that would be. Not huge but not trivial either.


 Agreed, if you look at jquery for example, the extend() method by default
 behaves like our current || and you have to specify that you want deep
 merge if you want the behavior described by Peter. So there is definitely
 point for both, at this time we just support only one of them, that's all.


 --
  Petr Jelinek  http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training  Services






Re: [HACKERS] WALWriteLock contention

2015-05-17 Thread Robert Haas
On May 17, 2015, at 11:04 AM, Amit Kapila amit.kapil...@gmail.com wrote:
 On Sun, May 17, 2015 at 7:45 AM, Robert Haas robertmh...@gmail.com wrote:
 
  <crazy-idea>I wonder if we could write WAL to two different files in
  alternation, so that we could be writing to one file while fsync-ing
  the other.</crazy-idea>
 
 Won't the order of transactions replay during recovery can cause
 problems if we do alternation while writing.  I think this is one of
 the reasons WAL is written sequentially.  Another thing is that during
 recovery, currently whenever we encounter mismatch in stored CRC
 and actual record CRC, we call it end of recovery, but with writing
 to 2 files simultaneously we might need to rethink that rule.

Well, yeah. That's why I said it was a crazy idea.

 I think first point in your mail related to rewrite of 8K block each
 time needs more thought and may be some experimentation to
 check whether writing in lesser units based on OS page size or
 sector size leads to any meaningful gains.  Another thing is that
 if there is high write activity, then group commits should help in
 reducing IO for repeated writes and in the tests we can try by changing
 commit_delay to see if that can help (if the tests are already tuned
 with respect to commit_delay, then ignore this point).

I am under the impression that using commit_delay usefully is pretty hard but, 
of course, I could be wrong.

...Robert

Re: [HACKERS] Problems with question marks in operators (JDBC, ECPG, ...)

2015-05-17 Thread David G. Johnston
On Sun, May 17, 2015 at 9:15 AM, Greg Sabino Mullane g...@turnstep.com
wrote:

 Dave Cramer wrote:
  Well our solution was to use ?? but that does mean we have to do some
  extra parsing which in a perfect world wouldn't be necessary.

 That's not a good solution as '??' is a perfectly valid operator. ISTR
 seeing it used somewhere in the wild, but I could be wrong.


In which case you would write (I think; not tested, and not part of the
test suite that I can see...):

a  b ...

There was some discussion about ?? vs \?:

https://github.com/pgjdbc/pgjdbc/pull/187
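For reference, the core of what a driver has to do for ?? can be sketched like this (a hedged Python toy, not pgjdbc's parser, which also has to handle dollar quoting, comments, and quoted identifiers):

```python
def unescape(sql):
    # In a JDBC query string "?" marks a bind parameter, so a literal
    # "?" operator must be written "??".  Single-quoted literals are
    # passed through untouched.
    out, params, i, in_string = [], 0, 0, False
    while i < len(sql):
        c = sql[i]
        if in_string:
            out.append(c)
            if c == "'":
                in_string = False
        elif c == "'":
            in_string = True
            out.append(c)
        elif c == "?" and i + 1 < len(sql) and sql[i + 1] == "?":
            out.append("?")              # "??" -> literal "?" operator
            i += 1
        elif c == "?":
            params += 1
            out.append("$%d" % params)   # bind-parameter placeholder
        else:
            out.append(c)
        i += 1
    return "".join(out), params

sql, n = unescape("SELECT j ?? 'key' FROM t WHERE id = ?")
print(sql, n)  # SELECT j ? 'key' FROM t WHERE id = $1  1
```

This is the extra parsing pass mentioned upthread that would be unnecessary in a perfect world.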

I did find some alternatives discussed a couple of years back, like
{postgres qm} and operator(?); the latter simply being to allow the
operator to be quoted inside operator()

http://postgresql.nabble.com/Alias-hstore-s-to-so-that-it-works-with-JDBC-td5743863i20.html

The commit that added ??:

https://github.com/pgjdbc/pgjdbc/pull/227

David J.


Re: [HACKERS] jsonb concatenate operator's semantics seem questionable

2015-05-17 Thread Robert Haas
 On May 16, 2015, at 10:56 PM, Peter Geoghegan p...@heroku.com wrote:
 
 Another thing that I noticed about the new jsonb stuff is that the
 concatenate operator is based on the hstore one. This works as
 expected:
 
 postgres=# select '{"a":1}'::jsonb || '{"a":2}';
 ?column?
 --
 {"a": 2}
 (1 row)
 
 However, the nesting doesn't match up -- containers are not merged
 beyond the least-nested level:
 
 postgres=# select '{"a":{"nested":1}}'::jsonb || '{"a":{"also nested":2}}';
 ?column?
 ---
 {"a": {"also nested": 2}}
 (1 row)
 
 This feels wrong to me. When jsonb was initially introduced, we took
 inspiration for the *containment* (operator @> jsonb) semantics from
 hstore, but since jsonb is nested it worked in a nested fashion. At
 the top level and with no nested containers there was no real
 difference, but we had to consider the behavior of more nested levels
 carefully (the containment operator is clearly the most important
 jsonb operator). I had envisaged that with the concatenation of jsonb,
 concatenation would similarly behave in a nested fashion. Under this
 scheme, the above query would perform nested concatenation as follows:
 
 postgres=# select '{"a":{"nested":1}}'::jsonb || '{"a":{"also
 nested":2}}'; -- does not match actual current behavior
 ?column?
 ---
 {"a": {"nested":1, "also nested": 2}}
 (1 row)

I'm not as much of a JSON user as some here, for sure, but for what it's worth 
my intuition here matches yours.

...Robert

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] RFC: Non-user-resettable SET SESSION AUTHORISATION

2015-05-17 Thread José Luis Tallón

On 05/17/2015 07:39 PM, Tom Lane wrote:

José Luis Tallón jltal...@adv-solutions.net writes:

On the other hand, ISTM that what we all intend to achieve is some
Postgres equivalent of the SUID bit... so why not just do something
equivalent?
---
  LOGIN                -- as a user with the appropriate role membership / privilege
  ...
  SET ROLE / SET SESSION AUTHORIZATION WITH COOKIE / IMPERSONATE
  ... do whatever ...  -- unprivileged user can NOT do the impersonate thing
  DISCARD ALL          -- implicitly restore previous authz
---

Oh?  What stops the unprivileged user from doing DISCARD ALL?


Indeed. The pooler would need to block this.
Or we would need to invent another (this time, privileged) verb in order 
to restore authz.



I think if we have something like this, it has to be non-resettable
period: you can't get back the old session ID except by reconnecting
and re-authorizing.  Otherwise there's just too much risk of security
holes.


Yes.
Thank you for your feedback, Tom.


/ J.L.



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Minor ON CONFLICT related fixes

2015-05-17 Thread Peter Geoghegan
On Tue, May 12, 2015 at 4:23 PM, Peter Geoghegan p...@heroku.com wrote:
 Rebased version of patch is attached.

FYI, I found an unrelated bug within ruleutils.c (looks like the
targetlist kludge in set_deparse_planstate() isn't sufficiently
general):

postgres=# explain insert into upsert as u values('Bat', 'Bar') on
conflict (key) do update set val = excluded.val where exists (select 1
from upsert ii where ii.key = excluded.key);
ERROR:  XX000: bogus varno: 65000
LOCATION:  get_variable, ruleutils.c:5916

Obviously Vars of varno INNER_VAR are not expected here -- the subplan
makes set_deparse_planstate() not prepare get_variable() in the same
way that it prepares similar though simpler cases (e.g. no
subquery/subplan).

I don't have any bright ideas about how to fix this offhand. I believe
in principle that we ought to be able to fish through parent planstate
within set_deparse_planstate(), just in case it is called under these
circumstances, but that's pretty ugly, and is usually going to be
unnecessary. Doesn't look like there is a good way to delay the work
till get_variable() can see what looks like this case, either.
-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] PATCH: use foreign keys to improve join estimates v1

2015-05-17 Thread David Rowley
On 7 April 2015 at 13:41, Tomas Vondra tomas.von...@2ndquadrant.com wrote:


 (1) The current patch only does the trick when the FK matches the
 conditions perfectly - when there are no missing columns (present
 in the FK, not covered by a condition).


Hi Tomas,

I did glance at this patch a while back, but I've just been thinking on it again.

I think if you find any quals that are part of *any* foreign key between
the 2 join tables, then you should never assume these quals reduce
the number of rows. I believe this should be the case even if you don't
fully match all columns of the foreign key.

If we modify your example a little, let's say your foreign key between fact
and dim is made from 3 columns (a,b,c).

If we do:

EXPLAIN SELECT * FROM fact f JOIN dim d USING (a,b);

Then we should always (under normal circumstances) find at least one
matching row, although in this case since the join qual for c is missing,
we could find more than 1 matching row.

Without digging too deep here, I'd say that the best way to do this would
be to either have calc_joinrel_size_estimate() build a list of restrictinfo
items of all quals that are NOT part of any foreign key and pass that
trimmed list down to clauselist_selectivity() for selectivity estimates. Or
perhaps a better way would be just to teach clauselist_selectivity() about
foreign keys. Likely clauselist_selectivity() would just have to skip over
RestrictInfo items that are part of a foreign key.
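To make the intuition concrete, here is a toy model of the proposed estimate (plain Python with a made-up function name; the real planner works on RestrictInfo lists and is far more involved). The point is simply that quals covered by a foreign key should be charged one combined selectivity of 1/n_referenced rather than the product of their individual selectivities:

```python
def estimate_join_rows(n_outer, n_inner, qual_selectivities, fk_quals):
    # Toy join-size estimate. qual_selectivities maps join-qual name ->
    # its independent selectivity; fk_quals names the quals known to be
    # columns of a foreign key between the two tables. (Hypothetical
    # inputs for illustration only.)
    sel = 1.0
    if fk_quals & set(qual_selectivities):
        # The FK guarantees each referencing row matches exactly one
        # referenced row on the full key, so charge the covered quals a
        # single combined selectivity of 1/n_inner instead of multiplying
        # their individual selectivities together.
        sel *= 1.0 / n_inner
    for name, s in qual_selectivities.items():
        if name not in fk_quals:
            sel *= s  # non-FK quals are estimated as before
    return n_outer * n_inner * sel

quals = {"a": 1e-3, "b": 1e-3, "c": 1e-3}
# Naive multiplication estimates ~1 row for the million-row join...
naive = estimate_join_rows(1_000_000, 1_000, quals, set())
# ...while the FK-aware estimate expects ~1 match per fact row.
fk_aware = estimate_join_rows(1_000_000, 1_000, quals, {"a", "b", "c"})
```

Note the sketch ignores the partial-match case above (joining only on a and b), which would still need an "at least one matching row" floor rather than a straight 1/n_inner.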

Regards

David Rowley


[HACKERS] fix typos

2015-05-17 Thread Dmitriy Olshevskiy

Hello. Please see this patch fixing some typos.
Thank you.

--
Dmitriy Olshevskiy

From e9c463e50efdaa1fbdcabea92cd95cbffea3d3bd Mon Sep 17 00:00:00 2001
From: olshevskiy87 olshevski...@bk.ru
Date: Sun, 17 May 2015 15:40:40 +0400
Subject: [PATCH] fix typos

---
 contrib/tsm_system_time/tsm_system_time.c   |  2 +-
 doc/src/sgml/logicaldecoding.sgml   |  2 +-
 doc/src/sgml/ref/pg_dumpall.sgml|  2 +-
 src/backend/access/gin/ginpostinglist.c | 14 +++---
 src/backend/access/heap/heapam.c|  2 +-
 src/backend/access/nbtree/README|  2 +-
 src/backend/access/transam/xact.c   |  4 ++--
 src/backend/replication/logical/snapbuild.c |  2 +-
 src/backend/tsearch/dict_synonym.c  |  2 +-
 src/backend/utils/adt/jsonfuncs.c   |  2 +-
 src/test/regress/expected/event_trigger.out |  2 +-
 src/test/regress/sql/event_trigger.sql  |  2 +-
 12 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/contrib/tsm_system_time/tsm_system_time.c b/contrib/tsm_system_time/tsm_system_time.c
index efb127c..9af9e74 100644
--- a/contrib/tsm_system_time/tsm_system_time.c
+++ b/contrib/tsm_system_time/tsm_system_time.c
@@ -267,7 +267,7 @@ tsm_system_time_cost(PG_FUNCTION_ARGS)
 			  NULL);
 
 	/*
-	 * Assumption here is that we'll never read less then 1% of table pages,
+	 * Assumption here is that we'll never read less than 1% of table pages,
 	 * this is here mainly because it is much less bad to overestimate than
 	 * underestimate and using just spc_random_page_cost will probably lead
 	 * to underestimations in general.
diff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml
index 00f6eee..5fa2f77 100644
--- a/doc/src/sgml/logicaldecoding.sgml
+++ b/doc/src/sgml/logicaldecoding.sgml
@@ -649,7 +649,7 @@ OutputPluginWrite(ctx, true);
     <title>Synchronous Replication Support for Logical Decoding</title>
 
     <para>
-     Logical decoding can be used to to build
+     Logical decoding can be used to build
      <link linkend="synchronous-replication">synchronous
      replication</link> solutions with the same user interface as synchronous
      replication for <link linkend="streaming-replication">streaming
diff --git a/doc/src/sgml/ref/pg_dumpall.sgml b/doc/src/sgml/ref/pg_dumpall.sgml
index fcf5f77..0444f09 100644
--- a/doc/src/sgml/ref/pg_dumpall.sgml
+++ b/doc/src/sgml/ref/pg_dumpall.sgml
@@ -454,7 +454,7 @@ PostgreSQL documentation
       <term><option>--database=<replaceable>dbname</replaceable></option></term>
       <listitem>
        <para>
-         Specifies the name of the database to connect to to dump global
+         Specifies the name of the database to connect to dump global
          objects and discover what other databases should be dumped. If
          not specified, the <literal>postgres</literal> database will be used,
          and if that does not exist, <literal>template1</literal> will be used.
diff --git a/src/backend/access/gin/ginpostinglist.c b/src/backend/access/gin/ginpostinglist.c
index 6337b1a..e6d7c1a 100644
--- a/src/backend/access/gin/ginpostinglist.c
+++ b/src/backend/access/gin/ginpostinglist.c
@@ -23,7 +23,7 @@
 /*
  * For encoding purposes, item pointers are represented as 64-bit unsigned
  * integers. The lowest 11 bits represent the offset number, and the next
- * lowest 32 bits are the block number. That leaves 17 bits unused, ie.
+ * lowest 32 bits are the block number. That leaves 17 bits unused, i.e.
  * only 43 low bits are used.
  *
  * These 43-bit integers are encoded using varbyte encoding. In each byte,
@@ -51,16 +51,16 @@
  * Removing number is actually replacement of two numbers with their sum. We
  * have to prove that varbyte encoding of a sum can't be longer than varbyte
  * encoding of its summands. Sum of two numbers is at most one bit wider than
- * than the larger of the summands. Widening a number by one bit enlarges its
- * length in varbyte encoding by at most one byte. Therefore, varbyte encoding
- * of sum is at most one byte longer than varbyte encoding of larger summand.
- * Lesser summand is at least one byte, so the sum cannot take more space than
- * the summands, Q.E.D.
+ * the larger of the summands. Widening a number by one bit enlarges its length
+ * in varbyte encoding by at most one byte. Therefore, varbyte encoding of sum
+ * is at most one byte longer than varbyte encoding of larger summand. Lesser
+ * summand is at least one byte, so the sum cannot take more space than the
+ * summands, Q.E.D.
  *
  * This property greatly simplifies VACUUM, which can assume that posting
  * lists always fit on the same page after vacuuming. Note that even though
  * that holds for removing items from a posting list, you must also be
- * careful to not cause expansion e.g when merging uncompressed items on the
+ * careful to not cause expansion e.g. when merging uncompressed items on the
  * page into the compressed lists, when vacuuming.
  */
 
diff --git a/src/backend/access/heap/heapam.c 

Re: [HACKERS] fix typos

2015-05-17 Thread Magnus Hagander
On Sun, May 17, 2015 at 1:46 PM, Dmitriy Olshevskiy olshevski...@bk.ru
wrote:

 Hello. Please, see this patch with typos.
 Thank you.


Thanks, applied!

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] jsonb concatenate operator's semantics seem questionable

2015-05-17 Thread Andrew Dunstan


On 05/16/2015 10:56 PM, Peter Geoghegan wrote:

Another thing that I noticed about the new jsonb stuff is that the
concatenate operator is based on the hstore one. This works as
expected:

postgres=# select '{"a":1}'::jsonb || '{"a":2}';
?column?
--
  {"a": 2}
(1 row)

However, the nesting doesn't match up -- containers are not merged
beyond the least-nested level:

postgres=# select '{"a":{"nested":1}}'::jsonb || '{"a":{"also nested":2}}';
  ?column?
---
  {"a": {"also nested": 2}}
(1 row)

This feels wrong to me. When jsonb was initially introduced, we took
inspiration for the *containment* (operator @> jsonb) semantics from
hstore, but since jsonb is nested it worked in a nested fashion. At
the top level and with no nested containers there was no real
difference, but we had to consider the behavior of more nested levels
carefully (the containment operator is clearly the most important
jsonb operator). I had envisaged that with the concatenation of jsonb,
concatenation would similarly behave in a nested fashion. Under this
scheme, the above query would perform nested concatenation as follows:

postgres=# select '{"a":{"nested":1}}'::jsonb || '{"a":{"also
nested":2}}'; -- does not match actual current behavior
  ?column?
---
  {"a": {"nested":1, "also nested": 2}}
(1 row)

Now, I think it's good that the minus operator (operator - text and
friends) discussed on the nearby thread accepts a text (or int)
argument and removes string elements/pairs at the top level only. This
works exactly the same as existence (I happen to think that removing
elements/pairs at a nested level is likely to be more trouble than
it's worth, and so I don't really like the new jsonb - text[]
operator much, because it accepts a Postgres (not JSON) array of texts
that constitute a path, which feels odd). So I have no issue with at
least the plain minus operators' semantics. But I think that the
concatenate operator's current semantics are significantly less useful
than they could be, and are not consistent with the overall design of
jsonb.

I'm particularly concerned about a table containing many homogeneously
structured, deeply nested jsonb datums (think of the delicious URLs
dataset that jsonb was originally tested using for a good example of
that -- this is quite representative of how people use jsonb in the
real world). It would be almost impossible to perform insert-or-update
type operations to these deeply nested elements using hstore style
concatenation. You'd almost invariably end up removing a bunch of
irrelevant nested values of the documents, when you only intended to
update one deeply nested value.

Looking back at the discussion of the new jsonb stuff, a concern was
raised along these lines by Ilya Ashchepkov [1], but this was
dismissed. I feel pretty strongly that this should be revisited. I'm
willing to concede that we might not want to always merge containers
that are found in the same position during concatenation, but I think
it's more likely that we do. As with containment, my sense is that
there should be nothing special about the nesting level -- it should
not influence whether we merge rather than overwrite the operator's
lhs container (with or into the rhs container). Not everyone will
agree with this [2].

I'm sorry that I didn't get to this sooner, but I was rather busy when
it was being discussed.

[1] http://www.postgresql.org/message-id/55006879.2050...@dunslane.net
[2] http://www.postgresql.org/message-id/54ef61dd.7040...@agliodbs.com


Historical note: I think it's based on the nested hstore work, not on 
current hstore, but Dmitry can answer on that.


I didn't dismiss this because it was a bad idea, but because it was too 
late in the process. If there is a consensus that we need to address 
this now then I'm happy to reopen that, but given the recent amount of 
angst about process I'm certainly not going to make such a decision 
unilaterally.


Personally, I think there is plenty of room for both operations, and I 
can see use cases for both. If I were designing I'd leave || as it is 
now and add a + operation to do a recursive merge. I'm not sure how much 
work that would be. Not huge but not trivial either.
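For what it's worth, the two candidate semantics are easy to contrast outside SQL. Below is a sketch in Python, with dicts standing in for jsonb objects; shallow_concat mirrors the current hstore-style || behavior, and deep_concat is the kind of recursive merge a hypothetical + operator could offer (both names invented for illustration, not proposed API):

```python
def shallow_concat(lhs, rhs):
    # Current || behavior: top-level keys from rhs simply overwrite
    # the corresponding keys in lhs, whatever they contain.
    out = dict(lhs)
    out.update(rhs)
    return out

def deep_concat(lhs, rhs):
    # Recursive merge: where both sides hold an object under the same
    # key, merge the nested objects instead of overwriting; otherwise
    # rhs still wins.
    out = dict(lhs)
    for key, rval in rhs.items():
        lval = out.get(key)
        if isinstance(lval, dict) and isinstance(rval, dict):
            out[key] = deep_concat(lval, rval)
        else:
            out[key] = rval
    return out

a = {"a": {"nested": 1}}
b = {"a": {"also nested": 2}}
shallow_concat(a, b)  # {'a': {'also nested': 2}} -- the nested 1 is lost
deep_concat(a, b)     # {'a': {'nested': 1, 'also nested': 2}}
```

The same recursion is what would let an insert-or-update on one deeply nested value leave its sibling values intact.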


cheers

andrew





--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers