Re: [HACKERS] Accessing non catalog table in backend

2016-01-04 Thread Atri Sharma
Thanks!

I was wary of using SPI inside the executor for node evaluation functions.
Does it seem safe?

Regards,

Atri
On 5 Jan 2016 12:20 am, "Jim Nasby"  wrote:

> On 1/4/16 12:07 PM, Atri Sharma wrote:
>
>> Hi All,
>>
>> I wanted to check if it is possible to query a non catalog table in
>> backend.
>>
>> I understand that cql interface is only for catalog querying hence it is
>> not usable for this purpose per se.
>>
>
> AFAIK it's possible to do it with low level routines, but I think unless
> you have a really good reason to go that route you're much better off just
> using SPI. If it's good enough for plpgsql... :)
> --
> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
> Experts in Analytics, Data Architecture and PostgreSQL
> Data in Trouble? Get it in Treble! http://BlueTreble.com
>


Re: [HACKERS] Additional role attributes && superuser review

2016-01-04 Thread Stephen Frost
* Michael Paquier (michael.paqu...@gmail.com) wrote:
> On Thu, Dec 31, 2015 at 4:26 PM, Noah Misch  wrote:
> > The proposed pg_replication role introduces abstraction that could, as you
> > hope, spare a DBA from studying sets of functions to grant together.  The
> > pg_rotate_logfile role, however, does not shield the DBA from complexity.
> > Being narrowly tied to a specific function, it's just a suboptimal spelling 
> > of
> > GRANT.  The gap in GRANT has distorted the design for these predefined 
> > roles.
> > I do not anticipate a sound design discussion about specific predefined 
> > roles
> > so long as the state of GRANT clouds the matter.
> 
> As the patch stands, the system roles that are just close to synonyms
> of GRANT/REVOKE are the following:
> - pg_file_settings, which works just on the system view
> pg_file_settings and the function pg_show_all_file_settings().
> - pg_rotate_logfile as mentioned already.

The above are fine, I believe.

> - pg_signal_backend, which is just checked once in pg_signal_backend

This is a bit more complicated.  pg_signal_backend() isn't directly
exposed; pg_cancel_backend() and pg_terminate_backend() are.  That's not
to say we should necessarily keep the pg_signal_backend role, just that
it's more than just a single function.

Further, pg_terminate_backend and pg_cancel_backend are callable by
anyone today.  We'd have to invent new functions or change user-visible
behavior in an unfortunate way: we don't want *anyone* to be able to
call those functions on *anyone* unless they've been specifically
granted that access.  Today, you can cancel and/or terminate your own
backends and superusers can cancel and/or terminate anyone's.  The point
of pg_signal_backend was to allow non-superusers to cancel and/or
terminate any non-superuser-started backends.  With the default role
approach, we can provide exactly the intended semantics and not break
backwards compatibility for those functions.
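A sketch of those intended semantics in SQL (only pg_signal_backend is from the patch under discussion; the grantee and application role names are invented for illustration):

```sql
-- A superuser grants the default role to an operator account:
GRANT pg_signal_backend TO ops_user;

-- ops_user can then cancel or terminate backends started by
-- non-superusers, but still not superuser backends:
SELECT pg_cancel_backend(pid)
  FROM pg_stat_activity
 WHERE usename = 'app_user';
```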

> Based on those concerns, it looks clear that they should be shaved off
> from the patch.

I'd like to hear back regarding the summary email that I sent to Noah
and the follow-up I sent to Robert, when they have time to reply, of
course.  It's certainly easy enough to remove those default roles if
that's the consensus though.

> >> > To summarize, I think the right next step is to resume designing pg_dump
> >> > support for system object ACLs.  I looked over your other two patches 
> >> > and will
> >> > unshelve those reviews when their time comes.
> >>
> >> To be clear, I don't believe the two patches are particularly involved
> >> with each other and don't feel that one needs to wait for the other.
> >
> > Patch 2/3 could stand without patch 3/3, but not vice-versa.  It's patch 2/3
> > that makes pg_dumpall skip ^pg_ roles, and that must be in place no later 
> > than
> > the first patch that adds a predefined ^pg_ role.
> 
> I am a bit confused by this statement, and I disagree with Stephen's
> point as well. It seems to me that 2/3 exists *because* 3/3 is here.
> Why would we want to restrict the role names that can be used if we
> are not going to introduce some system roles that are created at
> bootstrap?

My comment was, evidently, not anywhere near clear enough.  The two
patches I was referring to were the pg_dump-support-for-catalog-ACLs and
the default-roles patches.  Apologies for the confusion there.

Thanks!

Stephen




Re: [HACKERS] 9.5 BLOCKER: regrole and regnamespace and quotes

2016-01-04 Thread Jim Nasby

On 1/3/16 10:37 PM, Tom Lane wrote:

It's not a release stopper, but I plan to fix it in HEAD whenever I have
an idle hour.


Since I'm sure there are much better things you can be working on, I was 
going to just do this myself. Then it occurred to me that this would be 
a great item for a new hacker to handle, so I'm going to figure out what 
we're doing with those things nowadays and put it there.


If no one picks it up I'll get it into the last commitfest.
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] custom function for converting human readable sizes to bytes

2016-01-04 Thread Alvaro Herrera
Robert Haas wrote:

> But you could also write SELECT relname FROM pg_class WHERE
> pg_relation_size(oid) > 100 * 1024^3, which is actually fewer
> characters.  Maybe pg_size_bytes('100 GB') is easier for some people
> to remember than 100 * 1024^3, but I'm probably not one of those
> people.

Nah, that might work for geek types, but I doubt it's the preferred
spelling for most people.  I think the proposal is quite reasonable.

If we were only catering for people who can do 2^10 arithmetic off the
top of their heads, we wouldn't have pg_size_pretty at all, would we?

-- 
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Accessing non catalog table in backend

2016-01-04 Thread Jim Nasby

On 1/4/16 12:07 PM, Atri Sharma wrote:

Hi All,

I wanted to check if it is possible to query a non catalog table in backend.

I understand that cql interface is only for catalog querying hence it is
not usable for this purpose per se.


AFAIK it's possible to do it with low level routines, but I think unless 
you have a really good reason to go that route you're much better off 
just using SPI. If it's good enough for plpgsql... :)
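For readers arriving from the archives, a minimal SPI call from backend C code looks roughly like this.  This is a sketch only: error handling is elided, the function name and trust assumptions about "tablename" are made up, and SPI_processed's integer width has varied across versions.

```c
#include "postgres.h"
#include "executor/spi.h"
#include "lib/stringinfo.h"

/*
 * Sketch: count the rows of an arbitrary (non-catalog) table via SPI.
 * "tablename" is assumed trusted/pre-quoted for brevity.
 */
static uint64
count_rows(const char *tablename)
{
    StringInfoData buf;
    uint64      result = 0;

    initStringInfo(&buf);
    appendStringInfo(&buf, "SELECT 1 FROM %s", tablename);

    SPI_connect();
    if (SPI_execute(buf.data, true /* read-only */, 0) == SPI_OK_SELECT)
        result = SPI_processed;
    SPI_finish();

    pfree(buf.data);
    return result;
}
```

Note that SPI must be entered from a context where a transaction is in progress, which is part of why calling it from arbitrary executor code is dicey.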

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] remove wal_level archive

2016-01-04 Thread Robert Haas
On Mon, Jan 4, 2016 at 11:04 AM, Alvaro Herrera
 wrote:
> Peter Eisentraut wrote:
>> So we've had several rounds of discussions about simplifying replication
>> configuration in general and the wal_level setting in particular. [0][1]
>>  Let's get something going.
>
> I looked at this patch, which I think has got enough consensus that you
> should just push forward with the proposed design -- in particular, just
> remove one of archive or hot_standby values, not keep it as a synonym of
> the other.  If we're counting votes, I prefer keeping hot_standby over
> archive.

I see precisely 0 votes for that alternative upthread.  I came the
closest of anyone to endorsing that proposal, I think, but my
preferred alternative is to change nothing.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Additional role attributes && superuser review

2016-01-04 Thread Stephen Frost
* Robert Haas (robertmh...@gmail.com) wrote:
> On Tue, Dec 29, 2015 at 5:35 AM, Stephen Frost  wrote:
> > * Noah Misch (n...@leadboat.com) wrote:
> >> > Updated patch attached.  I'll give it another good look and then commit
> >> > it, barring objections.
> >>
> >> This thread and its satellite[1] have worked their way through a few 
> >> designs.
> >> At first, it was adding role attributes, alongside existing attributes like
> >> REPLICATION and BYPASSRLS.  It switched[2] to making pg_dump preserve ACLs 
> >> on
> >> system objects.  Built-in roles joined[3] the pg_dump work to offer 
> >> predefined
> >> collections of ACL grants.  Finally, it dropped[4] the pg_dump side and
> >> hard-coded the roles into the features they govern.
> >
> > Correct, after quite a bit of discussion and the conclusion that, while
> > pg_dump support for dumping ACLs might be interesting, it was quite a
> > bit more complex an approach than this use-case justified.
> 
> Hmm.  I don't think I agree with that conclusion.  Who were the
> participants in that discussion, and how many people spoke in favor
> of moving on from that proposal - which I rather liked - to what
> you've got now?  Do you have links to the relevant portion of the
> relevant thread?

I'm not sure it's entirely relevant now; I've outlined the reasoning in
my email to Noah as a, hopefully, pretty comprehensive summary.  If that
doesn't sway your minds then it seems unlikely that a reference to a
thread from 6 months or a year ago would.  Further, as happens, other
discussions were in person where I discussed the ideas with other
hackers at conferences.  I got generally positive responses to the idea
of default roles with specific sets of rights, which is the path that
I've been on since.  As with most decisions, there was not a formal vote
over the proposals for me to reference.  I do specifically recall the
opinion that sets of privileges probably make more sense than granting
out individual ones, from Tom, if I'm remembering correctly.

In any case, I can work on the pg_dump support for catalog ACLs if there
is agreement now on that being the direction to go in.  If there's
agreement that we are happy with the idea of default roles also then I
can strip out those few which are only one-permission and which
therefore wouldn't be necessary, if we had ACL support on catalog
objects, quite easily.

> I think it's not a very good thing if we add roles that allow, say,
> execution of exactly one SQL function.  The
> dump-grants-on-system-objects proposal would accomplish the same thing
> in a much more flexible way, and with less catalog clutter.

For my 2c, I don't see a default role or two as creating terribly much
clutter.  Changing all of our code that currently has internal checks to
rely on the external checks and adjusting the permissions on the
individual functions will be a fair bit of churn though.

> More
> broadly, I'm not very convinced that even the roles which allow for
> rights on multiple objects are going to meet with general approval.

There's been rather little opposition to the idea, and support when I've
discussed it.  Of course, that was before it got to the point where I
was planning to commit it.  Perhaps there will be once it's actually in,
or maybe not until it's in the wild.  In any case, I continue to feel,
as others have, that we can make adjustments moving forward.

> There has been enough discussion of which roles should be created and
> which things should be included in each one, and the overall tenor of
> that discussion seems to be that different people have different
> ideas.

Michael had a question about pg_switch_xlog, but he appeared to
reconsider that position after the subsequent discussion, which put us
back to essentially the same proposal that I started with, I believe.  I
don't recall terribly much other discussion or concern about what roles
should be created or what should be included in each one, though I'd be
happy to hear your thoughts on what you'd like to see.

I certainly don't like the idea of punting all of this to the user to
figure out.  Even if that does meet the specific requirements our clients
have asked for, it strikes me that we can do better.

Thanks!

Stephen




Re: [HACKERS] Broken lock management in policy.c.

2016-01-04 Thread Stephen Frost
* Peter Geoghegan (p...@heroku.com) wrote:
> On Mon, Jan 4, 2016 at 12:13 PM, Tom Lane  wrote:
> > I've pushed an example based on your original test case.  Feel free
> > to suggest improvements, although at this point they'll probably
> > land in 9.5.1.
> 
> I think that that's a vast improvement. I probably should have pushed
> for a worked out example myself, but there are only so many hours in
> the day.

Agreed on it being a vast improvement and that there are only so many
hours in the day.

> I've reviewed your changes, and have nothing further to add.

Was just doing that myself and I agree that it looks good.

Thanks!

Stephen




Re: [HACKERS] Broken lock management in policy.c.

2016-01-04 Thread Tom Lane
[ getting back to this now that there's a little time ]

Peter Geoghegan  writes:
> On Sun, Jan 3, 2016 at 7:01 PM, Peter Geoghegan  wrote:
>> I would also advise only referencing a single relation within the
>> SELECT FOR UPDATE.

> To state what may be obvious: We should recommend that SELECT FOR
> SHARE appear in the CREATE POLICY USING qual as part of this
> workaround (not SELECT FOR UPDATE), because there is no need for
> anything stronger than that. We only need to prevent the admin
> updating a referenced-in-using-qual tuple in a way that allows a
> malicious user to exploit an inconsistency in tuple visibility during
EPQ recheck. (Using SELECT FOR KEY SHARE would not reliably work around
the underlying issue, though.)

Right, SELECT FOR SHARE would be sufficient and would reduce the
concurrency penalty a bit.

It might be possible to use SELECT FOR KEY SHARE if you knew that
the column you needed to check was a unique-key column, but that
seems unlikely to be common, so I think we can omit the point from
our example.
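As an illustration of the workaround under discussion (the table, column, and policy names here are invented for the example):

```sql
-- A USING qual that reads another table.  Adding FOR SHARE locks the
-- referenced rows, so a concurrent UPDATE of "managers" cannot create
-- a visibility inconsistency during the EPQ recheck:
CREATE POLICY manager_rows ON accounts
    USING (manager IN (SELECT username FROM managers FOR SHARE));
```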

I'll go draft something up ...

regards, tom lane




Re: [HACKERS] pgsql: Further tweaking of print_aligned_vertical().

2016-01-04 Thread Greg Stark
On Mon, Jan 4, 2016 at 6:18 PM, Tom Lane  wrote:
>> VAX (simh):
>> https://docs.google.com/presentation/d/1PG-bMU4WiS1pjtBwRvH4cE-nN9y5gj8ZLCO7KMrlvYg/view
>
>> Fuzzer:
>> https://docs.google.com/presentation/d/12Dd9Bhcugkdi2w0ye4T1fy9ccjconJEz9cNthdeyH7k/view
>
> Very cool, thanks!

Incidentally, I did boot up the simulator again the other day because
I had had an idea for how we might get the regression tests to pass.
However the idea didn't pan out. I thought maybe we could compile just
float.c with -msoft-float so that the backend used VAX floats and the
user datatype was represented by IEEE floats. I'm sure there are a few
places where we assume we can convert one to the other but I was
hoping it would be few enough to be manageable to fix up. I wasn't
sure how feasible it was to have a mixed build at all and I was going
to experiment.

However gcc doesn't support -msoft-float on VAX. Apparently
-msoft-float is implemented by each architecture backend rather than
as a portable general-purpose feature :(


-- 
greg




Re: [HACKERS] remove wal_level archive

2016-01-04 Thread Robert Haas
On Mon, Jan 4, 2016 at 3:05 PM, Alvaro Herrera  wrote:
> Robert Haas wrote:
>> On Mon, Jan 4, 2016 at 11:04 AM, Alvaro Herrera
>>  wrote:
>> > Peter Eisentraut wrote:
>> >> So we've had several rounds of discussions about simplifying replication
>> >> configuration in general and the wal_level setting in particular. [0][1]
>> >>  Let's get something going.
>> >
>> > I looked at this patch, which I think has got enough consensus that you
>> > should just push forward with the proposed design -- in particular, just
>> > remove one of archive or hot_standby values, not keep it as a synonym of
>> > the other.  If we're counting votes, I prefer keeping hot_standby over
>> > archive.
>>
>> I see precisely 0 votes for that alternative upthread.  I came the
>> closest of anyone to endorsing that proposal, I think, but my
>> preferred alternative is to change nothing.
>
> Hm?  Perhaps the problem is that the thread stood on the shoulders of
> larger threads.  Andres Freund and Magnus Hagander both expressed,
> independently, their desire to see one of these modes go away:
> http://www.postgresql.org/message-id/20150203124317.ga24...@awork2.anarazel.de
> http://www.postgresql.org/message-id/CABUevEy15Y=sF8doKjD86eujJZL=Tq2jyUSiYVkg6EVwVt=c...@mail.gmail.com

OK, I see.

> Actually, in the first of these threads
> http://www.postgresql.org/message-id/flat/20150203124317.ga24...@awork2.anarazel.de
> there's a lot of discussion about getting rid of wal_level completely
> instead, but it doesn't look like there's any movement at all in that
> direction.  I think we should take this pretty small change that
> improves things a bit in that direction, then others can continue to
> propose further simplifications to our configuration in that area.
>
> We could continue to do nothing, but then we've already been doing that
> for some time and it hasn't changed things much.

True.  But it hasn't really caused much trouble either.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] ALTER TABLE behind-the-scenes effects' CONTEXT

2016-01-04 Thread Pavel Stehule
2016-01-04 21:44 GMT+01:00 Alvaro Herrera :

> Pavel Stehule wrote:
> > 2015-10-05 0:08 GMT+02:00 Marko Tiikkaja :
> >
> > > In the past I've found the error message in cases such as this somewhat
> > > less helpful than it could be:
> > >
> > > =# CREATE TABLE qqq (a int);
> > > =# CREATE UNIQUE INDEX IF NOT EXISTS qqq_a_idx ON qqq(a);
> > > =# ALTER TABLE qqq ALTER COLUMN a TYPE json USING NULL;
> > > ERROR:  data type json has no default operator class for access method
> > > "btree"
> > > HINT:  You must specify an operator class for the index or define a
> > > default operator class for the data type.
> > >
> > > The attached patch adds a CONTEXT line to index and constraint
> rebuilds,
> > > e.g:
> > >
> > >   CONTEXT:  while rebuilding index qqq_a_idx
> >
> > I prefer using DETAIL field for this case.
>
> I think DETAIL doesn't work for this; the advantage of CONTEXT is that
> it can be set not at the location of the actual error but at the calling
> code, by setting up a callback.  I think Marko got that right.
>

ok

Regards

Pavel


>
> I'd add some quoting to the object name in the message, and make sure we
> don't crash if the name isn't set up (i.e. test for nullness).  But
> also, why do we set the name only in ATPostAlterTypeParse()?  Maybe
> there are more ALTER TABLE subcommands that should be setting something
> up?  In cases where multiple subcommands are being run, it might be
> useful to see which one caused a certain error message.
>
> I think some additional tests wouldn't hurt.
>
> I await feedback from Simon Riggs, who set himself up as reviewer a
> couple of days ago.  Simon, do you also intend to be committer?  If so,
> please mark yourself as such in the commitfest app.
>
> --
> Álvaro Herrera    http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>


Re: [HACKERS] remove wal_level archive

2016-01-04 Thread Alvaro Herrera
Robert Haas wrote:
> On Mon, Jan 4, 2016 at 11:04 AM, Alvaro Herrera
>  wrote:
> > Peter Eisentraut wrote:
> >> So we've had several rounds of discussions about simplifying replication
> >> configuration in general and the wal_level setting in particular. [0][1]
> >>  Let's get something going.
> >
> > I looked at this patch, which I think has got enough consensus that you
> > should just push forward with the proposed design -- in particular, just
> > remove one of archive or hot_standby values, not keep it as a synonym of
> > the other.  If we're counting votes, I prefer keeping hot_standby over
> > archive.
> 
> I see precisely 0 votes for that alternative upthread.  I came the
> closest of anyone to endorsing that proposal, I think, but my
> preferred alternative is to change nothing.

Hm?  Perhaps the problem is that the thread stood on the shoulders of
larger threads.  Andres Freund and Magnus Hagander both expressed,
independently, their desire to see one of these modes go away:
http://www.postgresql.org/message-id/20150203124317.ga24...@awork2.anarazel.de
http://www.postgresql.org/message-id/CABUevEy15Y=sF8doKjD86eujJZL=Tq2jyUSiYVkg6EVwVt=c...@mail.gmail.com

Actually, in the first of these threads
http://www.postgresql.org/message-id/flat/20150203124317.ga24...@awork2.anarazel.de
there's a lot of discussion about getting rid of wal_level completely
instead, but it doesn't look like there's any movement at all in that
direction.  I think we should take this pretty small change that
improves things a bit in that direction, then others can continue to
propose further simplifications to our configuration in that area.

We could continue to do nothing, but then we've already been doing that
for some time and it hasn't changed things much.

-- 
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Accessing non catalog table in backend

2016-01-04 Thread Jim Nasby

On 1/4/16 12:53 PM, Atri Sharma wrote:
Please don't top-post.

On 5 Jan 2016 12:20 am, "Jim Nasby"  wrote:

On 1/4/16 12:07 PM, Atri Sharma wrote:

Hi All,

I wanted to check if it is possible to query a non catalog table
in backend.

I understand that cql interface is only for catalog querying
hence it is
not usable for this purpose per se.


AFAIK it's possible to do it with low level routines, but I think
unless you have a really good reason to go that route you're much
better off just using SPI. If it's good enough for plpgsql... :)


> I was wary of using SPI inside the executor for node evaluation functions.
> Does it seem safe?

Oh, inside the executor? Yeah, I doubt attempting SPI there would end 
well...


What exactly are you trying to do?
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] Broken lock management in policy.c.

2016-01-04 Thread Peter Geoghegan
On Mon, Jan 4, 2016 at 12:13 PM, Tom Lane  wrote:
> I've pushed an example based on your original test case.  Feel free
> to suggest improvements, although at this point they'll probably
> land in 9.5.1.

I think that that's a vast improvement. I probably should have pushed
for a worked out example myself, but there are only so many hours in
the day.

I've reviewed your changes, and have nothing further to add.

Thanks
-- 
Peter Geoghegan




Re: [HACKERS] Additional role attributes && superuser review

2016-01-04 Thread Robert Haas
On Mon, Jan 4, 2016 at 12:55 PM, Stephen Frost  wrote:
> I'd like to be able to include, in both of those, a simple set of
> instructions for granting the necessary rights to the user who is
> running those processes.  A set of rights which an administrator can go
> look up and easily read and understand the result of those grants.  For
> example:
>
> check_postgres:
>
>   Most check_postgres sub-commands can be run without superuser
>   privileges by granting the pg_monitor role to the monitoring user:
>
>   GRANT pg_monitor TO monitor;
>
>   For information regarding the pg_monitor role, see:
>   http://www.postgresql.org/docs/current/static/roles/database-roles.html
>
> pgbackrest:
>
>   To run pgbackrest as a non-superuser and not the 'postgres' system
>   user, grant the pg_backup role to the backrest user and ensure the
>   backrest system user has read access to the database files (eg: by
>   having the system user be a member of the 'postgres' group):
>
>   GRANT pg_backup to backrest;
>
>   For information regarding the pg_backup role, see:
>   http://www.postgresql.org/docs/current/static/roles/database-roles.html

So I have two comments on this.

First, it's not really going to matter to users very much whether the
command to enable one of these features is a single GRANT command or a
short sequence of GRANT commands executed one after another.  So even
if we don't have roles that bundle together multiple permissions, you
will still be able to do this; you just might need more than one
command.  Probably, I'm guessing, not very many commands, but more
than one.

Second, I think it's really unlikely that you're going to maintain
perfect alignment between your predefined roles and the permissions
that third-party tools need over the course of multiple releases.  I
think the chances of that working out are just about zero.  Sure, you
can make the initial set of permissions granted to pg_backup match
exactly what pgbackrest needs, but it's a good bet that one of the six
or eight widely-used backup tools uses something that's not included
in that set.  And even if not, it doesn't require very much
imagination to suppose that some tool, maybe pgbackrest or maybe
something else, that comes along over the next few releases will
require some extra permission.  When that happens, will we add
that permission to the pg_backup role, or not?  If we do, then we'll
be giving excess permissions to all the other backup tools that don't
need that new right, and maybe surprising users who upgrade without
realizing that some of their roles now have new rights.  If we don't,
then that tool won't be able to work without some additional
twiddling.  I just find it incredibly unlikely that every monitoring
tool, or every backup tool, or every admin tool will require exactly
the same set of permissions.

> I can see similar bits of documentation being included in pgAdmin or
> other tools.

...and pgAdmin is a particularly good example.  The permissions that
pgAdmin requires depend on what you want to do with it, and it does a
lot of things, and it might do more or fewer things in the future.
You can't even fairly assume that everyone wants to give the same
permissions to pgAdmin, let alone that some competing tool will
require the same set.

>> Being narrowly tied to a specific function, it's just a suboptimal spelling 
>> of
>> GRANT.  The gap in GRANT has distorted the design for these predefined roles.
>> I do not anticipate a sound design discussion about specific predefined roles
>> so long as the state of GRANT clouds the matter.
>
> I'm loathe to encourage any direct modification of catalog objects,
> even if it's just ACLs.  I've seen too many cases, as I imagine others
> have, of users destroying their databases or running into unexpected
> results when modifying the catalog.  The catalog modifications supported
> should be explicitly provided through other means rather than direct
> commands against the catalog objects.  I see the default roles approach
> as being similar to having:
>
> ALTER DATABASE db WITH CONNECTION LIMIT 5;
>
> instead of suggesting users issue:
>
> UPDATE DATABASE SET datconnlimit = 5 WHERE datname = 'db';

This doesn't make any sense to me.  Nobody was proposing issuing an
UPDATE against pg_database directly (and it would have to be
pg_database, not just database as you wrote here).  We're talking
about whether the user is going to GRANT pg_rotate_logfile TO ... or
GRANT EXECUTE ON FUNCTION pg_rotate_logfile() TO ... which is not the
same sort of thing at all.

>> What's this problem with syscache?  It sounds important.
>
> CREATE TABLE and DROP TABLE aren't going to care one bit if you have
> access to pg_class or not.  The same goes for basically everything else.
>
> If we really want to support ACLs on the catalog, we'd have to either
> caveat that none of the internal lookups will respect them or revamp
> SysCache and any other direct catalog access to do 

Re: [HACKERS] [PATCH] we have added support for box type in SP-GiST index

2016-01-04 Thread Alvaro Herrera
CC'ing Teodor because he's author of one of the patches.

Alexander Lebedev wrote:
> Hello, Hacker.

So, can we have a more thorough explanation of what is the point of this
patch?  If it is supposed to improve the performance of searching for
boxes, can we see measurements from some benchmark?  Do the patches
still apply, or do you need to rebase?  If so, please post an updated
version.

I'm setting this patch as Waiting on Author.

-- 
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] bootstrap pg_shseclabel in relcache initialization

2016-01-04 Thread Alvaro Herrera
Adam Brightwell wrote:

> While working on an auth hook, I found that I was unable to access the
> pg_shseclabel system table while processing the hook.
[ ... ]
> Given that the shared relations currently exposed can also have
> security labels that can be used for auth purposes, I believe it makes
> sense to make those available as well.  I have attached a patch that
> adds this functionality for review/discussion.  If this functionality
> makes sense I'll add it to the commitfest.

So this looks like a bugfix that we should backpatch, but on closer
inspection it turns out that we need the rowtype OID to be fixed, which
it isn't unless this:

> -CATALOG(pg_shseclabel,3592) BKI_SHARED_RELATION BKI_WITHOUT_OIDS
> +CATALOG(pg_shseclabel,3592) BKI_SHARED_RELATION BKI_ROWTYPE_OID(4066) 
> BKI_WITHOUT_OIDS BKI_SCHEMA_MACRO

so I'm afraid this cannot be backpatched at all; if we did, the rowtype
wouldn't match for already-initdb'd installations.

I'm gonna push this to master only, which means you won't be able to
rely on pg_shseclabel until 9.6.

-- 
Álvaro Herrera    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] custom function for converting human readable sizes to bytes

2016-01-04 Thread Pavel Stehule
2016-01-04 21:29 GMT+01:00 Robert Haas :

> On Mon, Jan 4, 2016 at 12:17 PM, Shulgin, Oleksandr
>  wrote:
> >> I'm also kind of wondering what the intended use case for this
> >> function is.  Why do we want it?  Do we want it?
> >
> > As suggested above a usecase could be like the following:
> >
> > SELECT relname FROM pg_class WHERE pg_relation_size(oid) >
> > pg_size_bytes('100 GB');
> >
> > I think it's neat and useful.
>
> But you could also write SELECT relname FROM pg_class WHERE
> pg_relation_size(oid) > 100 * 1024^3, which is actually fewer
> characters.  Maybe pg_size_bytes('100 GB') is easier for some people
> to remember than 100 * 1024^3, but I'm probably not one of those
> people.
>

I am really one of those who has no memory for numbers - "100GB" is much
more readable for me. It is cleaner in more complex maintenance queries. And
if you need short syntax, you can write your own SQL function

CREATE OR REPLACE FUNCTION size(text) RETURNS bigint AS $$ SELECT
pg_size_bytes($1) $$ LANGUAGE sql;

then ... pg_relation_size(oid) > size('100GB')

Regards

Pavel


>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>
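
As an aside for readers of this thread: the 2^10-based conversion being discussed can be modeled with a short Python sketch (the function name and unit table here are illustrative assumptions, not the proposed C implementation):

```python
# Illustrative model of pg_size_bytes-style parsing: binary (2^10) units
# up to TB, matching the GUC memory-unit convention discussed in the thread.
UNITS = {"kB": 1024, "MB": 1024**2, "GB": 1024**3, "TB": 1024**4}

def size_bytes(text):
    s = text.strip()
    # Split the leading number from the optional unit suffix.
    i = 0
    while i < len(s) and (s[i].isdigit() or s[i] == '.'):
        i += 1
    num, unit = s[:i], s[i:].strip()
    if not num:
        raise ValueError("invalid size: %r" % text)
    factor = 1 if unit == "" else UNITS.get(unit)
    if factor is None:
        raise ValueError("invalid unit %r; valid units: kB, MB, GB, TB" % unit)
    return int(float(num) * factor)

print(size_bytes("100 GB"))  # 107374182400, i.e. 100 * 1024**3
```

With this convention, pg_size_bytes('100 GB') and 100 * 1024^3 denote the same quantity, which is the equivalence the thread is debating.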


[HACKERS] Beginner hacker item: Fix to_reg*() input type

2016-01-04 Thread Jim Nasby
All the to_reg* functions in src/backend/utils/adt/regproc.c currently 
accept a cstring. Per [1], they should accept text. This should be a 
fairly simple change to make:


- Modify the functions in regproc.c. Take a look at how other text input 
functions work to see what needs to happen here (you'll want to use 
text_to_cstring() as part of that.)


- Modify the appropriate entries in src/include/catalog/pg_proc.h

[1] http://www.postgresql.org/message-id/29618.1451882...@sss.pgh.pa.us. 
Note that I mistakenly referred to reg*in functions in that email; it's 
the to_reg* functions that need to change.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] parallel joins, and better parallel explain

2016-01-04 Thread Robert Haas
On Mon, Jan 4, 2016 at 4:50 AM, Dilip Kumar  wrote:
> I tried to create an inner table such that the inner table's data doesn't
> fit in RAM (I created a VM with 1GB of RAM).
> The purpose of this is to make the disk scan dominant;
> since the parallel join repeats the disk scan and hash table building
> of the inner table, there will be a lot of parallel I/O and it has to pay
> that penalty.
>
> I think even though the inner table scanning and hash table building are
> parallel, there will be a lot of parallel I/O, which will become the
> bottleneck.

Hmm.  Because only 1/1024th of the hash table fits in work_mem, the
executor is going to have to write out all of the tuples that don't
belong to the first batch to a temporary file and then read them back
in.  So each backend is going to write essentially the entirety of t2
out to disk and then read it all back in again. The non-parallel case
will also write most of the table contents and then read them back in,
but at least it will only be doing that once rather than 7 times, so
it's not as bad.  Also, with fewer backends running, the non-parallel
case will have a bit more memory free for caching purposes.
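
Robert's "1/1024th of the hash table fits in work_mem" arithmetic can be sketched roughly as follows (Python, with purely illustrative table and work_mem sizes; PostgreSQL's real batch estimation in ExecChooseHashTableSize is more involved):

```python
# Rough model of the extra temp-file I/O in the multi-batch parallel hash
# join described above; the sizes are illustrative assumptions.
inner_table_bytes = 3 * 1024**3   # assume the inner table is ~3 GB on disk
work_mem_bytes    = 4 * 1024**2   # work_mem = 4MB (the default)

# Batches grow in powers of two until one batch's hash table fits in work_mem.
batches = 1
while batches * work_mem_bytes < inner_table_bytes:
    batches *= 2
print(batches)  # 1024 batches for this sizing

# With B batches, tuples outside the first batch are written to a temp file
# and read back once: roughly 2 * (B-1)/B of the inner table in temp I/O.
temp_io_one_backend = 2 * inner_table_bytes * (batches - 1) / batches

# Each participating backend repeats the inner scan and batching, so total
# temp traffic scales with the number of backends.
backends = 7  # 1 leader + 6 workers
print(temp_io_one_backend * backends / 1024**3)  # ~42 GB of temp I/O
```

The point is that each additional participating backend repeats the whole write-then-read cycle, so temp I/O grows roughly linearly with the worker count, while the non-parallel plan pays that cost only once.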

> Do we need to consider the cost of parallel I/O as well? I am not sure we
> can really do that.

It seems to me that the problem here is that you've set
max_parallel_degree to an unrealistically high value.  The query
planner is entitled to assume that the user has set
max_parallel_degree to a value which is small enough that the workers
won't be fighting too viciously with each other over resources.  It
doesn't really matter whether those resources are CPU resources or I/O
resources.  I'm wondering if your 1GB VM really even has as many as 7
vCPUs, because that would seem to be something of an unusual
configuration - and if it doesn't, then setting max_parallel_degree to
a value that high is certainly user error. Even if it does, it's still
not right to set the value as high as six unless the system also has
enough I/O bandwidth to accommodate the amount of I/O that you expect
your queries to generate, and here it seems like it probably doesn't.

To put that another way, you can always make parallel query perform
badly by telling it to use too many workers relative to the size of
the machine you have. This is no different than getting bad query
plans by configuring work_mem or effective_cache_size or any other
query planner GUC to a value that doesn't reflect the actual execution
environment.  I would only consider this to be a problem with the
parallel join patch if the chosen plan is slower even on a machine
that's big enough to justify setting max_parallel_degree=6 in the
first place.

While studying this example, I thought about whether we try to fix
this case by generating a partial hash join path only if we expect a
single batch, which would then cause the query planner to plan this
query some other way.  But after some thought I don't think that's the
right approach.  Multi-batch hash joins are in general quite a lot
slower than single-batch hash joins - and initial_cost_hashjoin knows
that - but if the system has adequate I/O bandwidth, that problem
shouldn't be any worse for a parallel hash join than it is for a
non-parallel hash join.  I think the reason you're losing here is
because the system either doesn't have as many vCPUs as the number of
worker processes you are giving it, or it has a very limited amount of
I/O bandwidth that can't handle multiple processes doing sequential
I/O at the same time - e.g. a single spinning disk, or a single SSD
plus a bunch of virtualization overhead.  But that need not be the
case.  On a system where temporary files are written to a filesystem
backend by an array of disks, you might well get some I/O parallelism.
Of course if we experiment and find that doesn't work out well for
some reason, then we've got a problem, but it doesn't seem implausible
that it might be just fine.

Another interesting question about this particular query is whether a
merge join would have been faster, especially given all Peter
Geoghegan's work to improve sort performance.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] ALTER TABLE behind-the-scenes effects' CONTEXT

2016-01-04 Thread Alvaro Herrera
Pavel Stehule wrote:
> 2015-10-05 0:08 GMT+02:00 Marko Tiikkaja :
> 
> > In the past I've found the error message in cases such as this somewhat
> > less helpful than it could be:
> >
> > =# CREATE TABLE qqq (a int);
> > =# CREATE UNIQUE INDEX IF NOT EXISTS qqq_a_idx ON qqq(a);
> > =# ALTER TABLE qqq ALTER COLUMN a TYPE json USING NULL;
> > ERROR:  data type json has no default operator class for access method
> > "btree"
> > HINT:  You must specify an operator class for the index or define a
> > default operator class for the data type.
> >
> > The attached patch adds a CONTEXT line to index and constraint rebuilds,
> > e.g:
> >
> >   CONTEXT:  while rebuilding index qqq_a_idx
> 
> I prefer using DETAIL field for this case.

I think DETAIL doesn't work for this; the advantage of CONTEXT is that
it can be set not at the location of the actual error but at the calling
code, by setting up a callback.  I think Marko got that right.

I'd add some quoting to the object name in the message, and make sure we
don't crash if the name isn't set up (i.e. test for nullness).  But
also, why do we set the name only in ATPostAlterTypeParse()?  Maybe
there are more ALTER TABLE subcommands that should be setting something
up?  In cases where multiple subcommands are being run, it might be
useful to see which one caused a certain error message.

I think some additional tests wouldn't hurt.

I await feedback from Simon Riggs, who set himself up as reviewer a
couple of days ago.  Simon, do you also intend to be committer?  If so,
please mark yourself as such in the commitfest app.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] custom function for converting human readable sizes to bytes

2016-01-04 Thread Robert Haas
On Mon, Jan 4, 2016 at 3:48 PM, Alvaro Herrera  wrote:
> Robert Haas wrote:
>
>> But you could also write SELECT relname FROM pg_class WHERE
>> pg_relation_size(oid) > 100 * 1024^3, which is actually fewer
>> characters.  Maybe pg_size_bytes('100 GB') is easier for some people
>> to remember than 100 * 1024^3, but I'm probably not one of those
>> people.
>
> Nah, that might work for geek types, but I doubt it's the preferred
> spelling for most people.  I think the proposal is quite reasonable.
>
> If we were only catering for people who can do 2^10 arithmetic off the
> top of their heads, we wouldn't have pg_size_pretty at all, would we?

Well, I don't know what 100 * 1024^3 is off the top of my head, but I
know that I can compute it by typing exactly that.  So I think one
direction is easier than the other.  However, IJWH.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] custom function for converting human readable sizes to bytes

2016-01-04 Thread Robert Haas
On Mon, Jan 4, 2016 at 12:17 PM, Shulgin, Oleksandr
 wrote:
>> I'm also kind of wondering what the intended use case for this
>> function is.  Why do we want it?  Do we want it?
>
> As suggested above a usecase could be like the following:
>
> SELECT relname FROM pg_class WHERE pg_relation_size(oid) >
> pg_size_bytes('100 GB');
>
> I think it's neat and useful.

But you could also write SELECT relname FROM pg_class WHERE
pg_relation_size(oid) > 100 * 1024^3, which is actually fewer
characters.  Maybe pg_size_bytes('100 GB') is easier for some people
to remember than 100 * 1024^3, but I'm probably not one of those
people.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Extracting fields from 'infinity'::TIMESTAMP[TZ]

2016-01-04 Thread Alvaro Herrera
Vitaly Burovoy wrote:

> The majority of the votes are for NULL for "other things", except epoch.
> Nobody has answered about the difference between monotonic and oscillating values.
> 
> I suppose behavior of monotonic values (julian, century, decade,
> isoyear, millennium and year) should be the same as for epoch (which
> obviously also monotonic value).
> Proposed patch has that behavior: +/-infinity for epoch, julian,
> century, decade, isoyear, millennium and year; NULL for other fields.
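
The proposed split between monotonic and oscillating fields can be modeled like this (a Python sketch of the intended semantics only, not the patch itself):

```python
import math

# Fields that grow without bound as the timestamp goes to infinity
# propagate the infinity; cyclic fields (month, day, hour, ...) have no
# meaningful limit and yield NULL (modeled as None here).
MONOTONIC = {"epoch", "julian", "century", "decade",
             "isoyear", "millennium", "year"}

def extract_from_infinity(field, positive=True):
    if field in MONOTONIC:
        return math.inf if positive else -math.inf
    return None  # models SQL NULL for oscillating fields

print(extract_from_infinity("epoch"))        # inf
print(extract_from_infinity("year", False))  # -inf
print(extract_from_infinity("month"))        # None
```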

It seems we got majority approval on the design of this patch, and no
disagreement; the last submitted version appears to implement that.
There's no documentation change in the patch though.  I'm marking it as
Waiting on Author; please resubmit with necessary doc changes.

Thanks,

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Broken lock management in policy.c.

2016-01-04 Thread Tom Lane
I wrote:
> I'll go draft something up ...

I've pushed an example based on your original test case.  Feel free
to suggest improvements, although at this point they'll probably
land in 9.5.1.

regards, tom lane




Re: [HACKERS] count_nulls(VARIADIC "any")

2016-01-04 Thread Pavel Stehule
Hi

2016-01-04 5:49 GMT+01:00 Jim Nasby :

> On 1/3/16 10:23 PM, Pavel Stehule wrote:
>
>> Hi
>>
>> 2016-01-03 22:49 GMT+01:00 Jim Nasby > >:
>>
>> On 1/3/16 2:37 PM, Pavel Stehule wrote:
>>
>> +   /* num_nulls(VARIADIC NULL) is defined as NULL */
>> +   if (PG_ARGISNULL(0))
>> +   return false;
>>
>>
>> Could you add to the comment explaining why that's the desired
>> behavior?
>>
>>
>> This case should be different from num_nulls(VARIADIC ARRAY[..]) - this
>> situation is really equivalent to missing data, and NULL is the correct
>> answer. It may not be so clear for num_nulls, but it is cleaner
>> for num_notnulls. Moreover, it is consistent with other variadic
>> functions in Postgres: see concat_internal and text_format.
>>
>
> Makes sense, now that you explain it. Which is why I'm thinking it'd be
> good to add that explanation to the comment... ;)
>
>
>> Assert(OidIsValid(get_base_element_type(get_fn_expr_argtype(fcinfo->flinfo,
>> 0;
>>
>>
>> Erm... is that really the way to verify that what you have is an
>> array? ISTM there should be a macro for that somewhere...
>>
>>
>> Really, it is; it is used in several places. Although I am not against some
>> macro, I don't think it is necessary. The macro would not be much
>> shorter than this text.
>>
>
> Well, if there's other stuff doing that... would be nice to refactor that
> though.
>
> For brevity and example sake it'd probably be better to just use the
>> normal iterator, unless there's a serious speed difference?
>>
>>
>> The iterator does some memory allocations and some type-cache accesses.
>> Almost all of the iterator's work is useless for this case. This code was
>> developed by Marko, but I agree with this design. Using the iterator is a
>> big gun for this case. I didn't do any performance checks, but it should be
>> measurable for any varlena arrays.
>>
>
> Makes sense then.
>
>
+ enhanced comment
+ rewritten regress tests

Regards

Pavel


> --
> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
> Experts in Analytics, Data Architecture and PostgreSQL
> Data in Trouble? Get it in Treble! http://BlueTreble.com
>
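
The semantics agreed on above - a NULL result when the variadic array itself is NULL, otherwise a plain count over the elements - can be modeled in a few lines (a Python sketch for illustration, not the C code under review):

```python
def num_nulls(args):
    """args is None when the whole VARIADIC array is NULL."""
    if args is None:
        return None  # num_nulls(VARIADIC NULL) is defined as NULL
    return sum(1 for a in args if a is None)

def num_notnulls(args):
    if args is None:
        return None
    return sum(1 for a in args if a is not None)

print(num_nulls([1, None, 2]))     # 1
print(num_notnulls([1, None, 2]))  # 2
print(num_nulls(None))             # None
```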
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
new file mode 100644
index 8ef9fce..fd7890e
*** a/doc/src/sgml/func.sgml
--- b/doc/src/sgml/func.sgml
***
*** 182,188 

  

!Comparison Operators
  
 
  comparison
--- 182,188 

  

!Comparison Functions and Operators
  
 
  comparison
***
*** 191,200 
  
 
  The usual comparison operators are available, shown in .
 
  
!
  Comparison Operators
  
   
--- 191,200 
  
 
  The usual comparison operators are available, shown in .
 
  
!
  Comparison Operators
  
   
***
*** 437,442 
--- 437,479 
 
  -->
  
+   
+ Comparison Functions
+ 
+  
+   
+Function
+Description
+Example
+Example Result
+   
+  
+  
+   
+
+  
+   num_notnulls
+  
+  num_notnulls(VARIADIC "any")
+
+Returns the number of non-NULL input arguments
+num_notnulls(1, NULL, 2)
+2
+   
+   
+
+  
+   num_nulls
+  
+  num_nulls(VARIADIC "any")
+
+Returns the number of NULL input arguments
+num_nulls(1, NULL, 2)
+1
+   
+  
+ 
+

  

*** table2-mapping
*** 10307,10313 


 The standard comparison operators shown in   are available for
 jsonb, but not for json. They follow the
 ordering rules for B-tree operations outlined at .
--- 10344,10350 


 The standard comparison operators shown in   are available for
 jsonb, but not for json. They follow the
 ordering rules for B-tree operations outlined at .
diff --git a/src/backend/utils/adt/misc.c b/src/backend/utils/adt/misc.c
new file mode 100644
index 6a306f3..35810d1
*** a/src/backend/utils/adt/misc.c
--- b/src/backend/utils/adt/misc.c
***
*** 43,48 
--- 43,160 
  
  #define atooid(x)  ((Oid) strtoul((x), NULL, 10))
  
+ /*
+  * Collect info about NULL arguments. Returns true when result values
+  * are valid.
+  */
+ static bool
+ count_nulls(FunctionCallInfo fcinfo,
+ int32 *nargs, int32 *nulls)
+ {
+ 	int32 count = 0;
+ 	int i;
+ 
+ 	if (get_fn_expr_variadic(fcinfo->flinfo))
+ 	{
+ 		ArrayType  *arr;
+ 		int ndims, nitems, *dims;
+ 		bits8 *bitmap;
+ 		int bitmask;
+ 
+ 		/*
+ 		 * When the parameter with the packed variadic arguments is NULL, we
+ 		 * cannot identify the number of variadic arguments (NULL
+ 		 * or not NULL), so the correct result is NULL. This behavior
+ 		 * is consistent with other variadic functions.

Re: [HACKERS] Some 9.5beta2 backend processes not terminating properly?

2016-01-04 Thread Andres Freund
On 2016-01-03 13:57:18 -0500, Tom Lane wrote:
> Done, we'll soon see what the buildfarm thinks.

Thanks.

I wonder if we ought to backport this further: e.g. walsender
continuously uses nonblocking sockets via pq_getbyte_if_available(). On
the other hand I can't immediately see a problem with that, besides
differing messages on windows/the rest of the world.

Andres




Re: [HACKERS] parallel joins, and better parallel explain

2016-01-04 Thread Dilip Kumar
On Thu, Dec 24, 2015 at 4:45 AM, Robert Haas  wrote:

> On Wed, Dec 23, 2015 at 2:34 AM, Dilip Kumar 
> wrote:
> > Yeah right, After applying all three patches this problem is fixed, now
> > parallel hash join is faster than normal hash join.
> >
> > I have tested one more case which Amit mentioned, I can see in that case
> > parallel plan (parallel degree>= 3) is still slow, In Normal case it
> selects
> > "Hash Join" but in case of parallel worker > 3 it selects Parallel "Nest
> > Loop Join" which is making it costlier.
>
> While investigating this problem, I discovered that I can produce a
> regression even on unpatched master:
>
Yeah, right..


> But this is not entirely the fault of the parallel query code.  If you
> force a seqscan-over-seqscan plan in the non-parallel cast, it
> estimates the join cost as 287772.00, only slightly more than the
> 261522.02 cost units it thinks a non-parallel hash join will cost.  In
> fact, however, the non-parallel hash join runs in 1.2 seconds and the
> non-parallel nested loop takes 46 seconds.
>

right..


>
> Updated patch attached.
>
>
Thanks for the updated patch...

I have found a regression in one more scenario, in hash join.

Scenario:
--
I tried to create an inner table such that the inner table's data doesn't
fit in RAM (I created a VM with 1GB of RAM).
The purpose of this is to make the disk scan dominant;
since the parallel join repeats the disk scan and hash table building
of the inner table, there will be a lot of parallel I/O and it has to pay
that penalty.

I think even though the inner table scanning and hash table building are
parallel, there will be a lot of parallel I/O, which will become the
bottleneck.

Do we need to consider the cost of parallel I/O as well? I am not sure we
can really do that.

Note: I have tested with both a 1GB RAM machine and an 8GB RAM machine.
The regression on the 8GB machine is smaller than on the 1GB machine.


create table t1 (c1 int, c2 int, c3 text);

create table t2 (c1 int, c2 int, c3 text);

insert into t1 values(generate_series(1,1),
generate_series(1,1), repeat('x', 100));

insert into t2 values(generate_series(1,4800),
generate_series(1,4800), repeat('x', 5));

analyze t1;

analyze t2;

Test with: 1GB RAM

-

postgres=# set max_parallel_degree=0;

SET

postgres=# explain analyze SELECT count(*) FROM t1 JOIN t2 ON t1.c1 = t2.c1
AND t2.c2 + t1.c1 > 100;

                                                         QUERY PLAN
---------------------------------------------------------------------------------------------------------------------

 Aggregate  (cost=12248485.55..12248485.56 rows=1 width=0) (actual
time=147490.455..147490.455 rows=1 loops=1)

   ->  Hash Join  (cost=1526963.25..12208485.47 rows=1633 width=0)
(actual time=26652.871..143368.989 rows=4750 loops=1)

 Hash Cond: (t1.c1 = t2.c1)

 Join Filter: ((t2.c2 + t1.c1) > 100)

 Rows Removed by Join Filter: 50

 ->  Seq Scan on t1  (cost=0.00..2742412.72 rows=15072 width=4)
(actual time=130.580..40127.004 rows=1 loops=1)

 ->  Hash  (cost=739461.00..739461.00 rows=48000100 width=8)
(actual time=26500.439..26500.439 rows=4800 loops=1)

   Buckets: 131072  Batches: 1024  Memory Usage: 2856kB

   ->  Seq Scan on t2  (cost=0.00..739461.00 rows=48000100
width=8) (actual time=0.039..11402.343 rows=4800 loops=1)

 Planning time: 0.410 ms

 Execution time: 147490.553 ms

(11 rows)

postgres=# set max_parallel_degree=6;

SET

postgres=# explain analyze SELECT count(*) FROM t1 JOIN t2 ON t1.c1 = t2.c1
AND t2.c2 + t1.c1 > 100;

                                                         QUERY PLAN
---------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=4969933.98..4969933.99 rows=1 width=0) (actual
time=386024.487..386024.488 rows=1 loops=1)

   ->  Gather  (cost=1527963.25..4929933.89 rows=1633 width=0) (actual
time=199190.138..379487.861 rows=4750 loops=1)

 Number of Workers: 6

 ->  Hash Join  (cost=1526963.25..3328930.59 rows=1633 width=0)
(actual time=178885.161..320724.381 rows=6857136 loops=7)

   Hash Cond: (t1.c1 = t2.c1)

   Join Filter: ((t2.c2 + t1.c1) > 100)

   Rows Removed by Join Filter: 7

   ->  Parallel Seq Scan on t1  (cost=0.00..421909.65
rows=15385396 width=4) (actual time=106.403..11735.643 rows=14285714
loops=7)

   ->  Hash  (cost=739461.00..739461.00 rows=48000100 width=8)
(actual time=177959.433..177959.433 rows=4800 loops=7)

 Buckets: 131072  Batches: 1024  Memory Usage: 2856kB

 ->  Seq Scan on t2  (cost=0.00..739461.00
rows=48000100 width=8) (actual time=0.022..20778.693 rows=4800 loops=7)

 Planning time: 0.372 

Re: [HACKERS] Multi-tenancy with RLS

2016-01-04 Thread Haribabu Kommi
On Mon, Jan 4, 2016 at 8:34 PM, Amit Langote
 wrote:
> On 2016/01/04 14:43, Haribabu Kommi wrote:
>>>
>>> Here I attached new series of patches with a slightly different approach.
>>> Instead of creating the policies on the system catalog tables whenever
>>> the catalog security command is executed, just enable row level security
>>> on the system catalog tables. During the relation build, in
>>> RelationBuildRowSecurity function, if it is a system relation, frame the
>>> policy using the policy query which we earlier used to create by parsing it.
>>>
>>> With the above approach, in case of any problems in the policy, to use
>>> the corrected policy, user just needs to replace the binaries. whereas in
>>> earlier approach, either pg_upgrade or disabling and enabling of catalog
>>> security is required.
>>>
>>> Currently it is changed only for shared system catalog tables and also the
>>> way of enabling catalog security on shared system catalog tables is through
>>> initdb only. This also can be changed later. I will do similar changes for
>>> remaining catalog tables.
>>>
>>> Any comments on the approach?
>>
>> Instead of creating policies during the "alter database" command for database
>> catalog tables, generating them at relation build time leads to an
>> infinite recursion loop because of the transformExpr call for the qual.
>> Any ideas on how to handle this?
>
> I tried your latest patch to see what may have caused the infinite
> recursion. The recursion occurs during backend startup itself, right?
>
> ISTM, doing transformWhereClause during RelationCacheInitializePhase3()
> would not work. Things like operators, functions within the policy qual
> require namespace lookup which down the line would call
> RelationBuildRowSecurity for pg_namespace build and so on thus causing the
> infinite recursion. Perhaps, it would have to be done in a separate phase
> after the phase 3 but I'm not sure.

Thanks for the test. Yes, the issue happens at backend startup itself.
I will try separating the initialization of security policies out to
after init phase 3.

> I wonder why the policy quals need to be built from scratch every time
> in RelationBuildRowSecurity for system tables (shared or otherwise)? Why
> not just read from the pg_policy catalog like regular relations if ALTER
> DATABASE CATALOG SECURITY TRUE already created those entries? Maybe I'm
> missing something though.

Yes, creating the policies up front and using them every time a relation
is built works as long as no problem is found in the policies. The
row-level security policies on catalog tables are created automatically
when the user enables catalog security, so the user doesn't have any
control over these policies.

If we later find a problem in these policies and want to correct them,
for shared system catalog tables the user needs to run pg_upgrade, and
for the other catalog tables the user needs to disable and re-enable
catalog security on all databases.

Instead, if we always build the policy at run time for catalog tables,
the user can just replace the binaries with the latest ones. Currently
this works fine for shared system catalog tables. I will try separating
RelationBuildRowSecurity from init phase 3.

Regards,
Hari Babu
Fujitsu Australia




Re: [HACKERS] \x auto and EXPLAIN

2016-01-04 Thread Shulgin, Oleksandr
On Sun, Jan 3, 2016 at 6:43 PM, Tom Lane  wrote:

> Andreas Karlsson  writes:
> > psql's "\x auto" is a nice feature, but it is made much less useful in
> > my opinion due to the expanded output format making query plans
> > unreadable (and query plans often end up using expanded display due to
> > their width). I think we should never use the expanded format for
> > EXPLAIN output in the "\x auto" mode, since even when the wrapped format
> > is used the query plans are very hard to read.
>
> > I see two ways to fix this.
>
> > 1) Never use expanded display for the case where there is only one
> > column. There seems to me like there is little value in using expanded
> > display for when you only have one column, but I may be missing some use
> > case here.
>
> > 2) Explicitly detect (for example based on the headers) that the result
> > is a query plan and if so disable expanded display.
>
> The second of these seems pretty bletcherous --- for one thing, it might
> fall foul of localization attempts.  However, I could see the argument
> for not using expanded mode for any single-column output.
>

+1 to option #1, I sympathize to this as an annoyance that can be easily
fixed.

--
Alex


Re: [HACKERS] Proposal: SET ROLE hook

2016-01-04 Thread Pavel Stehule
Hi

2015-10-16 18:20 GMT+02:00 Joe Conway :

> In many environments there is a policy requiring users to login using
> unprivileged accounts, and then escalate their privileges if and when
> they require it. In PostgreSQL this could be done by granting the
> superuser role to an unprivileged user with noinherit, and then making
> the superuser role nologin. Something like:
>
> 8<-
> psql -U postgres
> create user joe noinherit;
> grant postgres to joe;
> alter user postgres nologin;
> \q
> psql -U joe
> -- do stuff *not requiring* escalated privs
> set role postgres;
> -- do stuff *requiring* escalated privs
> reset role;
> 8<-
>
> One of the problems with this is we would ideally like to know whenever
> joe escalates himself to postgres. Right now that is not really possible
> without doing untenable things such as logging every statement.
>
> In order to address this issue, I am proposing a new SET ROLE hook. The
> attached patch (role-esc-hook.diff) is I believe all we need. Then
> extension authors could implement logging of privilege escalation.
>
> A proof of concept extension patch is also attached. That one is not
> meant to be applied, just illustrates one potential use of the hook. I
> just smashed it on top of passwordcheck for the sake of convenience.
>
> With both patches applied, the following scenario:
> 8<
> psql -U joe postgres
> psql (9.6devel)
> Type "help" for help.
>
> postgres=> set role postgres;
> SET
> postgres=# select rolname, rolpassword from pg_authid;
>  rolname  | rolpassword
> ----------+-------------
>  joe  |
>  postgres |
> (2 rows)
>
> postgres=# set log_statement = none;
> SET
> postgres=# reset role;
> RESET
> 8<
>
> Generates the following in the log:
>
> 8<
> LOG:  Role joe transitioning to Superuser Role postgres
> STATEMENT:  set role postgres;
> LOG:  statement: select rolname, rolpassword from pg_authid;
> LOG:  statement: set log_statement = none;
> LOG:  Superuser Role postgres transitioning to Role joe
> STATEMENT:  reset role;
> 8<
>
> Note that we cannot prevent joe from executing
>   set log_statement = none;
> but we at least get the evidence in the log and can ask joe why he felt
> the need to do that. We could also set up alerts based on the logged
> events, etc.
>
> This particular hook will not capture role changes due to SECURITY
> DEFINER functions, but I think that is not only ok but preferred.
> Escalation during a SECURITY DEFINER function is a preplanned sanctioned
> event, unlike an ad hoc unconstrained role change to superuser. And
> given the demo patch, we could see any SECURITY DEFINER function created
> by the superuser when it gets created in the log (which again is subject
> to auditing, alerts, etc.)
>
> Ultimately I would also like to see a general hook available which would
> fire for all changes to GUC settings, but I think this one is still very
> useful as it is highly targeted.
>
> Comments?
>

I read the discussion in this thread and I think creating a hook is the
simplest workable solution.

Personally, I am not sure whether skipping SECURITY DEFINER functions is OK.
When you do a security audit, probably any role change can be interesting.
This situation could be distinguished by another hook parameter.

Support for event triggers can be another topic; nice to have, but not
necessary at first.

Regards

Pavel


>
> Thanks,
>
> Joe
>
> --
> Crunchy Data - http://crunchydata.com
> PostgreSQL Support for Secure Enterprises
> Consulting, Training, & Open Source Development
>
>


[HACKERS] Bug in MergeAttributesIntoExisting() function.

2016-01-04 Thread amul sul
Hi,

In inheritance, a child column's pg_attribute.attislocal flag is not getting
updated if it is inherited using ALTER TABLE  INHERIT .

Due to this, if we drop column(s) from the parent table, they are not
dropped from the child.
Attached is a quick patch fixing this issue.


--Demonstration:
--
CREATE TABLE p1 (a int , b int, c int, d int);

CREATE TABLE c1 () INHERITS (p1);
CREATE TABLE c2 (a int, b int, c int, d int);
ALTER TABLE c2 INHERIT p1;


--Drop parent's column
ALTER TABLE p1 DROP COLUMN b;
ALTER TABLE p1 DROP COLUMN c;
ALTER TABLE p1 DROP COLUMN d;


postgres=# \d p1
  Table "public.p1"
 Column |  Type   | Modifiers 
--------+---------+-----------
 a  | integer | 
Number of child tables: 2 (Use \d+ to list them.)

postgres=# \d c1
  Table "public.c1"
 Column |  Type   | Modifiers 
--------+---------+-----------
 a  | integer | 
Inherits: p1

postgres=# \d c2
  Table "public.c2"
 Column |  Type   | Modifiers 
--------+---------+-----------
 a  | integer | 
 b  | integer | 
 c  | integer | 
 d  | integer | 
Inherits: p1


--
You can see the columns are not dropped from the child table c2, which was
made to inherit using the ALTER command.

Regards,
Amul Sul

MergeAttributesIntoExisting_fix.patch
Description: Binary data



Re: [HACKERS] custom function for converting human readable sizes to bytes

2016-01-04 Thread Shulgin, Oleksandr
On Wed, Dec 30, 2015 at 8:28 PM, Pavel Stehule 
wrote:

>
>
> 2015-12-30 17:33 GMT+01:00 Robert Haas :
>
>> On Mon, Dec 28, 2015 at 8:45 AM, Shulgin, Oleksandr
>>  wrote:
>> > I didn't check out earlier versions of this patch, but the latest one
>> still
>> > changes pg_size_pretty() to emit PB suffix.
>> >
>> > I don't think it is worth it to throw a number of changes together like
>> > that.  We should focus on adding pg_size_bytes() first and make it
>> > compatible with both pg_size_pretty() and existing GUC units: that is
>> > support suffixes up to TB and make sure they have the meaning of powers
>> of
>> > 2^10, not 10^3.  Re-using the table present in guc.c would be a plus.
>> >
>> > Next, we could think about adding handling of PB suffix on input and
>> output,
>> > but I don't see a big problem if that is emitted as 1024TB or the user
>> has
>> > to specify it as 1024TB in a GUC or argument to pg_size_bytes(): an
>> minor
>> > inconvenience only.
>>
>> +1 to everything in this email.
>>
>
> so I removed support for PB and SI units. Now the
> memory_unit_conversion_table is shared.
>

Looks better, thanks.

I'm not sure why the need to touch the regression test for pg_size_pretty():

!10.5 | 10.5 bytes | -10.5 bytes
!  1000.5 | 1000.5 bytes   | -1000.5 bytes
!   1000000.5 | 977 kB | -977 kB
!1000000000.5 | 954 MB | -954 MB
! 1000000000000.5 | 931 GB | -931 GB
!  1000000000000000.5 | 909 TB | -909 TB

A nitpick, this loop:

+ while (*cp)
+ {
+ if ((isdigit(*cp) || *cp == '.') && ndigits < MAX_DIGITS)
+ digits[ndigits++] = *cp++;
+ else
+ break;
+ }

would be a bit easier to parse if spelled as:

+ while (*cp && (isdigit(*cp) || *cp == '.') && ndigits < MAX_DIGITS)
+ digits[ndigits++] = *cp++;

On the other hand, this seems to truncate the digits silently:

+ digits[ndigits] = '\0';

I don't think we want that, e.g:

postgres=# select pg_size_bytes('9223372036854775807.9');
ERROR:  invalid unit "9"
HINT:  Valid units for this parameter are "kB", "MB", "GB", and "TB".

I think making a mutable copy of the input string and truncating it before
passing to numeric_in() would make more sense--no need to hard-code
MAX_DIGITS.  The same goes for hard-coding MAX_UNIT_LEN, e.g. compare the
following two outputs:

postgres=# select pg_size_bytes('1 KiB');
ERROR:  invalid unit "KiB"
HINT:  Valid units for this parameter are "kB", "MB", "GB", and "TB".

postgres=# select pg_size_bytes('1024 bytes');
ERROR:  invalid format

I believe we should see a similar error message and a hint in the latter
case.  (No, I don't think we should add support for 'bytes' as a unit, not
even for "compatibility" with pg_size_pretty()--for one, I don't think it
would be wise to expect pg_size_bytes() to be able to deparse *every*
possible output produced by pg_size_pretty(), as its purpose is
human-readable display; also, pg_size_pretty() can easily produce output
that doesn't fit into bigint type, or is just negative)
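To make the intended contract concrete, here is a sketch of the behaviour
being argued for. pg_size_bytes() is the function from the patch under
review, so the exact results below are assumptions about the patched
behaviour, not released functionality:

```sql
-- Units follow the GUC memory-unit table: binary, up to TB.
SELECT pg_size_bytes('1 kB');    -- 1024
SELECT pg_size_bytes('1.5 GB');  -- 1610612736, i.e. 1.5 * 2^30

-- Unknown units should fail with the hint listing valid units:
SELECT pg_size_bytes('1 KiB');   -- ERROR: invalid unit "KiB"

-- PB support was dropped from the patch; spell it as 1024 TB instead:
SELECT pg_size_bytes('1024 TB');
```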

Code comments and doc change need proof-reading by a native English
speaker, which I am not.

--
Alex


Re: [HACKERS] Remove Windows crash dump support?

2016-01-04 Thread Andres Freund
On 2016-01-04 16:39:22 +0900, Michael Paquier wrote:
> So I think that it is still useful for debugging code
> paths running custom code, even if we consider Postgres as rock-solid
> on Windows.

Given the state of e.g. the socket using code for windows I personally
certainly don't consider it rock solid.

Either way, since there were a few people interested in keeping the
support, and nobody really had any arguments besides "seems unused" for
removing it, this seems like a pretty clear cut case.

Andres


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Avoiding pin scan during btree vacuum

2016-01-04 Thread Andres Freund
On 2016-01-03 15:40:01 +, Simon Riggs wrote:
> I'm happy with this being a simple patch now, not least because I would
> like to backpatch this to 9.4 where catalog scans became MVCC.
> 
> A backpatch is warranted because it is a severe performance issue with
> replication and we can fix that before 9.5 hits the streets.
> 
> I'll be doing some more testing and checking, so not in a rush.

This seems like a mighty subtle thing to backpatch. If we really want to
go there, ISTM that the relevant code should stew in an unreleased
branch for a while, before being backpatched.

Andres


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Keyword classifications

2016-01-04 Thread Simon Riggs
On 1 January 2016 at 19:06, Tom Lane  wrote:

> I wrote:
> > Now, one line of thought here is that flatten_reloptions() is out of its
> > mind to not be worrying about quoting the reloption values.  And perhaps
> > it is, but I think if we go that direction, we may be fighting similar
> > fires for awhile to come.  psql's describe.c, for example, doesn't worry
> > about quoting anything when printing reloptions, and there's likely
> > similar code in third-party clients.  Also, a solution like this would
> > do nothing for existing dump files.
>
> > The other line of thought is that we're already making an effort to allow
> > any keyword to appear as the value of a def_arg, and maybe we should try
> > to make that work 100% instead of only 90%.
>
> After further thought I believe that the right thing to do is pursue both
> these lines of attack.  Adding quoting in flatten_reloptions() seems like
> a safely back-patchable fix for the original complaint, and it's really
> necessary anyway for reloption values that don't look like either an
> identifier or a number.  The grammar allows any arbitrary string constant
> to be the original form of a reloption, and we have no good reason to
> assume that extension modules will constrain their custom reloptions to
> be one or the other.  (I'm thinking we'd better be prepared to
> double-quote the option names, too, just in case.)
>
> The grammar fixes seem like a good thing to do in the long run, too,
> but there's little need to risk back-patching it since accepting
> col_name_keywords without quoting would be mostly a convenience issue.


All seems reasonable.

-- 
Simon Riggshttp://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [HACKERS] Multi-tenancy with RLS

2016-01-04 Thread Amit Langote
On 2016/01/04 14:43, Haribabu Kommi wrote:
>>
>> Here I attached new series of patches with a slightly different approach.
>> Instead of creating the policies on the system catalog tables whenever
>> the catalog security command is executed, just enable row level security
>> on the system catalog tables. During the relation build, in
>> RelationBuildRowSecurity function, if it is a system relation, frame the
>> policy by parsing the policy query which we earlier used to create it.
>>
>> With the above approach, in case of any problems in the policy, to use
>> the corrected policy, the user just needs to replace the binaries, whereas in
>> earlier approach, either pg_upgrade or disabling and enabling of catalog
>> security is required.
>>
>> Currently it is changed only for shared system catalog tables and also the
>> way of enabling catalog security on shared system catalog tables is through
>> initdb only. This also can be changed later. I will do similar changes for
>> remaining catalog tables.
>>
>> Any comments on the approach?
> 
> Instead of creating policies during the "alter database" command for database
> catalog tables, generating them at relation build time leads to an infinite
> recursion loop because of the transformExpr call for the qual. Any ideas to
> handle the same?

I tried your latest patch to see what may have caused the infinite
recursion. The recursion occurs during backend startup itself, right?

ISTM, doing transformWhereClause during RelationCacheInitializePhase3()
would not work. Things like operators, functions within the policy qual
require namespace lookup which down the line would call
RelationBuildRowSecurity for pg_namespace build and so on thus causing the
infinite recursion. Perhaps, it would have to be done in a separate phase
after the phase 3 but I'm not sure.

I wonder why do the policy quals need to be built from scratch every time
in RelationBuildRowSecurity for system tables (shared or otherwise)? Why
not just read from the pg_policy catalog like regular relations if ALTER
DATABASE CATALOG SECURITY TRUE already created those entries? Maybe I'm
missing something though.

Thanks,
Amit




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: Covering + unique indexes.

2016-01-04 Thread David Rowley
On 2 December 2015 at 01:53, Anastasia Lubennikova <
a.lubennik...@postgrespro.ru> wrote:

> Finally, completed patch "covering_unique_3.0.patch" is here.
> It includes the functionality discussed above in the thread, regression
> tests and docs update.
> I think it's quite ready for review.
>

Hi Anastasia,

I've maybe mentioned before that I think this is a great feature and I
think it will be very useful to have, so I've signed up to review the
patch, and below is the results of my first pass from reading the code.
Apologies if some of the things seem like nitpicks, I've basically just
listed everything I've noticed during the read-through, no matter how small.

- int natts = rel->rd_rel->relnatts;
+ int nkeyatts = rel->rd_index->indnkeyatts;
+
+ Assert (rel->rd_index != NULL);
+ Assert(rel->rd_index->indnatts != 0);
+ Assert(rel->rd_index->indnkeyatts != 0);
+
  SnapshotData SnapshotDirty;

There's a couple of problems here. According to [1] the C code must follow
the C89 standard, but this appears not to. You have some statements before
the final variable declaration, and also there's a problem as you're
Asserting that rel->rd_index != NULL after already trying to dereference it
in the assignment to nkeyatts, which makes the Assert() useless.

+   An access method that supports this feature sets
pg_am.amcanincluding true.

I don't think this belongs under the "Index Uniqueness Checks" title. I
think the "Columns included with clause INCLUDING  aren't used to enforce
uniqueness." that you've added before it is a good idea, but perhaps the
details of amcanincluding are best explained elsewhere.

-   indexed columns are equal in multiple rows.
+   indexed columns are equal in multiple rows. Columns included with clause
+   INCLUDING  aren't used to enforce constraints (UNIQUE, PRIMARY KEY,
etc).

A markup tag is missing around "INCLUDING" here. Perhaps this part needs more
explanation in a new paragraph. Likely it's good idea to also inform the
reader that the columns which are part of the INCLUDING clause exist only
to allow the query planner to skip having to perform a lookup to the heap
when all of the columns required for the relation are present in the
indexed columns, or in the INCLUDING columns. I think you should explain
that the index can also only be used as pre-sorted input for columns which
are in the "indexed columns" part of the index, and the INCLUDING columns
are not searchable as index quals.
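For illustration, a sketch of how the feature reads in use (the INCLUDING
syntax is taken from the patch under review, so it may still change):

```sql
CREATE TABLE orders (id int, customer int, total numeric);

-- "id" is the only key column enforcing uniqueness; "total" rides along
-- in the leaf pages only, to widen index-only scan coverage.
CREATE UNIQUE INDEX orders_id_idx ON orders (id) INCLUDING (total);

-- Every referenced column is in the index, so an index-only scan can
-- satisfy this without touching the heap:
SELECT id, total FROM orders WHERE id = 42;

-- "total" is not searchable as an index qual and provides no sort order:
SELECT id FROM orders WHERE total > 100;   -- cannot use the index for this
```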

--- a/src/backend/commands/indexcmds.c
+++ b/src/backend/commands/indexcmds.c
@@ -137,7 +137,6 @@ CheckIndexCompatible(Oid oldId,
  Relation irel;
  int i;
  Datum d;
-
  /* Caller should already have the relation locked in some way. */

You've accidentally removed an empty line here.

+ /*
+ * All information about key and included cols is in numberOfKeyAttributes
number.
+ * So we can concat all index params into one list.
+ */
+ stmt->indexParams = list_concat(stmt->indexParams,
stmt->indexIncludingParams);

I think this should be explained with a better comment, perhaps:

/*
 * We append any INCLUDING columns onto the indexParams list so that
 * we have one list with all columns. Later we can determine which of these
 * are indexed, and which are just part of the INCLUDING list by checking
 * the list
 * position. A list item in a position less than ii_NumIndexKeyAttrs is
 * part of the indexed columns, and anything equal to and over is part
 * of the INCLUDING columns.
 */

+ stack = _bt_search(rel, IndexRelationGetNumberOfKeyAttributes(rel),
itup_scankey,

This line is longer than 80 chars.

+ /* Truncate nonkey attributes when inserting on nonleaf pages */
+ if (wstate->index->rd_index->indnatts !=
wstate->index->rd_index->indnkeyatts)
+ {
+ BTPageOpaque pageop = (BTPageOpaque) PageGetSpecialPointer(npage);
+
+ if (!P_ISLEAF(pageop))
+ {
+ itup = index_reform_tuple(wstate->index, itup,
wstate->index->rd_index->indnatts, wstate->index->rd_index->indnkeyatts);
+ itupsz = IndexTupleDSize(*itup);
+ itupsz = MAXALIGN(itupsz);
+ }
+ }

A few of the lines here are over 80 chars.


+This clause specifies additional columns to be appended to the set
of index columns.
+Included columns don't support any constraints (UNIQUE,
PRMARY KEY, EXCLUSION CONSTRAINT).
+These columns can improve the performance of some queries  through
using advantages of index-only scan
+(Or so called covering indexes. Covering
index is the index that
+covers all columns required in the query and prevents a table
access).
+Besides that, included attributes are not stored in index inner
pages.
+It allows to decrease index size and furthermore it provides a way
to extend included
+columns to store atttributes without suitable opclass (not
implemented yet).
+This clause could be applied to both unique and nonunique indexes.
+It's possible to have non-unique covering index, which behaves as
a regular
+multi-column index with a bit smaller index-size.
+Currently, only the 

Re: [HACKERS] POC: Cache data in GetSnapshotData()

2016-01-04 Thread Andres Freund
On 2015-12-19 22:47:30 -0800, Mithun Cy wrote:
> After some analysis I saw writing to shared memory to store shared snapshot
> is not protected under exclusive write lock, this leads to memory
> corruptions.
> I think until this is fixed measuring the performance will not be much
> useful.

I think at the very least the cache should be protected by a separate
lock, and that lock should be acquired with TryLock. I.e. the cache is
updated opportunistically. I'd go for an lwlock in the first iteration.

If that works nicely we can try to keep several 'snapshot slots' around,
and only lock one of them exclusively. With some care users of cached
snapshots can copy the snapshot, while another slot is updated in
parallel. But that's definitely not step 1.

Greetings,

Andres Freund


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Additional role attributes && superuser review

2016-01-04 Thread Robert Haas
On Mon, Jan 4, 2016 at 4:56 PM, Stephen Frost  wrote:
>> First, it's not really going to matter to users very much whether the
>> command to enable one of these features is a single GRANT command or a
>> short sequence of GRANT commands executed one after another.  So even
>> if we don't have roles that bundle together multiple permissions, you
>> will still be able to do this; you just might need more than one
>> command.  Probably, I'm guessing, not very many commands, but more
>> than one.
>
> I disagree.  I find that it does make a difference to users to be using
> a well documented and clear approach over one which involves fiddling
> with the individual pieces to get everything to work, and if a new
> function comes along that is useful for backup users, that would have to
> also be granted, even if it would clearly be useful to a backup role.

How is that a fair way to characterize the discussion here?  Just
because you have to execute several SQL commands instead of one
doesn't turn a "well-documented and clear approach" into something
that involves "fiddling with individual pieces".  Cutting and pasting
a sequence of 3 or 4 SQL commands into a psql window is not a lot
harder than copying and pasting a single one, and does not turn a good
approach into a shambles.

>> Second, I think it's really unlikely that you're going to maintain
>> perfect alignment between your predefined roles and the permissions
>> that third-party tools need over the course of multiple releases.  I
>> think the chances of that working out are just about zero.  Sure, you
>> can make the initial set of permissions granted to pg_backup match
>> exactly what pgbackrest needs, but it's a good bet that one of the six
>> or eight widely-used backup tools uses something that's not included
>> in that set.
>
> There may be something I've missed with the proposed pg_backup role, but
> I don't believe you're correct that there isn't a set of privileges
> which all of those backup tools need and which could be provided through
> the pg_backup role.

Well, there's certainly some set of privileges that will make them all
work.  But it might include more than some of them want.  If you've done
any analysis of this, I have not seen it posted to this thread.

>> And even if not, it doesn't require very much
>> imagination to suppose that some tool, maybe pgbackrest or maybe
>> something else, that comes along over the next few releases will
>> require some extra permission.  When that happens, will we add
>> that permission to the pg_backup role, or not?
>
> As I pointed out previously, we've already been doing this with the
> replication role attribute and I don't recall any complaining about it.

1. This doesn't really seem like the same thing.  You're introducing
something quite new here: these are not role attributes that apply
only to the role itself, but inheritable role attributes.

2. I believe that the discussion about what the replication role
should and should not allow involved considerably more people than
have discussed any of the specific roles you propose to add here.

3. It was clear from the outset that the replication role's basic
purpose was to be sufficient privilege for a streaming standby and no
more.  The remit of these roles is a lot less clear, at least to me.

> I wasn't suggesting that we give everyone the same privileges to some
> default 'pgAdmin' role but rather that providing an abstraction of the
> set of privileges possible against the catalog objects, into sets that
> make sense together, is a useful simplification for users and that it'd
> be a better approach than asking users to figure out what sets make
> sense on their own.

I have no objection to that *in theory*.  What's not clear to me is
that the way that you have broken it up actually meets the bona fide
needs of actual tools in a useful way.

> Adding pg_dump dump and restore support for catalog ACLs implies that
> we're supporting ACLs on all catalog objects- including tables.

Not to me it doesn't.  I think we could support it just for functions,
and have it continue to be as weird as it is currently for other types
of objects until somebody gets around to straightening that out.  If
we want to support it for more object types, great, but I don't think
that's a hard requirement.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Additional role attributes && superuser review

2016-01-04 Thread Stephen Frost
* Robert Haas (robertmh...@gmail.com) wrote:
> On Mon, Jan 4, 2016 at 12:55 PM, Stephen Frost  wrote:
> > I'd like to be able to include, in both of those, a simple set of
> > instructions for granting the necessary rights to the user who is
> > running those processes.  A set of rights which an administrator can go
> > look up and easily read and understand the result of those grants.  For
> > example:
> >
> > check_postgres:
> >
> >   Most check_postgres sub-commands can be run without superuser
> >   privileges by granting the pg_monitor role to the monitoring user:
> >
> >   GRANT pg_monitor TO monitor;
> >
> >   For information regarding the pg_monitor role, see:
> >   http://www.postgresql.org/docs/current/static/roles/database-roles.html
> >
> > pgbackrest:
> >
> >   To run pgbackrest as a non-superuser and not the 'postgres' system
> >   user, grant the pg_backup role to the backrest user and ensure the
> >   backrest system user has read access to the database files (eg: by
> >   having the system user be a member of the 'postgres' group):
> >
> >   GRANT pg_backup to backrest;
> >
> >   For information regarding the pg_backup role, see:
> >   http://www.postgresql.org/docs/current/static/roles/database-roles.html
> 
> So I have two comments on this.
> 
> First, it's not really going to matter to users very much whether the
> command to enable one of these features is a single GRANT command or a
> short sequence of GRANT commands executed one after another.  So even
> if we don't have roles that bundle together multiple permissions, you
> will still be able to do this; you just might need more than one
> command.  Probably, I'm guessing, not very many commands, but more
> than one.

I disagree.  I find that it does make a difference to users to be using
a well documented and clear approach over one which involves fiddling
with the individual pieces to get everything to work, and if a new
function comes along that is useful for backup users, that would have to
also be granted, even if it would clearly be useful to a backup role.

> Second, I think it's really unlikely that you're going to maintain
> perfect alignment between your predefined roles and the permissions
> that third-party tools need over the course of multiple releases.  I
> think the chances of that working out are just about zero.  Sure, you
> can make the initial set of permissions granted to pg_backup match
> exactly what pgbackrest needs, but it's a good bet that one of the six
> or eight widely-used backup tools uses something that's not included
> in that set.

There may be something I've missed with the proposed pg_backup role, but
I don't believe you're correct that there isn't a set of privileges
which all of those backup tools need and which could be provided through
the pg_backup role.

> And even if not, it doesn't require very much
> imagination to suppose that some tool, maybe pgbackrest or maybe
> something else, that comes along over the next few releases will
> require some extra permission.  When that happens, will we add
> that permission to the pg_backup role, or not?  If we do, then we'll
> be giving excess permissions to all the other backup tools that don't
> need that new right, and maybe surprising users who upgrade without
> realizing that some of their roles now have new rights.  If we don't,
> then that tool won't be able to work without some additional
> twiddling.  I just find it incredibly unlikely that every monitoring
> tool, or every backup tool, or every admin tool will require exactly
> the same set of permissions.

As I pointed out previously, we've already been doing this with the
replication role attribute and I don't recall any complaining about it.

This discussion started out with the idea of adding more role
attributes to address this need to break out superuser rights, as we
have done in the past, and then moved to discussing default roles
instead because it's a better solution for abstracting permissions as
roles are more easily grantable, delegation of role granting can be
done, and role membership works with inheritance.

The arguments you raise here would apply nearly the same to the
replication role attribute, but in practice, we don't seem to have any
question regarding how that's handled, nor have we gotten complaints
about it.  I expect the same would be true with default roles and don't
see any particular reason to fear otherwise.

> > I can see similar bits of documentation being included in pgAdmin or
> > other tools.
> 
> ...and pgAdmin is a particularly good example.  The permissions that
> pgAdmin requires depend on what you want to do with it, and it does a
> lot of things, and it might do more or fewer things in the future.
> You can't even fairly assume that everyone wants to give the same
> permissions to pgAdmin, let alone that some competing tool will
> require the same set.

I wasn't suggesting that we give everyone the same privileges to 

Re: [HACKERS] Additional role attributes && superuser review

2016-01-04 Thread Robert Haas
On Mon, Jan 4, 2016 at 5:22 PM, Stephen Frost  wrote:
>> So, is this another case where the support is all in off-list fora and
>> thus invisible, or can you point to specific on-list discussions where
>> it was supported, and to the opinions offered in support?  I don't
>> really remember many opinions that were any more positive than "I
>> wouldn't be strongly opposed to this" or "If we're going to do this
>> then we ought to do it in X way".  I'm happy to be corrected if I'm
>> misrepresenting the record, but I'd characterize the overall reaction
>> to this proposal as tepid: nobody hated it, but nobody really loved it
>> either, and a bunch of mild concerns were offered.
>
> I agree that this has largely been the on-list reaction.  To be fair,
> it's been largely the off-list reaction also, which I've expressly
> tried to seek out, as mentioned above.  I'm not asking anyone to love
> it, I'm not entirely convinced it's lovable myself, but I do feel it's
> useful and worth making an effort for.

I think the question of whether the specific proposals on the table
are in fact useful is one that deserves more study.  I am not
convinced of that.  I believe something like this could be useful, but
I don't see a lot of evidence that the particular roles you're arguing
for actually are.

> I'd love to have folks from other companies involved in these
> discussions.  I'll even reach out explicitly to seek their comment, as
> I've done with other hackers at conferences, and try to get them to
> voice their opinions here.

Great, thanks.

>> What really bothers me about this thread is that these predefined
>> roles are intended to be useful for third-party tools, but the people
>> who maintain those third-party tools have said basically nothing.
>
> For my 2c, I believe that to be, by-and-large, because they don't want
> to get their hopes up until they see something actually get committed.
> Following long and deep threads such as these is quite a commitment.

Yep.

>> I
>> don't recall, for example, Dave Page weighing in on what pgAdmin
>> needs, or anybody commenting on to what degree any of these proposals
>> would meet the needs of Slony or pgBouncer or pgPool or any backup
>> tool (other than perhaps pgbackrest, which I assume your proposals
>> cater to) or any monitoring tool.  Like, we've heard zip.  Either
>> those people don't know this thread exists, or they can't understand
>> it, or they think it's so boring that they can't be bothered to write
>> in and say whether this is useful or not.  I'd have a lot more
>> confidence that we are making a good decision if some of those people
>> would show up and say "I have reviewed this proposal and it looks {
>> great | terrible | mediocre } for $TOOL because $REASON".
>
> We *have* heard complaints from people, multiple times on various lists,
> that they'd like to set up check_postgres, Nagios, $MONITORINGTOOL, with
> a role that *isn't* a superuser.

True.  But we should verify that this proposal actually meets those
needs, not just assume it does.

> I'll ask Greg S-M if he would have
> time to weigh in on this though, check_postgres was specifically one of
> the tools which I was looking at when considering the pg_monitor role.

OK, that sounds like a good idea.

> I'm not sure about the references you use above to Slony or pgBouncer or
> pgPool, as those aren't backup tools to my mind.  I would expect barman
> and other backup tools to also use pg_start/stop_backup and
> pg_switch_xlog.  I'm not sure that there's a way to cater to one backup
> role when it comes to how filesystem-level backups are handled in PG,
> but perhaps I've missed something there that barman uses and which isn't
> included currently.

Oh, sure: they are not backup tools specifically.  But anything that
might need elevated privileges deserves consideration here: what sort
of subdivision of the superuser role would make that need go away?

> Of course, my reviewing barman or other tools wouldn't have the same
> support as Simon weighing in, so I'll try and pursue that avenue as
> well.

Cool.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Beginner hacker item: Fix to_reg*() input type

2016-01-04 Thread Petr Korobeinikov
> - Modify the functions in regproc.c. Take a look at how other text input
> functions work to see what needs to happen here (you'll want to use
> text_to_cstring() as part of that.)
>
> - Modify the appropriate entries in src/include/catalog/pg_proc.h

Let me try.
`make check` says "All 160 tests passed.".
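For context, the point of switching the argument type from cstring to text
is that the functions then compose with ordinary text values and
expressions without an explicit cast. A hypothetical illustration, assuming
the patch is applied:

```sql
-- A plain literal works either way:
SELECT to_regclass('pg_class');

-- With a text argument, computed names and catalog columns also work
-- directly; with cstring these needed an explicit ::cstring cast:
SELECT to_regclass('pg_' || 'class');
SELECT to_regtype(typname) FROM pg_type LIMIT 1;
```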
diff --git a/src/backend/utils/adt/regproc.c b/src/backend/utils/adt/regproc.c
index 0f17eb8..eed1279 100644
--- a/src/backend/utils/adt/regproc.c
+++ b/src/backend/utils/adt/regproc.c
@@ -161,7 +161,7 @@ regprocin(PG_FUNCTION_ARGS)
 Datum
 to_regproc(PG_FUNCTION_ARGS)
 {
-   char   *pro_name = PG_GETARG_CSTRING(0);
+   const char *pro_name = text_to_cstring(PG_GETARG_TEXT_P(0));
List   *names;
FuncCandidateList clist;
 
@@ -331,7 +331,7 @@ regprocedurein(PG_FUNCTION_ARGS)
 Datum
 to_regprocedure(PG_FUNCTION_ARGS)
 {
-   char   *pro_name = PG_GETARG_CSTRING(0);
+   const char *pro_name = text_to_cstring(PG_GETARG_TEXT_P(0));
List   *names;
int nargs;
Oid argtypes[FUNC_MAX_ARGS];
@@ -620,7 +620,7 @@ regoperin(PG_FUNCTION_ARGS)
 Datum
 to_regoper(PG_FUNCTION_ARGS)
 {
-   char   *opr_name = PG_GETARG_CSTRING(0);
+   const char *opr_name = text_to_cstring(PG_GETARG_TEXT_P(0));
List   *names;
FuncCandidateList clist;
 
@@ -797,7 +797,7 @@ regoperatorin(PG_FUNCTION_ARGS)
 Datum
 to_regoperator(PG_FUNCTION_ARGS)
 {
-   char   *opr_name_or_oid = PG_GETARG_CSTRING(0);
+   const char *opr_name_or_oid = text_to_cstring(PG_GETARG_TEXT_P(0));
Oid result;
List   *names;
int nargs;
@@ -1061,7 +1061,7 @@ regclassin(PG_FUNCTION_ARGS)
 Datum
 to_regclass(PG_FUNCTION_ARGS)
 {
-   char   *class_name = PG_GETARG_CSTRING(0);
+   const char *class_name = text_to_cstring(PG_GETARG_TEXT_P(0));
Oid result;
List   *names;
 
@@ -1249,7 +1249,7 @@ regtypein(PG_FUNCTION_ARGS)
 Datum
 to_regtype(PG_FUNCTION_ARGS)
 {
-   char   *typ_name = PG_GETARG_CSTRING(0);
+   const char *typ_name = text_to_cstring(PG_GETARG_TEXT_P(0));
Oid result;
int32   typmod;
 
@@ -1606,7 +1606,7 @@ regrolein(PG_FUNCTION_ARGS)
 Datum
 to_regrole(PG_FUNCTION_ARGS)
 {
-   char   *role_name = PG_GETARG_CSTRING(0);
+   const char *role_name = text_to_cstring(PG_GETARG_TEXT_P(0));
Oid result;
List   *names;
 
@@ -1727,7 +1727,7 @@ regnamespacein(PG_FUNCTION_ARGS)
 Datum
 to_regnamespace(PG_FUNCTION_ARGS)
 {
-   char   *nsp_name = PG_GETARG_CSTRING(0);
+   const char *nsp_name = text_to_cstring(PG_GETARG_TEXT_P(0));
Oid result;
List   *names;
 
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index e5d6c77..22d9386 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -177,9 +177,9 @@ DATA(insert OID =  44 (  regprocin PGNSP PGUID 
12 1 0 0 0 f f f f t f s s 1
 DESCR("I/O");
 DATA(insert OID =  45 (  regprocout   PGNSP PGUID 12 1 0 0 0 f f f 
f t f s s 1 0 2275 "24" _null_ _null_ _null_ _null_ _null_ regprocout _null_ 
_null_ _null_ ));
 DESCR("I/O");
-DATA(insert OID = 3494 (  to_regproc   PGNSP PGUID 12 1 0 0 0 f f f f 
t f s s 1 0 24 "2275" _null_ _null_ _null_ _null_ _null_ to_regproc _null_ 
_null_ _null_ ));
+DATA(insert OID = 3494 (  to_regproc   PGNSP PGUID 12 1 0 0 0 f f f f 
t f s s 1 0 24 "25" _null_ _null_ _null_ _null_ _null_ to_regproc _null_ _null_ 
_null_ ));
 DESCR("convert proname to regproc");
-DATA(insert OID = 3479 (  to_regprocedure  PGNSP PGUID 12 1 0 0 0 f f f f 
t f s s 1 0 2202 "2275" _null_ _null_ _null_ _null_ _null_ to_regprocedure 
_null_ _null_ _null_ ));
+DATA(insert OID = 3479 (  to_regprocedure  PGNSP PGUID 12 1 0 0 0 f f f f 
t f s s 1 0 2202 "25" _null_ _null_ _null_ _null_ _null_ to_regprocedure _null_ 
_null_ _null_ ));
 DESCR("convert proname to regprocedure");
 DATA(insert OID =  46 (  textin   PGNSP PGUID 12 1 0 0 
0 f f f f t f i s 1 0 25 "2275" _null_ _null_ _null_ _null_ _null_ textin 
_null_ _null_ _null_ ));
 DESCR("I/O");
@@ -3483,9 +3483,9 @@ DATA(insert OID = 2214 (  regoperin   
PGNSP PGUID 12 1 0 0 0 f f f f t f s s 1 0
 DESCR("I/O");
 DATA(insert OID = 2215 (  regoperout   PGNSP PGUID 12 1 0 0 0 f f f f 
t f s s 1 0 2275 "2203" _null_ _null_ _null_ _null_ _null_ regoperout _null_ 
_null_ _null_ ));
 DESCR("I/O");
-DATA(insert OID = 3492 (  to_regoper   PGNSP PGUID 12 1 0 0 0 f f f f 
t f s s 1 0 2203 "2275" _null_ _null_ _null_ _null_ _null_ to_regoper _null_ 
_null_ _null_ ));
+DATA(insert OID = 3492 (  to_regoper   PGNSP PGUID 12 1 0 0 0 f f f f 
t f s 

Re: [HACKERS] parallel joins, and better parallel explain

2016-01-04 Thread Dilip Kumar
On Tue, Jan 5, 2016 at 1:52 AM, Robert Haas  wrote:

> On Mon, Jan 4, 2016 at 4:50 AM, Dilip Kumar  wrote:
> > I tried to create an inner table such that the inner table's data doesn't
> > fit in RAM (I created a VM with 1GB RAM).
> > The purpose of this is to make the disk scan dominant,
> > and since the parallel join repeats the disk scan and the hash table
> > building of the inner table, there will be a lot of parallel I/O and it
> > has to pay a penalty.
> >
> > I think even though the inner table scanning and hash table building are
> > parallel, there will be a lot of parallel I/O, which will become the
> > bottleneck.
>
> Hmm.  Because only 1/1024th of the hash table fits in work_mem, the
> executor is going to have to write out all of the tuples that don't
> belong to the first batch to a temporary file and then read them back
> in.  So each backend is going to write essentially the entirety of t2
> out to disk and then read it all back in again. The non-parallel case
> will also write most of the table contents and then read them back in,
> but at least it will only be doing that once rather than 7 times, so
> it's not as bad.  Also, with fewer backends running, the non-parallel
> case will have a bit more memory free for caching purposes.
>
> > Do we need to consider the cost of parallel I/O also? I am not sure we
> > can really do that... ?
>
> It seems to me that the problem here is that you've set
> max_parallel_degree to an unrealistically high value.  The query
> planner is entitled to assume that the user has set
> max_parallel_degree to a value which is small enough that the workers
> won't be fighting too viciously with each other over resources.  It
> doesn't really matter whether those resources are CPU resources or I/O
> resources.  I'm wondering if your 1GB VM really even has as many as 7
> vCPUs, because that would seem to be something of an unusual
> configuration - and if it doesn't, then setting max_parallel_degree to
> a value that high is certainly user error. Even if it does, it's still
> not right to set the value as high as six unless the system also has
> enough I/O bandwidth to accommodate the amount of I/O that you expect
> your queries to generate, and here it seems like it probably doesn't.
>
> To put that another way, you can always make parallel query perform
> badly by telling it to use too many workers relative to the size of
> the machine you have. This is no different than getting bad query
> plans by configuring work_mem or effective_cache_size or any other
> query planner GUC to a value that doesn't reflect the actual execution
> environment.  I would only consider this to be a problem with the
> parallel join patch if the chosen plan is slower even on a machine
> that's big enough to justify setting max_parallel_degree=6 in the
> first place.
>
>
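Robert's point about repeated temp-file traffic can be made concrete with a little arithmetic. The sketch below is illustrative only: the table size and the 2x (write out, read back) factor are simplifying assumptions, not measurements from the test above.

```python
def hash_join_batch_io(inner_table_mb, batches, workers):
    """Rough temp-file traffic (MB) for a batched hash join.

    With more than one batch, tuples outside the current batch are
    written to temp files and later read back, so each participating
    process writes and then reads roughly the whole inner table.
    """
    if batches <= 1:
        return 0  # inner table fits in work_mem, no temp I/O
    per_process = 2 * inner_table_mb      # write out, then read back
    return per_process * max(workers, 1)  # every backend repeats it

# Non-parallel plan: the inner table is spilled once.
serial_io = hash_join_batch_io(inner_table_mb=2000, batches=1024, workers=1)

# Parallel plan with 7 participating backends: each backend builds its
# own copy of the hash table, so the temp traffic is multiplied.
parallel_io = hash_join_batch_io(inner_table_mb=2000, batches=1024, workers=7)

print(serial_io, parallel_io)  # 4000 28000 (MB)
```

Under these assumptions the parallel plan does seven times the temp-file I/O of the serial plan, which is exactly the penalty being described.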
Yes, that's a valid point... I had configured 6 processors for the virtual
machine, but that was with HT.
So this time I have configured 8 processors and measured performance again
with a lower parallel degree.

Even with a lower parallel degree there is some regression, but as you
mentioned there can be other limitations as well: for example, I am
configuring a 50GB disk and filling 20GB of it with data.

I think you are right; before coming to any conclusion, we need to test on
a really high-end machine where the machine itself doesn't impose any
resource constraints.

In 1GB RAM
8-processor VM (Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz) --> This machine
is an i7, so I doubt it's really using 8 cores, so I tested with a lower
parallel degree.
SSD: 50GB


postgres=# set max_parallel_degree=3;
postgres=# explain analyze SELECT count(*) FROM t1 JOIN t2 ON t1.c1 = t2.c1
AND t2.c2 + t1.c1 > 100;
                                                      QUERY PLAN
--------------------------------------------------------------------------------
 Aggregate  (cost=7920946.47..7920946.48 rows=1 width=0) (actual
time=162329.829..162329.829 rows=1 loops=1)
   ->  Gather  (cost=1527963.25..7880946.39 rows=1633 width=0) (actual
time=58233.106..159140.629 rows=4750 loops=1)
 Number of Workers: 3
 ->  Hash Join  (cost=1526963.25..6279943.09 rows=1633 width=0)
(actual time=58346.087..144309.987 rows=1188 loops=4)
   Hash Cond: (t1.c1 = t2.c1)
   Join Filter: ((t2.c2 + t1.c1) > 100)
   Rows Removed by Join Filter: 12
   ->  Parallel Seq Scan on t1  (cost=0.00..2064959.01
rows=32259701 width=4) (actual time=98.514..27003.566 rows=2500 loops=4)
   ->  Hash  (cost=739461.00..739461.00 rows=48000100 width=8)
(actual time=58012.228..58012.228 rows=4800 loops=4)
 Buckets: 131072  Batches: 1024  Memory Usage: 2856kB
                 ->  Seq Scan on t2  (cost=0.00..739461.00
rows=48000100 width=8) (actual time=3.524..9634.181 rows=4800 loops=4)
 

Re: [HACKERS] Accessing non catalog table in backend

2016-01-04 Thread Amit Langote
On 2016/01/05 3:53, Atri Sharma wrote:
> I was wary to use SPI inside the executor for node evaluation functions.
> Does it seem safe?

What are "node evaluation functions"? Are they "Plan" nodes or "Expr" nodes
that you are talking about? I guess you'd know to use ExecProcNode() or
ExecEvalExpr() for them, respectively.

Thanks,
Amit





-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Making tab-complete.c easier to maintain

2016-01-04 Thread Tom Lane
Michael Paquier  writes:
> So, here are the commands that still remain with TailMatches to cover
> this case, per gram.y:
> - CREATE TABLE
> - CREATE INDEX
> - CREATE VIEW
> - GRANT
> - CREATE TRIGGER
> - CREATE SEQUENCE
> New patches are attached.

I've reviewed and committed the first of these patches.  I found a few
mistakes, mostly where you'd converted TailMatches to Matches without
adding the leading words that the original author had left out of the
pattern.  But man, this is mind-numbingly tedious work :-(.  I probably
made a few more mistakes myself.  Still, the code is enormously more
readable now than when we started, and almost certainly more reliable.

I'm too burned out to look at the second patch tonight, but hopefully
will get to it tomorrow.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] remove wal_level archive

2016-01-04 Thread Michael Paquier
On Tue, Jan 5, 2016 at 1:04 AM, Alvaro Herrera  wrote:
> Peter Eisentraut wrote:
>> So we've had several rounds of discussions about simplifying replication
>> configuration in general and the wal_level setting in particular. [0][1]
>>  Let's get something going.
>
> I looked at this patch, which I think has got enough consensus that you
> should just push forward with the proposed design -- in particular, just
> remove one of archive or hot_standby values, not keep it as a synonym of
> the other.  If we're counting votes, I prefer keeping hot_standby over
> archive.

FWIW I have advocated for the simple removal of 'archive' :)

> The patch is nicely compact and applies, with only some fuzz.
>
> I agree with changing all parts that say "XYZ or higher" to enumerate
> the possible values.

Yep.

> It may be a good idea to have a look at Michael Paquier's recovery test
> framework ( also in this commitfest: https://commitfest.postgresql.org/8/438/ 
> )
> and see how that is affected by this patch.  Maybe the tests can find a
> problem in this patch, and so perhaps you'd like to commit the tests
> first, then this change on top.

Those would need a rebase if this patch stays as is. I'll take actions
as needed.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] comment typo in RewindTest.pm

2016-01-04 Thread Masahiko Sawada
Hi all,

Attached patch fixes the comment typo in RewindTest.pm
Please find it.

Regards,

--
Masahiko Sawada
diff --git a/src/bin/pg_rewind/RewindTest.pm b/src/bin/pg_rewind/RewindTest.pm
index c1c7d1f..41d294e 100644
--- a/src/bin/pg_rewind/RewindTest.pm
+++ b/src/bin/pg_rewind/RewindTest.pm
@@ -183,7 +183,7 @@ sub promote_standby
 	  or die "Timed out while waiting for promotion of standby";
 
 	# Force a checkpoint after the promotion. pg_rewind looks at the control
-	# file todetermine what timeline the server is on, and that isn't updated
+	# file to determine what timeline the server is on, and that isn't updated
 	# immediately at promotion, but only at the next checkpoint. When running
 	# pg_rewind in remote mode, it's possible that we complete the test steps
 	# after promotion so quickly that when pg_rewind runs, the standby has not

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Keyword classifications

2016-01-04 Thread Alvaro Herrera
Tom Lane wrote:

> But some experimentation suggests that we could
> fix that by subdividing col_name_keyword into two categories, one being
> the keywords that can also be type names (BIGINT, BIT, etc) and one
> being those that can't (BETWEEN, COALESCE, etc).  Everywhere
> col_name_keyword is used now, just mention both sub-categories.
> And in def_arg, add a production for only the second sub-category.
> 
> I think such a categorization would actually be cleaner than what we have
> now; if you read the comments for col_name_keyword, they're pretty squishy
> about whether these keywords can be type or function names or not, and
> this subdivision would make that a little clearer.  Interestingly, it
> also seems that the grammar tables become slightly smaller, suggesting
> that Bison also finds this more regular.

Sounds reasonable.  We already have four categories, so one more
shouldn't be a problem, and if Bison likes it then all the better.

So you're talking about this:

 * Many of these keywords will in fact be recognized as type or function
 * names too; but they have special productions for the purpose, and so
 * can't be treated as "generic" type or function names.

and you're saying that this comment will be moved to the first of these
new categories and s/Many of these/These/.  In other words, the "special
productions" will remain, which this is the stuff under SimpleTypename,
minus GenericType.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] \x auto and EXPLAIN

2016-01-04 Thread Andreas Karlsson

On 01/03/2016 06:43 PM, Tom Lane wrote:

I see two ways to fix this.



1) Never use expanded display for the case where there is only one
column. There seems to me like there is little value in using expanded
display for when you only have one column, but I may be missing some use
case here.



2) Explicitly detect (for example based on the headers) that the result
is a query plan and if so disable expanded display.


The second of these seems pretty bletcherous --- for one thing, it might
fall foul of localization attempts.  However, I could see the argument
for not using expanded mode for any single-column output.


I too prefer the first option, and hope I have not missed a case for 
when having "\x auto" give expanded display with a single column is a 
clear gain. For the cases I have seen myself it has always been a 
usability loss (minor or great).
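The decision rule option 1 proposes is tiny. A sketch of it follows; the function name and signature are illustrative, not psql's actual code:

```python
def use_expanded(ncols: int, fits_screen: bool) -> bool:
    """Proposed "\\x auto" rule: never switch to expanded display for a
    single-column result (e.g. EXPLAIN output); expand only multi-column
    results that are too wide to fit on screen."""
    return ncols > 1 and not fits_screen

# EXPLAIN output: one wide column -> stay in normal display
assert use_expanded(1, False) is False
# Wide multi-column result -> expanded display
assert use_expanded(12, False) is True
# Narrow multi-column result that fits -> normal display
assert use_expanded(12, True) is False
```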


Andreas



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bug in MergeAttributesIntoExisting() function.

2016-01-04 Thread Rushabh Lathia
On Mon, Jan 4, 2016 at 4:41 PM, amul sul  wrote:

> Hi,
>
> In inheritance, a child column's pg_attribute.attislocal flag does not get
> updated if the column is merged in using ALTER TABLE ... INHERIT ...
>
> Due to this, if we drop column(s) from the parent table, they do not get
> dropped from the child.
> Attached herewith is a quick patch fixing this issue.
>
>
> --Demonstration:
> --
> CREATE TABLE p1 (a int , b int, c int, d int);
>
> CREATE TABLE c1 () inherits (p1);CREATE TABLE c2 (a int , b int, c int, d
> int);
>
>
> --Drop parent's column
> ALTER TABLE p1 DROP COLUMN b;
> ALTER TABLE p1 DROP COLUMN c;
> ALTER TABLE p1 DROP COLUMN d;
>
>
> postgres=# \d p1
>   Table "public.p1"
>  Column |  Type   | Modifiers
> +-+---
>  a  | integer |
> Number of child tables: 2 (Use \d+ to list them.)
>
> postgres=# \d c1
>   Table "public.c1"
>  Column |  Type   | Modifiers
> +-+---
>  a  | integer |
> Inherits: p1
>
> postgres=# \d c2
>   Table "public.c2"
>  Column |  Type   | Modifiers
> +-+---
>  a  | integer |
>  b  | integer |
>  c  | integer |
>  d  | integer |
> Inherits: p1
>
>
> --
>

Seems like you missed the following command in the demonstration test:

ALTER TABLE c2 INHERIT p1;


> You can see that the columns are not dropped from child table c2, which we
> inherited using the ALTER command.
>


I took a quick look at this and did some testing. The patch looks good to me;
ALTER TABLE ... INHERIT was missing the attislocal = false check.
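A simplified model of the pg_attribute bookkeeping shows why the missing reset matters. This is a hypothetical Python model of the attislocal/attinhcount rules as discussed in this thread, not PostgreSQL code:

```python
class Column:
    def __init__(self):
        self.attislocal = True   # column has a local definition
        self.attinhcount = 0     # number of direct parents supplying it

def merge_into_existing(col, fixed):
    """ALTER TABLE child INHERIT parent, for a column the child already
    has. The bug: attinhcount was bumped but attislocal stayed true, so
    the column still counted as locally defined."""
    col.attinhcount += 1
    if fixed:
        col.attislocal = False   # the fix under discussion

def drop_inherited(col):
    """Parent drops the column; recursion reaches the child's copy,
    which disappears only if nothing local remains."""
    col.attinhcount -= 1
    return col.attinhcount == 0 and not col.attislocal

buggy = Column()
merge_into_existing(buggy, fixed=False)
ok = Column()
merge_into_existing(ok, fixed=True)

print(drop_inherited(buggy), drop_inherited(ok))  # False True
```

With the flag left true, the child copy survives the parent's DROP COLUMN, which is exactly the behavior shown in the \d output above.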


> Regards,
> Amul Sul
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>
>


-- 
Rushabh Lathia
www.EnterpriseDB.com


Re: [HACKERS] dynloader.h missing in prebuilt package for Windows?

2016-01-04 Thread Tom Lane
Michael Paquier  writes:
> The patch would put the buildfarm in red as it is incomplete anyway,
> with MSVC what is used instead of dynloader.h is
> port/dynloader/win32.h. Instead of this patch I would be inclined to
> remove the #define stuff with dynloader.h that uses WIN32_ONLY_COMPILER
> (see for example dfmgr.c) and just copy the header in include/. This
> way we use the same header for all platforms.

Patch please?

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] dynloader.h missing in prebuilt package for Windows?

2016-01-04 Thread Michael Paquier
On Tue, Jan 5, 2016 at 2:27 PM, Tom Lane  wrote:
> Michael Paquier  writes:
>> The patch would put the buildfarm in red as it is incomplete anyway,
>> with MSVC what is used instead of dynloader.h is
>> port/dynloader/win32.h. Instead of this patch I would be inclined to
>> remove the #define stuff with dynloader.h that uses WIN32_ONLY_COMPILER
>> (see for example dfmgr.c) and just copy the header in include/. This
>> way we use the same header for all platforms.
>
> Patch please?

Sure, here you go. Bruce's patch simply forgot to copy the header file
via Solution.pm, so installation just failed. The change to dfmgr.c is
not strictly mandatory, but I think that as long as dynloader.h is
copied into include/ we had better change that as well; it makes the
code cleaner.
-- 
Michael
diff --git a/src/backend/utils/fmgr/dfmgr.c b/src/backend/utils/fmgr/dfmgr.c
index 7edfe97..f41035d 100644
--- a/src/backend/utils/fmgr/dfmgr.c
+++ b/src/backend/utils/fmgr/dfmgr.c
@@ -16,11 +16,7 @@
 
 #include 
 
-#ifndef WIN32_ONLY_COMPILER
 #include "dynloader.h"
-#else
-#include "port/dynloader/win32.h"
-#endif
 #include "lib/stringinfo.h"
 #include "miscadmin.h"
 #include "storage/shmem.h"
diff --git a/src/tools/msvc/Install.pm b/src/tools/msvc/Install.pm
index 40e06f6..ea9b857 100644
--- a/src/tools/msvc/Install.pm
+++ b/src/tools/msvc/Install.pm
@@ -582,7 +582,7 @@ sub CopyIncludeFiles
 		'Public headers', $target . '/include/',
 		'src/include/',   'postgres_ext.h',
 		'pg_config.h','pg_config_ext.h',
-		'pg_config_os.h', 'pg_config_manual.h');
+		'pg_config_os.h', 'dynloader.h', 'pg_config_manual.h');
 	lcopy('src/include/libpq/libpq-fs.h', $target . '/include/libpq/')
 	  || croak 'Could not copy libpq-fs.h';
 
@@ -605,7 +605,8 @@ sub CopyIncludeFiles
 	CopyFiles(
 		'Server headers',
 		$target . '/include/server/',
-		'src/include/', 'pg_config.h', 'pg_config_ext.h', 'pg_config_os.h');
+		'src/include/', 'pg_config.h', 'pg_config_ext.h', 'pg_config_os.h',
+		'dynloader.h');
 	CopyFiles(
 		'Grammar header',
 		$target . '/include/server/parser/',
diff --git a/src/tools/msvc/Solution.pm b/src/tools/msvc/Solution.pm
index 1564b72..ac116b7 100644
--- a/src/tools/msvc/Solution.pm
+++ b/src/tools/msvc/Solution.pm
@@ -301,6 +301,14 @@ s{PG_VERSION_STR "[^"]+"}{__STRINGIFY(x) #x\n#define __STRINGIFY2(z) __STRINGIFY
 			'src/include/storage/lwlocknames.h');
 	}
 
+	if (IsNewer(
+			'src/include/dynloader.h',
+			'src/backend/port/dynloader/win32.h'))
+	{
+		copyFile('src/backend/port/dynloader/win32.h',
+			'src/include/dynloader.h');
+	}
+
 	if (IsNewer('src/include/utils/probes.h', 'src/backend/utils/probes.d'))
 	{
 		print "Generating probes.h...\n";
diff --git a/src/tools/msvc/clean.bat b/src/tools/msvc/clean.bat
index e3da6aa..feb0fe5 100755
--- a/src/tools/msvc/clean.bat
+++ b/src/tools/msvc/clean.bat
@@ -38,6 +38,7 @@ REM Delete files created with GenerateFiles() in Solution.pm
 if exist src\include\pg_config.h del /q src\include\pg_config.h
 if exist src\include\pg_config_ext.h del /q src\include\pg_config_ext.h
 if exist src\include\pg_config_os.h del /q src\include\pg_config_os.h
+if exist src\include\dynloader.h del /q src\include\dynloader.h
 if %DIST%==1 if exist src\backend\parser\gram.h del /q src\backend\parser\gram.h
 if exist src\include\utils\errcodes.h del /q src\include\utils\errcodes.h
 if exist src\include\utils\fmgroids.h del /q src\include\utils\fmgroids.h

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Accessing non catalog table in backend

2016-01-04 Thread Amit Langote
On 2016/01/05 14:30, Atri Sharma wrote:
> On Tue, Jan 5, 2016 at 9:54 AM, Amit Langote 
>> On 2016/01/05 3:53, Atri Sharma wrote:
>>> I was wary to use SPI inside the executor for node evaluation functions.
>>> Does it seem safe?
>>
>> What are "node evaluation functions"? Are they "Plan" nodes or "Expr" nodes
>> that you are talking about? I guess you'd know to use ExecProcNode() or
>> ExecEvalExpr() for them, respectively.
>>
> I fail to see the relevance of which node is getting evaluated (its a Plan
> node BTW) for this question. The concern I had was around using SPI inside
> executor and its fail safety.

Sorry, I may have misunderstood your question(s). Seeing your first
question in the thread, I see that you're looking to query non-system
tables within the executor. AFAIU, most of the processing within the
executor takes the form of some node in some execution pipeline of a plan
tree. Perhaps you're imagining some kind of node, subnode or some such. By
the way, some places like ATRewriteTable() and validateCheckConstraint()
scan user tables directly using low-level utilities within a dummy
executor context. I think Jim suggested something like that upthread.

Thanks,
Amit




-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Making tab-complete.c easier to maintain

2016-01-04 Thread Michael Paquier
On Tue, Jan 5, 2016 at 10:13 AM, Tom Lane  wrote:
> Michael Paquier  writes:
>> So, here are the commands that still remain with TailMatches to cover
>> this case, per gram.y:
>> - CREATE TABLE
>> - CREATE INDEX
>> - CREATE VIEW
>> - GRANT
>> - CREATE TRIGGER
>> - CREATE SEQUENCE
>> New patches are attached.
>
> I've reviewed and committed the first of these patches.  I found a few
> mistakes, mostly where you'd converted TailMatches to Matches without
> adding the leading words that the original author had left out of the
> pattern.  But man, this is mind-numbingly tedious work :-(.  I probably
> made a few more mistakes myself.  Still, the code is enormously more
> readable now than when we started, and almost certainly more reliable.

Thanks. My best advice is to avoid doing such work after 10PM; that's a
good way to end up with a headache. I found that out myself.

> I'm too burned out to look at the second patch tonight, but hopefully
> will get to it tomorrow.

I see that you are on fire these days, but even so, bodies need some rest.
This second patch still applies cleanly, but it is far less urgent; the
other psql-tab patches do not depend on it.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] dynloader.h missing in prebuilt package for Windows?

2016-01-04 Thread Chapman Flack
On 01/05/16 00:18, Michael Paquier wrote:

> with MSVC what is used instead of dynloader.h is
> port/dynloader/win32.h.

Seems like a good catch - AFAICS, what happens with port/dynloader
is that for 12 different OSes, there's an .h file there to
be copied _renamed to dynloader.h_ into the build tree, and a .c
file expecting similar treatment, and that the problem that kicked
off this whole thread was just that the windows build process (and
only the windows build process) was neglecting to do that.

And so I was pretty sure the fix was to make the windows build do
that, and then it would be doing the same thing as the other eleven,
but I just looked at Bruce's patch more closely and it does seem to
be doing something else instead.

> Instead of this patch I would be inclined to
> remove the #define stuff with dynloader.h that uses WIN32_ONLY_COMPILER
> (see for example dfmgr.c) and just copy the header in include/. This
> way we use the same header for all platforms.

But this part I'm not sure I follow (and maybe I don't need to follow
it, you guys know your code better than I do) ... in this whole thread
haven't we been talking about just making the windows build copy its
port/dynloader files the same way the other eleven platforms do, because
it wasn't, and that seemed to be an omission, and worth not continuing
to omit?

-Chap


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] dynloader.h missing in prebuilt package for Windows?

2016-01-04 Thread Michael Paquier
On Mon, Jan 4, 2016 at 9:37 PM, Chapman Flack  wrote:
> On 01/05/16 00:18, Michael Paquier wrote:
>
>> with MSVC what is used instead of dynloader.h is
>> port/dynloader/win32.h.
>
> Seems like a good catch - AFAICS, what happens with port/dynloader
> is that for 12 different OSes, there's an .h file there to
> be copied _renamed to dynloader.h_ into the build tree, and a .c
> file expecting similar treatment, and that the problem that kicked
> off this whole thread was just that the windows build process (and
> only the windows build process) was neglecting to do that.

Yep, via ./configure.

> And so I was pretty sure the fix was to make the windows build do
> that, and then it would be doing the same thing as the other eleven,
> but I just looked at Bruce's patch more closely and it does seem to
> be doing something else instead.

Bruce's patch was incomplete; it forgot to copy the header file to
include/, so installation actually failed. That step is the equivalent of
configure, but for MSVC.

>> Instead of this patch I would be inclined to
>> remove the #define stuff with dynloader.h that uses WIN32_ONLY_COMPILER
>> (see for example dfmgr.c) and just copy the header in include/. This
>> way we use the same header for all platforms.
>
> But this part I'm not sure I follow (and maybe I don't need to follow
> it, you guys know your code better than I do) ... in this whole thread
> haven't we been talking about just making the windows build copy its
> port/dynloader files the same way the other eleven platforms do, because
> it wasn't, and that seemed to be an omission, and worth not continuing
> to omit?

That's not a mandatory fix, but I think we had better do it. As long
as dynloader.h is copied into include/, there is no need to keep the
tweak in dfmgr.c to include the definitions of those routines.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Keyword classifications

2016-01-04 Thread Tom Lane
Michael Paquier  writes:
> On Sat, Jan 2, 2016 at 4:06 AM, Tom Lane  wrote:
>> The grammar fixes seem like a good thing to do in the long run, too,
>> but there's little need to risk back-patching it since accepting
>> col_name_keywords without quoting would be mostly a convenience issue.

> A different angle of attack is to flatten the argument quotes directly
> in reloptions.c:
> http://www.postgresql.org/message-id/CAB7nPqTpdGLqLTxuGhBC2GabGNiFRAtLjFbxu=agy1rx_dg...@mail.gmail.com
> But you did not like that :p

It seemed pretty messy.  There is nothing very wrong with the convention
that pg_class.reloptions is an array of "name=value" entries with both
name and value being taken literally.  The only thing that rule excludes
is that the option name cannot include an "=", which is a restriction that
bothers me not at all.

The dumped form of reloptions needs to meet the grammar restrictions on
what can be in WITH, but that's really a separate question.

The bug we had was that pg_dump and ruleutils.c weren't considering that
the rules might be different for the two representations.  Yeah, you could
fix it by insisting that the rules be identical, but I don't really find
that cleaner (and it could not be a back-patchable fix for existing
databases, anyway).

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER TABLE behind-the-scenes effects' CONTEXT

2016-01-04 Thread Simon Riggs
On 4 January 2016 at 20:44, Alvaro Herrera  wrote:


> Maybe
> there are more ALTER TABLE subcommands that should be setting something
> up?  In cases where multiple subcommands are being run, it might be
> useful to see which one caused a certain error message.
>

I like the patch.

We should have a message for each subcommand, since there are many that run
for a long time and we support the optimization of allowing many
subcommands together at once.

There should also be a comment about making name a requirement for any
subcommand.


> I think some additional tests wouldn't hurt.
>

Each subcommand message should be generated at least once in tests.


> I await feedback from Simon Riggs, who set himself up as reviewer a
> couple of days ago.  Simon, do you also intend to be committer?  If so,
> please mark yourself as such in the commitfest app.
>

Happy to be the committer on this.

-- 
Simon Riggshttp://www.2ndQuadrant.com/

PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


Re: [HACKERS] dynloader.h missing in prebuilt package for Windows?

2016-01-04 Thread Michael Paquier
On Mon, Jan 4, 2016 at 10:06 AM, Bruce Momjian  wrote:
> On Mon, Jan  4, 2016 at 12:59:26PM -0500, Tom Lane wrote:
>> Bruce Momjian  writes:
>> > On Sun, Jan  3, 2016 at 12:35:02PM -0500, Tom Lane wrote:
>> >> If we're willing to allow 9.4.6 to install different files than 9.4.5
>> >> does, I don't see why it's a problem for 9.5.1.  But having said that,
>> >> I agree that this seems pretty low-risk, and so IMO we might as well
>> >> ship it sooner not later.
>>
>> > Well, as I said, I can't test the patch, which made me lean toward
>> > 9.5.1.
>>
>> That's what the buildfarm is for ... but ...
>>
>> I would have been fine with you pushing this yesterday, but now it's
>> too late to get a buildfarm cycle on it.  Please hold for 9.5.1.
>
> Oh, I forgot about the buildfarm testing.  Good point.  OK, hold for
> 9.5.1.

The patch would put the buildfarm in red as it is incomplete anyway,
with MSVC what is used instead of dynloader.h is
port/dynloader/win32.h. Instead of this patch I would be inclined to
remove the #define stuff with dynloader.h that uses WIN32_ONLY_COMPILER
(see for example dfmgr.c) and just copy the header in include/. This
way we use the same header for all platforms.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Accessing non catalog table in backend

2016-01-04 Thread Tom Lane
Atri Sharma  writes:
> I fail to see the relevance of which node is getting evaluated (its a Plan
> node BTW) for this question. The concern I had was around using SPI inside
> executor and its fail safety.

The code path executor -> PL function -> SPI certainly works, so
presumably omitting the intermediate PL function could still work.
Whether it's a good idea is another question entirely.  I do not
offhand see a good reason why knowledge of non-system tables should
exist in the core executor; so what is the need to use SPI?

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Accessing non catalog table in backend

2016-01-04 Thread Atri Sharma
On Tue, Jan 5, 2016 at 9:54 AM, Amit Langote 
wrote:

> On 2016/01/05 3:53, Atri Sharma wrote:
> > I was wary to use SPI inside the executor for node evaluation functions.
> > Does it seem safe?
>
> What are "node evaluation functions"? Are they "Plan" nodes or "Expr" nodes
> that you are talking about? I guess you'd know to use ExecProcNode() or
> ExecEvalExpr() for them, respectively.
>
> Thanks,
> Amit
>
>
>
I fail to see the relevance of which node is getting evaluated (its a Plan
node BTW) for this question. The concern I had was around using SPI inside
executor and its fail safety.

Regards,

Atri


Re: [HACKERS] Some 9.5beta2 backend processes not terminating properly?

2016-01-04 Thread Andres Freund
On 2016-01-04 10:20:43 -0500, Tom Lane wrote:
> I'm slightly worried about breaking 3rd-party code that might be using
> recv() and somehow expecting the current behavior.  However, it's equally
> arguable that such code would have Windows-specific problems that would be
> fixed by the patch.

I don't really see how somebody could validly depend on the old
behaviour, given that it always was Windows-only, and there only with
nonblocking sockets. It seems more likely to be an undiscovered bug for
such a hypothetical user.

I also think there are rather few drivers / clients actually shutting down
sockets. Here it AFAICS really was a more or less unintentional side
effect.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Some 9.5beta2 backend processes not terminating properly?

2016-01-04 Thread Magnus Hagander
On Mon, Jan 4, 2016 at 4:20 PM, Tom Lane  wrote:

> Andres Freund  writes:
> > I wonder if we ought to backport this further: e.g. walsender
> > continously uses nonblocking sockets via pq_getbyte_if_available(). On
> > the other hand I can't immediately see a problem with that, besides
> > differing messages on windows/the rest of the world.
>
> I'm slightly worried about breaking 3rd-party code that might be using
> recv() and somehow expecting the current behavior.  However, it's equally
> arguable that such code would have Windows-specific problems that would be
> fixed by the patch.  Now that we've seen a successful round of buildfarm
> results, I'd be okay with back-patching 90e61df8 personally.
>
> Any other opinions out there?
>

Maybe hold off until the release with the new code has been out for a while,
but make sure we get it into the next set of minors? That'll give us at
least some real-world deployment to notice any issues with it?


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] Avoiding pin scan during btree vacuum

2016-01-04 Thread Tom Lane
Andres Freund  writes:
> On 2016-01-03 15:40:01 +, Simon Riggs wrote:
>> I'm happy with this being a simple patch now, not least because I would
>> like to backpatch this to 9.4 where catalog scans became MVCC.
>> 
>> A backpatch is warranted because it is a severe performance issue with
>> replication and we can fix that before 9.5 hits the streets.
>> 
>> I'll be doing some more testing and checking, so not in a rush.

> This seems like a mighty subtle thing to backpatch. If we really want to
> go there, ISTM that the relevant code should stew in an unreleased
> branch for a while, before being backpatched.

I'm definitely -1 on back-patching such a thing.  Put it in HEAD for
awhile.  If it survives six months or so then we could discuss it again.

regards, tom lane




Re: [HACKERS] Some 9.5beta2 backend processes not terminating properly?

2016-01-04 Thread Tom Lane
Magnus Hagander  writes:
> On Mon, Jan 4, 2016 at 4:20 PM, Tom Lane  wrote:
>> I'm slightly worried about breaking 3rd-party code that might be using
>> recv() and somehow expecting the current behavior.  However, it's equally
>> arguable that such code would have Windows-specific problems that would be
>> fixed by the patch.  Now that we've seen a successful round of buildfarm
>> results, I'd be okay with back-patching 90e61df8 personally.
>> 
>> Any other opinions out there?

> Maybe hold off until the release with the new code has been out for a while,
> but make sure we get it into the next set of minors? That'll give us at
> least some real-world deployment to notice any issues with it?

Well, we could push it now before we forget about it, and we'd still have
exactly that result, given that the next set of minors are scheduled for
early February.  If there is anything badly wrong we should hear about
it from early adopters of 9.5.

regards, tom lane




Re: [HACKERS] comment typo in RewindTest.pm

2016-01-04 Thread Robert Haas
On Mon, Jan 4, 2016 at 8:08 AM, Masahiko Sawada  wrote:
> Attached patch fixes the comment typo in RewindTest.pm.
> Please find it attached.

Committed and back-patched to 9.5.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Some 9.5beta2 backend processes not terminating properly?

2016-01-04 Thread Tom Lane
Robert Haas  writes:
> OK, well, if the consensus is in favor of a back-patch, so be it.  It
> seems a little strange to me to back-patch a commit that doesn't fix
> anything, but I just work here.

Well, it's true that we can't point to specific field reports and say
that this will fix those.  But it's not like our Windows port is so
rock-solid-reliable that we should give it the benefit of the doubt
about existing behaviors being correct.  We do know that the code path
in question is used in previous branches --- we put it there for a
reason --- and I think it's probably possible that it gets exercised
in corner cases, even pre-9.5.

regards, tom lane




Re: [HACKERS] Bug in MergeAttributesIntoExisting() function.

2016-01-04 Thread Amit Langote
Hi,

On Mon, Jan 4, 2016 at 8:11 PM, amul sul  wrote:
> Hi,
>
> In inheritance, a child column's pg_attribute.attislocal flag is not getting
> updated if it is inherited using ALTER TABLE  INHERIT .
>
> Due to this, if we try to drop column(s) from the parent table, they do not
> get dropped from the child.
> Attached herewith is a quick patch fixing this issue.
>
>
> --Demonstration:
> --
> CREATE TABLE p1 (a int , b int, c int, d int);
>
> CREATE TABLE c1 () inherits (p1);CREATE TABLE c2 (a int , b int, c int, d 
> int);
>
>
> --Drop parent's column
> ALTER TABLE p1 DROP COLUMN b;
> ALTER TABLE p1 DROP COLUMN c;
> ALTER TABLE p1 DROP COLUMN d;
>
>
> postgres=# \d p1
>   Table "public.p1"
>  Column |  Type   | Modifiers
> +-+---
>  a  | integer |
> Number of child tables: 2 (Use \d+ to list them.)
>
> postgres=# \d c1
>   Table "public.c1"
>  Column |  Type   | Modifiers
> +-+---
>  a  | integer |
> Inherits: p1
>
> postgres=# \d c2
>   Table "public.c2"
>  Column |  Type   | Modifiers
> +-+---
>  a  | integer |
>  b  | integer |
>  c  | integer |
>  d  | integer |
> Inherits: p1
>
>
> --
> You can see the columns are not dropped from the child table c2, which we
> inherited using the ALTER command.

I'm afraid the patched behavior of MergeAttributesIntoExisting() would
be inconsistent with MergeAttributes(). For example, try with the
following:

CREATE TABLE c1(b int) INHERITS(p1);

In this case, MergeAttributes() would cause 'b' to be defined to be a
local attribute (ie, with attislocal = true) and hence would not be
dropped unlike c2 which the patched behavior would cause to be
dropped.

Am I missing something?

Thanks,
Amit




Re: [HACKERS] Bug in MergeAttributesIntoExisting() function.

2016-01-04 Thread Tom Lane
amul sul  writes:
> In inheritance, a child column's pg_attribute.attislocal flag is not getting
> updated if it is inherited using ALTER TABLE  INHERIT .

I think this patch is wrong and you have broken the intended behavior.
It's a bit hard to tell though because your example is confused.

> CREATE TABLE p1 (a int , b int, c int, d int);

> CREATE TABLE c1 () inherits (p1);CREATE TABLE c2 (a int , b int, c int, d 
> int);

> --Drop parent's column
> ALTER TABLE p1 DROP COLUMN b;
> ALTER TABLE p1 DROP COLUMN c;
> ALTER TABLE p1 DROP COLUMN d;

> postgres=# \d p1
>   Table "public.p1"
>  Column |  Type   | Modifiers 
> +-+---
>  a  | integer | 
> Number of child tables: 2 (Use \d+ to list them.)

Say what?  At this point only c1 is a child of p1.  I assume you've
left something out, either an ALTER INHERIT or an INHERITS clause
in CREATE TABLE c2.

Either way, however, the way you declared c2, it has an independent
local definition of all four columns, and so they should not go away
even if the parent's columns go away.  This is exactly the purpose
that attislocal was created to serve, and your patch destroys it.

regards, tom lane




Re: [HACKERS] Avoiding pin scan during btree vacuum

2016-01-04 Thread Robert Haas
On Mon, Jan 4, 2016 at 10:30 AM, Tom Lane  wrote:
>> This seems like a might subtle thing to backpatch. If we really want to
>> go there, ISTM that the relevant code should stew in an unreleased
>> branch for a while, before being backpatched.
>
> I'm definitely -1 on back-patching such a thing.  Put it in HEAD for
> awhile.  If it survives six months or so then we could discuss it again.

I agree with Tom.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Some 9.5beta2 backend processes not terminating properly?

2016-01-04 Thread Tom Lane
Andres Freund  writes:
> On 2016-01-04 10:35:12 -0500, Robert Haas wrote:
>> If we don't know of a specific problem that would be fixed by
>> back-patching this commit to pre-9.5 branches, and it seems like we
>> don't, then I don't really see much upside to back-patching it.  I
>> mean, yeah, we think that this is wrong because we think we know that
>> the behavior of Windows is different than what we thought when the
>> code was written.  But if we were wrong then, we could be wrong now,
>> too.  If so, it would be better to only have broken 9.5.

> I think it always was just a typo, given that code a few lines down in the
> same function, added by the same commit, treated that case differently.

And, indeed, it was only because that code further down handled the case
correctly that nobody noticed for so long.

regards, tom lane




Re: [HACKERS] pgsql: Further tweaking of print_aligned_vertical().

2016-01-04 Thread Greg Stark
On Mon, Jan 4, 2016 at 6:24 AM, Michael Paquier
 wrote:
> Arg, actually it is not:
> https://wiki.postgresql.org/wiki/PostgreSQL_Conference_Europe_Talks_2015
> Greg could you add it there?

No, apparently that page is restricted to only some users.

I used plenty of images I pulled off the net without regard for
copyright so I hesitated to put it up. I suppose that's par for the
course with these kinds of presentations. In any case it was just a
quick lightning talk I threw together to describe a few days of mostly
waiting around for builds to finish. Here are the two lightning talks
on Google:

VAX (simh):
https://docs.google.com/presentation/d/1PG-bMU4WiS1pjtBwRvH4cE-nN9y5gj8ZLCO7KMrlvYg/view

Fuzzer:
https://docs.google.com/presentation/d/12Dd9Bhcugkdi2w0ye4T1fy9ccjconJEz9cNthdeyH7k/view

-- 
greg




Re: [HACKERS] Some 9.5beta2 backend processes not terminating properly?

2016-01-04 Thread Robert Haas
On Mon, Jan 4, 2016 at 10:22 AM, Magnus Hagander  wrote:
> On Mon, Jan 4, 2016 at 4:20 PM, Tom Lane  wrote:
>> Andres Freund  writes:
>> > I wonder if we ought to backport this further: e.g. walsender
>> > continously uses nonblocking sockets via pq_getbyte_if_available(). On
>> > the other hand I can't immediately see a problem with that, besides
>> > differing messages on windows/the rest of the world.
>>
>> I'm slightly worried about breaking 3rd-party code that might be using
>> recv() and somehow expecting the current behavior.  However, it's equally
>> arguable that such code would have Windows-specific problems that would be
>> fixed by the patch.  Now that we've seen a successful round of buildfarm
>> results, I'd be okay with back-patching 90e61df8 personally.
>>
>> Any other opinions out there?
>
> Maybe holdoff until the release with the new code has been out for a while,
> but make sure we get it into the next set of minors? That'll give us at
> least some real world deployment to notice any issues with it?

If we don't know of a specific problem that would be fixed by
back-patching this commit to pre-9.5 branches, and it seems like we
don't, then I don't really see much upside to back-patching it.  I
mean, yeah, we think that this is wrong because we think we know that
the behavior of Windows is different than what we thought when the
code was written.  But if we were wrong then, we could be wrong now,
too.  If so, it would be better to only have broken 9.5.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Some 9.5beta2 backend processes not terminating properly?

2016-01-04 Thread Andres Freund
On 2016-01-04 10:35:12 -0500, Robert Haas wrote:
> If we don't know of a specific problem that would be fixed by
> back-patching this commit to pre-9.5 branches, and it seems like we
> don't, then I don't really see much upside to back-patching it.  I
> mean, yeah, we think that this is wrong because we think we know that
> the behavior of Windows is different than what we thought when the
> code was written.  But if we were wrong then, we could be wrong now,
> too.  If so, it would be better to only have broken 9.5.

I think it always was just a typo, given that code a few lines down in the
same function, added by the same commit, treated that case differently.

Anyway, I don't have a particularly strong opinion here, so ...




Re: [HACKERS] Some 9.5beta2 backend processes not terminating properly?

2016-01-04 Thread Robert Haas
On Mon, Jan 4, 2016 at 10:49 AM, Tom Lane  wrote:
> Andres Freund  writes:
>> On 2016-01-04 10:35:12 -0500, Robert Haas wrote:
>>> If we don't know of a specific problem that would be fixed by
>>> back-patching this commit to pre-9.5 branches, and it seems like we
>>> don't, then I don't really see much upside to back-patching it.  I
>>> mean, yeah, we think that this is wrong because we think we know that
>>> the behavior of Windows is different than what we thought when the
>>> code was written.  But if we were wrong then, we could be wrong now,
>>> too.  If so, it would be better to only have broken 9.5.
>
>> I think it always was just a typo, given that code a few lines down in the
>> same function, added by the same commit, treated that case differently.
>
> And, indeed, it was only because that code further down handled the case
> correctly that nobody noticed for so long.

OK, well, if the consensus is in favor of a back-patch, so be it.  It
seems a little strange to me to back-patch a commit that doesn't fix
anything, but I just work here.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Bug in MergeAttributesIntoExisting() function.

2016-01-04 Thread Robert Haas
On Mon, Jan 4, 2016 at 9:28 AM, Rushabh Lathia  wrote:
> On Mon, Jan 4, 2016 at 4:41 PM, amul sul  wrote:
>> Hi,
>> In inheritance, a child column's pg_attribute.attislocal flag is not getting
>> updated if it is inherited using ALTER TABLE  INHERIT .
>>
>> Due to this, if we try to drop column(s) from the parent table, they do not
>> get dropped from the child.
>> Attached herewith is quick patch fixing this issue.
> Seems like you missed following command in the demonstration test:
>
> ALTER TABLE c2 INHERIT p1;
>
>>
>> You can see columns are not dropped from child c2 table, which we have
>> inherited using ALTER command.
>
> I took a quick look at this and did some testing. Patch looks good to me.
> ALTER TABLE INHERIT is missing the attislocal = false check.

This is not a bug.  ALTER TABLE .. INHERIT isn't supposed to set
attislocal = false.  attislocal tracks whether there was a local copy
of the attribute definition originally - here, the answer is yes, even
after the ALTER TABLE .. INHERIT.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
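The rule Robert describes can be modelled in miniature. The sketch below is illustrative Python, not PostgreSQL internals: each child is reduced to a map of column name to attislocal, and a parent-side DROP removes a child column only when it was not locally declared — which is why ALTER TABLE .. INHERIT must not clear the flag.

```python
# Toy model (illustrative, not backend code) of attislocal semantics:
# a column dropped on the parent survives on the child iff the child
# declared it locally (attislocal = true).

def drop_parent_column(child_columns, col):
    """Apply a parent-side DROP COLUMN to a child's {name: attislocal} map."""
    return {name: attislocal
            for name, attislocal in child_columns.items()
            if name != col or attislocal}

# c1: created empty, inherits everything from p1 -> nothing is local
c1 = {"a": False, "b": False, "c": False, "d": False}
# c2: declared all four columns itself, then ALTER TABLE c2 INHERIT p1
c2 = {"a": True, "b": True, "c": True, "d": True}

for col in ("b", "c", "d"):           # ALTER TABLE p1 DROP COLUMN ...
    c1 = drop_parent_column(c1, col)  # inherited-only columns go away
    c2 = drop_parent_column(c2, col)  # locally declared columns stay
```

Run against the thread's demonstration, this reproduces the \d output upthread: c1 ends up with only column a, while c2 keeps all four.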




Re: [HACKERS] [PATCH] Refactoring of LWLock tranches

2016-01-04 Thread Robert Haas
On Mon, Jan 4, 2016 at 1:20 AM, Amit Kapila  wrote:
> On Mon, Jan 4, 2016 at 4:49 AM, Robert Haas  wrote:
>> On Thu, Dec 31, 2015 at 3:36 AM, Amit Kapila 
>> wrote:
>> > LWLock *LWLockAssignFromTranche(const char *tranche_name) will
>> > assign a lock for the specified tranche.  This also ensures that no
>> > more than requested number of LWLocks can be assigned from a
>> > specified tranche.
>>
>> However, this seems like an awkward API, because if there are many
>> LWLocks you're going to need to look up the tranche name many times,
>> and then you're going to need to make an array of pointers to them
>> somehow, introducing a thoroughly unnecessary layer of indirection.
>> Instead, I suggest just having a function LWLockPadded
>> *GetLWLockAddinTranche(const char *tranche_name) that returns a
>> pointer to the base of the array.
>
> If we do it that way, then the user of the API needs to maintain the
> interlock guarantee that the requested number of locks matches the
> allocations done from that tranche; also, if it is not allocating all
> the LWLocks in one function, it needs to save the base address of the
> array and then allocate from it by maintaining some counter.
> I agree that looking up tranche names multiple times is not good
> if there are many lwlocks, but that is done at startup
> time and not at operation level.  Also, if we look at the extensions
> currently in contrib, just one of them allocates one
> LWLock in this fashion; there could be extensions apart from
> those in contrib, but I am not sure they require many
> LWLocks, so I feel it is better to give an API which is more
> convenient for the user to use.

Well, I agree with you that we should provide the most convenient API
possible, but I don't agree that yours is more convenient than the one
I proposed.  I think it is less convenient.  In most cases, if the
user asks for a large number of LWLocks, they aren't going to be each
for a different purpose - they will all be for the same purpose, like
one per buffer or one per hash partition.  The case where the user
wants to allocate 8 lwlocks from an extension, each for a different
purpose, and spread those allocations across a bunch of different
functions probably does not exist.  *Maybe* there is somebody
allocating lwlocks from an extension for unrelated purposes, but I'd
be willing to bet that, if so, all of those allocations happen one
right after the other in a single function, because anything else
would be completely nuts.

>> > Also I have retained NUM_USER_DEFINED_LWLOCKS in
>> > MainLWLockArray so that if any extensions want to assign a LWLock
>> > after startup, it can be used from this pool.  The tranche for such
>> > locks
>> > will be reported as main.
>>
>> I don't like this.  I think we should get rid of
>> NUM_USER_DEFINED_LWLOCKS altogether.  It's just an accident waiting to
>> happen, and I don't think there are enough people using LWLocks from
>> extensions that we'll annoy very many people by breaking backward
>> compatibility here.  If we are going to care about backward
>> compatibility, then I think the old-style lwlocks should go in their
>> own tranche, not main.  But personally, I think it'd be better to just
>> force a hard break and make people switch to using the new APIs.
>
> One point to consider before removing it altogether is that the new APIs
> provide a way to allocate LWLocks at startup time, and if any user
> wants to allocate a new LWLock after startup, it will not be possible
> with the proposed APIs.  Do you think for such cases we should ask
> the user to use it the way we have exposed, or do you have something else
> in mind?

Allocating a new lock after startup mostly doesn't work, because there
will be at most 3 available and sometimes less.  And even if it does
work, it often isn't very useful because you probably need some other
shared memory space as well - otherwise, what is the lock protecting?
And that space might not be available either.

I'd be interested in knowing whether there are cases where useful
extensions can be loaded apart from shared_preload_libraries because
of NUM_USER_DEFINED_LWLOCKS and our tendency to allocate a little bit
of extra shared memory, but my suspicion is that it rarely works out
and is too flaky to be useful to anybody.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Rationalizing Query.withCheckOptions

2016-01-04 Thread Stephen Frost
Tom,

* Tom Lane (t...@sss.pgh.pa.us) wrote:
> The RLS patch added this to struct Query:
> 
> List  *withCheckOptions;/* a list of WithCheckOption's, which
>  * are only added during rewrite and
>  * therefore are not written out as
>  * part of Query. */
> 
> As per the comment, this field is ignored by outfuncs.c and readfuncs.c.
> That's pretty horrid, because:
> 
> 1. Omitting the field from the output of pprint() seriously impedes
> debugging.  I could probably have wasted significantly fewer hours
> yesterday before figuring out fd195257 if pprint() hadn't been lying
> to me about what was in the query data structures.

It used to be.  It was removed in 4158cc3, per our discussion about
that being essentially dead code which wouldn't be exercised as it'll
always be NIL.

> 2. At some point, this will break parallel queries, or perhaps just
> cause them to silently fail to enforce RLS, because we depend on
> outfuncs/readfuncs to transfer node trees to parallel workers.
> (I think there's no live bug today because we only transfer Plans
> not Queries, but surely this is a loaded foot-gun.)

I saw Robert's commit and had a similar concern and similar conclusion
that I don't believe it's an issue today.

> 3. A node type as fundamental as Query ought not have weird gotchas
> like this.

> Furthermore, as far as I can see there's actually no point at all to the
> special exception.  As the comment says, the list does not get populated
> until rewrite time, which means that if outfuncs/readfuncs just process it
> normally, it would always be NIL in stored views anyway.

I don't have any problem with it being included as long as everyone's
fine with it ending up on disk as an always-NIL value.

> It's too late to change this in 9.5, but I think we should do so
> in HEAD.

I'd be happy to revert those bits of 4158cc37 and update the comment
accordingly.

Thanks!

Stephen




Re: [HACKERS] custom function for converting human readable sizes to bytes

2016-01-04 Thread Pavel Stehule
Hi

2016-01-04 12:46 GMT+01:00 Shulgin, Oleksandr 
:

> On Wed, Dec 30, 2015 at 8:28 PM, Pavel Stehule 
> wrote:
>
>>
>>
>> 2015-12-30 17:33 GMT+01:00 Robert Haas :
>>
>>> On Mon, Dec 28, 2015 at 8:45 AM, Shulgin, Oleksandr
>>>  wrote:
>>> > I didn't check out earlier versions of this patch, but the latest one
>>> still
>>> > changes pg_size_pretty() to emit PB suffix.
>>> >
>>> > I don't think it is worth it to throw a number of changes together like
>>> > that.  We should focus on adding pg_size_bytes() first and make it
>>> > compatible with both pg_size_pretty() and existing GUC units: that is
>>> > support suffixes up to TB and make sure they have the meaning of
>>> powers of
>>> > 2^10, not 10^3.  Re-using the table present in guc.c would be a plus.
>>> >
>>> > Next, we could think about adding handling of PB suffix on input and
>>> output,
>>> > but I don't see a big problem if that is emitted as 1024TB or the user
>>> has
>>> > to specify it as 1024TB in a GUC or argument to pg_size_bytes(): an
>>> minor
>>> > inconvenience only.
>>>
>>> +1 to everything in this email.
>>>
>>
>> so I removed support for PB and SI units. Now the
>> memory_unit_conversion_table is shared.
>>
>
> Looks better, thanks.
>
> I'm not sure why there was a need to touch the regression test for
> pg_size_pretty():
>
> !10.5 | 10.5 bytes | -10.5 bytes
> !  1000.5 | 1000.5 bytes   | -1000.5 bytes
> !   100.5 | 977 kB | -977 kB
> !10.5 | 954 MB | -954 MB
> ! 1.5 | 931 GB | -931 GB
> !  1000.5 | 909 TB | -909 TB
>
>
fixed


> A nitpick, this loop:
>
> + while (*cp)
> + {
> + if ((isdigit(*cp) || *cp == '.') && ndigits < MAX_DIGITS)
> + digits[ndigits++] = *cp++;
> + else
> + break;
> + }
>
> would be a bit easier to parse if spelled as:
>
> + while (*cp && (isdigit(*cp) || *cp == '.') && ndigits < MAX_DIGITS)
> + digits[ndigits++] = *cp++;
>

fixed


>
> On the other hand, this seems to truncate the digits silently:
>
> + digits[ndigits] = '\0';
>
> I don't think we want that, e.g:
>
> postgres=# select pg_size_bytes('9223372036854775807.9');
> ERROR:  invalid unit "9"
> HINT:  Valid units for this parameter are "kB", "MB", "GB", and "TB".
>
> I think making a mutable copy of the input string and truncating it before
> passing to numeric_in() would make more sense--no need to hard-code
> MAX_DIGITS.  The same goes for hard-coding MAX_UNIT_LEN, e.g. compare the
> following two outputs:
>
> postgres=# select pg_size_bytes('1 KiB');
> ERROR:  invalid unit "KiB"
> HINT:  Valid units for this parameter are "kB", "MB", "GB", and "TB".
>
> postgres=# select pg_size_bytes('1024 bytes');
> ERROR:  invalid format
>
>
fixed


> I believe we should see a similar error message and a hint in the latter
> case.  (No, I don't think we should add support for 'bytes' as a unit, not
> even for "compatibility" with pg_size_pretty()--for one, I don't think it
> would be wise to expect pg_size_bytes() to be able to deparse *every*
> possible output produced by pg_size_pretty() as it's purpose is
> human-readable display; also, pg_size_pretty() can easily produce output
> that doesn't fit into bigint type, or is just negative)
>
> Code comments and doc change need proof-reading by a native English
> speaker, which I am not.
>


Regards

Pavel


>
> --
> Alex
>
>
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
new file mode 100644
index 8ef9fce..6b921ae
*** a/doc/src/sgml/func.sgml
--- b/doc/src/sgml/func.sgml
*** postgres=# SELECT * FROM pg_xlogfile_nam
*** 17607,17612 
--- 17607,17615 
  pg_relation_size
 
 
+ pg_size_bytes
+
+
  pg_size_pretty
 
 
*** postgres=# SELECT * FROM pg_xlogfile_nam
*** 17677,17682 
--- 17680,17696 
 


+
+ pg_size_bytes(text)
+ 
+bigint
+
+  Converts a size in human-readable format with size units
+  into bytes.  The parameter is a case-insensitive string.  The
+  following units are supported: kB, MB, GB, TB.
+
+   
+   
 
  pg_size_pretty(bigint)
  
diff --git a/src/backend/utils/adt/dbsize.c b/src/backend/utils/adt/dbsize.c
new file mode 100644
index 2084692..ce97467
*** a/src/backend/utils/adt/dbsize.c
--- b/src/backend/utils/adt/dbsize.c
***
*** 25,30 
--- 25,31 
  #include "storage/fd.h"
  #include "utils/acl.h"
  #include "utils/builtins.h"
+ #include "utils/guc.h"
  #include "utils/numeric.h"
  #include "utils/rel.h"
  #include "utils/relfilenodemap.h"
*** pg_size_pretty_numeric(PG_FUNCTION_ARGS)
*** 700,705 
--- 701,811 
  }
  
  /*
+  * Convert human readable size to long int.
+  *
+  * Due to support for decimal values and case insensitivity of units
+  * a 
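The behavior under review can be sketched outside the backend. This is a hedged Python model of the discussed semantics — not the patch itself: binary (2^10-based) units as in the shared GUC conversion table, no PB suffix per the earlier consensus, and a minus sign copied through with the digits so negative sizes Just Work, as Robert suggests.

```python
# Hedged sketch of pg_size_bytes()-style parsing: a decimal number
# followed by an optional unit; units are binary powers (kB = 1024).
# Names and error wording here are illustrative only.
from decimal import Decimal

UNITS = {"": 1, "kb": 1024, "mb": 1024**2, "gb": 1024**3, "tb": 1024**4}

def pg_size_bytes(text):
    s = text.strip()
    # split the number part (digits, sign, decimal point) from the unit
    i = 0
    while i < len(s) and (s[i].isdigit() or s[i] in "+-."):
        i += 1
    num, unit = s[:i], s[i:].strip().lower()
    if not num or unit not in UNITS:
        raise ValueError('invalid size: "%s"' % text)
    return int(Decimal(num) * UNITS[unit])
```

Note that unsupported suffixes such as "KiB" are rejected rather than silently truncated, matching the review comments about the MAX_DIGITS/MAX_UNIT_LEN handling.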

Re: [HACKERS] Some 9.5beta2 backend processes not terminating properly?

2016-01-04 Thread Tom Lane
Andres Freund  writes:
> I wonder if we ought to backport this further: e.g. walsender
> continously uses nonblocking sockets via pq_getbyte_if_available(). On
> the other hand I can't immediately see a problem with that, besides
> differing messages on windows/the rest of the world.

I'm slightly worried about breaking 3rd-party code that might be using
recv() and somehow expecting the current behavior.  However, it's equally
arguable that such code would have Windows-specific problems that would be
fixed by the patch.  Now that we've seen a successful round of buildfarm
results, I'd be okay with back-patching 90e61df8 personally.

Any other opinions out there?

regards, tom lane




Re: [HACKERS] Building pg_xlogdump reproducibly

2016-01-04 Thread Andres Freund
Hi,

On 2016-01-04 15:59:46 +0100, Christoph Berg wrote:
> The list of objects used to link pg_xlogdump is coming from
> $(wildcard *desc.c) which returns them in filesystem order. This makes
> the build result depend on this ordering, yielding different
> compilation results.

> -RMGRDESCSOURCES = $(notdir $(wildcard 
> $(top_srcdir)/src/backend/access/rmgrdesc/*desc.c))
> +RMGRDESCSOURCES = $(sort $(notdir $(wildcard 
> $(top_srcdir)/src/backend/access/rmgrdesc/*desc.c)))
>  RMGRDESCOBJS = $(patsubst %.c,%.o,$(RMGRDESCSOURCES))

That's probably not the only non-deterministic rule in postgres, given
that nobody has paid attention to this so far.  At least the transform
modules added in 9.5 (hstore_plpython et al) look like they might have
similar issues.
Wonder if we should instead define a wildcard wrapper in
Makefile.global.in that does the sorting, including an explanation?

Andres




Re: [HACKERS] custom function for converting human readable sizes to bytes

2016-01-04 Thread Robert Haas
On Mon, Jan 4, 2016 at 10:17 AM, Pavel Stehule  wrote:
> [ new patch ]

+ case '-':
+ ereport(ERROR,
+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+  errmsg("size cannot be negative")));

Why not?  I bet if you copy any - sign to the buffer, this will Just Work.

+ if ( conv->base_unit == GUC_UNIT_KB &&

Whitespace.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] Building pg_xlogdump reproducibly

2016-01-04 Thread Christoph Berg
The list of objects used to link pg_xlogdump is coming from
$(wildcard *desc.c) which returns them in filesystem order. This makes
the build result depend on this ordering, yielding different
compilation results.

This patch fixes the reproducibility issue:

--- a/src/bin/pg_xlogdump/Makefile
+++ b/src/bin/pg_xlogdump/Makefile
@@ -12,7 +12,7 @@ OBJS = pg_xlogdump.o compat.o xlogreader
 
 override CPPFLAGS := -DFRONTEND $(CPPFLAGS)
 
-RMGRDESCSOURCES = $(notdir $(wildcard 
$(top_srcdir)/src/backend/access/rmgrdesc/*desc.c))
+RMGRDESCSOURCES = $(sort $(notdir $(wildcard 
$(top_srcdir)/src/backend/access/rmgrdesc/*desc.c)))
 RMGRDESCOBJS = $(patsubst %.c,%.o,$(RMGRDESCSOURCES))
 
 

Spotted by Debian's reproducible builds project:
https://wiki.debian.org/ReproducibleBuilds
https://reproducible-builds.org/

Christoph




Re: [HACKERS] Building pg_xlogdump reproducibly

2016-01-04 Thread David Fetter
On Mon, Jan 04, 2016 at 04:51:25PM +0100, Andres Freund wrote:
> Hi,
> 
> On 2016-01-04 15:59:46 +0100, Christoph Berg wrote:
> > The list of objects used to link pg_xlogdump is coming from
> > $(wildcard *desc.c) which returns them in filesystem order. This makes
> > the build result depend on this ordering, yielding different
> > compilation results.
> 
> > -RMGRDESCSOURCES = $(notdir $(wildcard $(top_srcdir)/src/backend/access/rmgrdesc/*desc.c))
> > +RMGRDESCSOURCES = $(sort $(notdir $(wildcard $(top_srcdir)/src/backend/access/rmgrdesc/*desc.c)))
> >  RMGRDESCOBJS = $(patsubst %.c,%.o,$(RMGRDESCSOURCES))
> 
> That's probably not the only non-deterministic rule in postgres, given
> that nobody has paid attention to it so far. At least the transform
> modules added in 9.5 (hstore_plpython et al) look like they might have
> similar issues.
> 
> Wonder if we should instead define a wildcard wrapper in
> Makefile.global.in that does the sorting, including an explanation?

That sounds like it will avert a lot of pain in the future, and the
sort overhead is negligible compared to the build time.

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
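
The wrapper Andres suggests could look something like the following sketch
(the macro name and its placement in Makefile.global.in are my assumptions,
not part of the posted patch):

```make
# Sorted wrapper around $(wildcard): wildcard returns files in
# filesystem order, which varies between machines, so sort the
# result to keep link lines (and thus build outputs) reproducible.
sorted_wildcard = $(sort $(wildcard $(1)))

# Hypothetical usage in src/bin/pg_xlogdump/Makefile:
# RMGRDESCSOURCES = $(notdir $(call sorted_wildcard,$(top_srcdir)/src/backend/access/rmgrdesc/*desc.c))
```

Callers would then use $(call sorted_wildcard,...) instead of a bare
$(wildcard ...), so the ordering fix lives in one place with an explanation.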




Re: [HACKERS] custom function for converting human readable sizes to bytes

2016-01-04 Thread Pavel Stehule
2016-01-04 16:51 GMT+01:00 Robert Haas :

> On Mon, Jan 4, 2016 at 10:17 AM, Pavel Stehule 
> wrote:
> > [ new patch ]
>
> + case '-':
> + ereport(ERROR,
> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
> +  errmsg("size cannot be negative")));
>
> Why not?  I bet if you copy any - sign to the buffer, this will Just Work.
>

true, fixed


>
> + if ( conv->base_unit == GUC_UNIT_KB &&
>
> Whitespace.
>

I don't see it ??

Regards

Pavel


>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
new file mode 100644
index 8ef9fce..6b921ae
*** a/doc/src/sgml/func.sgml
--- b/doc/src/sgml/func.sgml
*** postgres=# SELECT * FROM pg_xlogfile_nam
*** 17607,17612 
--- 17607,17615 
  pg_relation_size
 
 
+ pg_size_bytes
+
+
  pg_size_pretty
 
 
*** postgres=# SELECT * FROM pg_xlogfile_nam
*** 17677,17682 
--- 17680,17696 
 


+
+ pg_size_bytes(text)
+ 
+bigint
+
+  Converts a size in human-readable format with size units
+  into bytes.  The parameter is a case-insensitive string.  The
+  following units are supported: kB, MB, GB, TB.
+
+   
+   
 
  pg_size_pretty(bigint)
  
diff --git a/src/backend/utils/adt/dbsize.c b/src/backend/utils/adt/dbsize.c
new file mode 100644
index 2084692..0b7af3c
*** a/src/backend/utils/adt/dbsize.c
--- b/src/backend/utils/adt/dbsize.c
***
*** 25,30 
--- 25,31 
  #include "storage/fd.h"
  #include "utils/acl.h"
  #include "utils/builtins.h"
+ #include "utils/guc.h"
  #include "utils/numeric.h"
  #include "utils/rel.h"
  #include "utils/relfilenodemap.h"
*** pg_size_pretty_numeric(PG_FUNCTION_ARGS)
*** 700,705 
--- 701,808 
  }
  
  /*
+  * Convert a human-readable size with units to an int64 number of bytes.
+  *
+  * Because decimal values and case-insensitive units must be supported,
+  * the function parse_int cannot be used.
+  */
+ Datum
+ pg_size_bytes(PG_FUNCTION_ARGS)
+ {
+ 	text		   *arg = PG_GETARG_TEXT_PP(0);
+ 	char		   *str = text_to_cstring(arg);
+ 	char	*strptr = str;
+ 	char		   *buffer;
+ 	char	*bufptr;
+ 	Numeric			num;
+ 	int64 result;
+ 
+ 	/* working buffer cannot be longer than original string */
+ 	buffer = (char *) palloc(VARSIZE_ANY_EXHDR(arg) + 1);
+ 	bufptr = buffer;
+ 
+ 	/* Skip leading spaces */
+ 	while (isspace((unsigned char) *strptr))
+ 		strptr++;
+ 
+ 	switch (*strptr)
+ 	{
+ 		/* copy an optional sign */
+ 		case '+':
+ 		case '-':
+ 			*bufptr++ = *strptr++;
+ 			break;
+ 	}
+ 
+ 	/* copy digits to working buffer */
+ 	while (*strptr && (isdigit(*strptr) || *strptr == '.'))
+ 		*bufptr++ = *strptr++;
+ 	*bufptr = '\0';
+ 
+ 	/* don't allow empty string */
+ 	if (*buffer == '\0')
+ 			ereport(ERROR,
+ 	(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ 	 errmsg("\"%s\" is not number", str)));
+ 
+ 	num = DatumGetNumeric(DirectFunctionCall3(numeric_in,
+ 			CStringGetDatum(buffer), 0, -1));
+ 
+ 	/* allow whitespace between integer and unit */
+ 	while (isspace(*strptr))
+ 		strptr++;
+ 
+ 	/* Handle possible unit */
+ 	if (*strptr != '\0')
+ 	{
+ 		int		multiplier;
+ 		Numeric			mul_num;
+ 		const char	*hintmsg;
+ 		const char *unitstr = strptr;
+ 
+ 		bufptr = buffer;
+ 
+ 		/* copy chars to buffer and stop on space */
+ 		while (*strptr && !isspace(*strptr))
+ 			*bufptr++ = *strptr++;
+ 		*bufptr = '\0';
+ 
+ 		/*
+ 		 * Use buffer as the unit if no non-space chars remain,
+ 		 * else use the original unit string.
+ 		 */
+ 		while (isspace(*strptr))
+ 			strptr++;
+ 		if (*strptr == '\0')
+ 			unitstr = buffer;
+ 
+ 		if (!parse_memory_unit(unitstr, &multiplier, &hintmsg))
+ 			ereport(ERROR,
+ 	(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ 	 errmsg("invalid unit: \"%s\"", unitstr),
+ 	 errhint("%s", _(hintmsg))));
+ 
+ 		/*
+ 		 * The multiplier is in kB units, so multiply it by 1024 to get
+ 		 * bytes before use.
+ 		 */
+ 		mul_num = DatumGetNumeric(DirectFunctionCall1(int8_numeric,
+ 		Int64GetDatum(multiplier * 1024L)));
+ 
+ 		num = DatumGetNumeric(DirectFunctionCall2(numeric_mul,
+ 			NumericGetDatum(mul_num),
+ 			NumericGetDatum(num)));
+ 	}
+ 
+ 	result = DatumGetInt64(DirectFunctionCall1(numeric_int8, NumericGetDatum(num)));
+ 
+ 	pfree(buffer);
+ 	pfree(str);
+ 
+ 	PG_RETURN_INT64(result);
+ }
+ 
+ /*
   * Get the filenode of a relation
   *
   * This is expected to be used in queries like
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
new file mode 100644
index 38ba82f..00021fd
*** a/src/backend/utils/misc/guc.c
--- b/src/backend/utils/misc/guc.c
*** convert_from_base_unit(int64 base_value,
*** 5238,5243 
--- 5238,5272 
  
  
  /*
+  * Parse value as some known 

Re: [HACKERS] custom function for converting human readable sizes to bytes

2016-01-04 Thread Shulgin, Oleksandr
On Mon, Jan 4, 2016 at 6:03 PM, Pavel Stehule 
wrote:
>
> 2016-01-04 17:48 GMT+01:00 Shulgin, Oleksandr <
oleksandr.shul...@zalando.de>:
>>
>> On Mon, Jan 4, 2016 at 4:51 PM, Robert Haas 
wrote:
>>>
>>> On Mon, Jan 4, 2016 at 10:17 AM, Pavel Stehule 
wrote:
>>> > [ new patch ]
>>>
>>> + case '-':
>>> + ereport(ERROR,
>>> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
>>> +  errmsg("size cannot be negative")));
>>>
>>> Why not?  I bet if you copy any - sign to the buffer, this will Just
Work.
>>
>>
>> I'm also inclined to drop that explicit check for the empty string below
and let numeric_in() error out on it.  Does this look OK, or can it
confuse someone:
>>
>> postgres=# select pg_size_bytes('');
>> ERROR:  invalid input syntax for type numeric: ""
>>
>> ?
>>
>>> + if ( conv->base_unit == GUC_UNIT_KB &&
>>
>>
>> Between "(" and "conv->..." I believe.
>
>
> both fixed

Hm...

> + switch (*strptr)
> + {
> + /* ignore plus symbol */
> + case '+':
> + case '-':
> + *bufptr++ = *strptr++;
> + break;
> + }

Well, to that extent you don't need any special checks; I'm just not sure
whether the reported error message is misleading if we let numeric_in()
handle all the errors.  At least it can cope with leading spaces, +/- and
empty input quite well.

--
Alex


Re: [HACKERS] custom function for converting human readable sizes to bytes

2016-01-04 Thread Robert Haas
On Mon, Jan 4, 2016 at 11:48 AM, Shulgin, Oleksandr
 wrote:
> On Mon, Jan 4, 2016 at 4:51 PM, Robert Haas  wrote:
>>
>> On Mon, Jan 4, 2016 at 10:17 AM, Pavel Stehule 
>> wrote:
>> > [ new patch ]
>>
>> + case '-':
>> + ereport(ERROR,
>> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
>> +  errmsg("size cannot be negative")));
>>
>> Why not?  I bet if you copy any - sign to the buffer, this will Just Work.
>
>
> I'm also inclined on dropping that explicit check for empty string below and
> let numeric_in() error out on that.  Does this look OK, or can it confuse
> someone:
>
> postgres=# select pg_size_bytes('');
> ERROR:  invalid input syntax for type numeric: ""

I think that's a pretty bad error message.  I mean, the user is
calling a function that takes text as an input data type.  So, where's
numeric involved?

I'm also kind of wondering what the intended use case for this
function is.  Why do we want it?  Do we want it?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Some 9.5beta2 backend processes not terminating properly?

2016-01-04 Thread Robert Haas
On Mon, Jan 4, 2016 at 10:59 AM, Tom Lane  wrote:
> Robert Haas  writes:
>> OK, well, if the consensus is in favor of a back-patch, so be it.  It
>> seems a little strange to me to back-patch a commit that doesn't fix
>> anything, but I just work here.
>
> Well, it's true that we can't point to specific field reports and say
> that this will fix those.  But it's not like our Windows port is so
> rock-solid-reliable that we should give it the benefit of the doubt
> about existing behaviors being correct.  We do know that the code path
> in question is used in previous branches --- we put it there for a
> reason --- and I think it's probably possible that it gets exercised
> in corner cases, even pre-9.5.

Yeah, I'm just worried about collateral damage.  If you're convinced
that there won't be any, have at it.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] custom function for converting human readable sizes to bytes

2016-01-04 Thread Shulgin, Oleksandr
On Mon, Jan 4, 2016 at 6:14 PM, Robert Haas  wrote:

> On Mon, Jan 4, 2016 at 11:48 AM, Shulgin, Oleksandr
> >
> > postgres=# select pg_size_bytes('');
> > ERROR:  invalid input syntax for type numeric: ""
>
> I think that's a pretty bad error message.  I mean, the user is
> calling a function that takes text as an input data type.  So, where's
> numeric involved?
>

Is there a way to force CONTEXT output in the reported error?  I guess that
could help.

I'm also kind of wondering what the intended use case for this
> function is.  Why do we want it?  Do we want it?
>

As suggested above, a use case could be the following:

SELECT relname FROM pg_class WHERE pg_relation_size(oid) >
pg_size_bytes('100 GB');

I think it's neat and useful.

--
Alex
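
For readers following the thread, the parsing rules under discussion
(optional sign, decimal number, optional case-insensitive unit, whitespace
tolerated) can be sketched in Python.  This is only an illustration of the
intended semantics: the regex, the unit table, and the truncation to int
are my assumptions; the actual patch routes the arithmetic through
PostgreSQL's numeric code.

```python
import re

def pg_size_bytes(s):
    """Sketch of the proposed pg_size_bytes() parsing semantics."""
    # Unit multipliers assumed from the doc patch: kB, MB, GB, TB;
    # no unit means plain bytes.
    units = {"": 1, "kb": 1024, "mb": 1024 ** 2,
             "gb": 1024 ** 3, "tb": 1024 ** 4}
    # Optional sign, digits with optional decimal part, optional unit word.
    m = re.fullmatch(r"\s*([+-]?[0-9]+(?:\.[0-9]+)?)\s*([a-zA-Z]*)\s*", s)
    if m is None:
        raise ValueError('"%s" is not a valid size' % s)
    number, unit = m.groups()
    key = unit.lower()
    if key not in units:
        raise ValueError('invalid unit: "%s"' % unit)
    return int(float(number) * units[key])
```

Under these rules '1kB' gives 1024, '100 GB' gives 100 * 1024^3, and a
leading '-' simply flows through the numeric conversion, matching Robert's
"Just Work" observation.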


Re: [HACKERS] 2016-01 Commitfest

2016-01-04 Thread Alvaro Herrera
Here are the current numbers, now that the commitfest has actually
closed for business:

 Needs review: 79.
 Waiting on Author: 5.
 Ready for Committer: 6.
 Committed: 8.
Total: 98.

Of those RfC patches, one ("Default Roles") doesn't actually seem ready
to commit, since there's been some significant discussion regarding the
underlying design.  I'm setting it back to Needs Review while the
discussion continues.

Patch "Speedup timestamp/time/date output functions" in RfC status has a
committer assigned; I hope that one will be finished soon.

The bulk, of course, is still in need of input from reviewers, so let's
keep it moving!

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] remove wal_level archive

2016-01-04 Thread Alvaro Herrera
Peter Eisentraut wrote:
> So we've had several rounds of discussions about simplifying replication
> configuration in general and the wal_level setting in particular. [0][1]
>  Let's get something going.

I looked at this patch, which I think has got enough consensus that you
should just push forward with the proposed design -- in particular, just
remove one of archive or hot_standby values, not keep it as a synonym of
the other.  If we're counting votes, I prefer keeping hot_standby over
archive.

The patch is nicely compact and applies, with only some fuzz.

I agree with changing all parts that say "XYZ or higher" to enumerate
the possible values.

It may be a good idea to have a look at Michael Paquier's recovery test
framework ( also in this commitfest: https://commitfest.postgresql.org/8/438/ )
and see how that is affected by this patch.  Maybe the tests can find a
problem in this patch, and so perhaps you'd like to commit the tests
first, then this change on top.

I'm marking this as Ready for Committer, and setting you up as such for
this patch.  If you would prefer not to commit, let me know and I can do
so.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Re: pglogical_output - a general purpose logical decoding output plugin

2016-01-04 Thread Shulgin, Oleksandr
On Sun, Jan 3, 2016 at 7:21 PM, Tomasz Rybak  wrote:

> The following review has been posted through the commitfest application:
> make installcheck-world:  tested, passed
> Implements feature:   not tested
> Spec compliant:   not tested
> Documentation:   tested, failed
>
> Applies cleanly on current master
> (b416c0bb622ce5d33fdbec3bbce00451132f10ec).
>
> Builds without any problems on current Debian unstable (amd64 arch, GCC
> 5.3.1-4, glibc 2.21-6)
>

A make from an external build dir fails on install, suggested fix:

install: all
	$(MKDIR_P) '$(DESTDIR)$(includedir)'/pglogical_output
-	$(INSTALL_DATA) pglogical_output/hooks.h '$(DESTDIR)$(includedir)'/pglogical_output
+	$(INSTALL_DATA) $(top_srcdir)/contrib/pglogical_output/pglogical_output/hooks.h '$(DESTDIR)$(includedir)'/pglogical_output

--
Alex


Re: [HACKERS] custom function for converting human readable sizes to bytes

2016-01-04 Thread Pavel Stehule
2016-01-04 18:14 GMT+01:00 Robert Haas :

> On Mon, Jan 4, 2016 at 11:48 AM, Shulgin, Oleksandr
>  wrote:
> > On Mon, Jan 4, 2016 at 4:51 PM, Robert Haas 
> wrote:
> >>
> >> On Mon, Jan 4, 2016 at 10:17 AM, Pavel Stehule  >
> >> wrote:
> >> > [ new patch ]
> >>
> >> + case '-':
> >> + ereport(ERROR,
> >> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
> >> +  errmsg("size cannot be negative")));
> >>
> >> Why not?  I bet if you copy any - sign to the buffer, this will Just
> Work.
> >
> >
> > I'm also inclined on dropping that explicit check for empty string below
> and
> > let numeric_in() error out on that.  Does this look OK, or can it confuse
> > someone:
> >
> > postgres=# select pg_size_bytes('');
> > ERROR:  invalid input syntax for type numeric: ""
>
> I think that's a pretty bad error message.  I mean, the user is
> calling a function that takes text as an input data type.  So, where's
> numeric involved?
>

The last version is a little bit better:

postgres=# select pg_size_bytes('');
ERROR:  22023: "" is not number



>
> I'm also kind of wondering what the intended use case for this
> function is.  Why do we want it?  Do we want it?
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


Re: [HACKERS] Additional role attributes && superuser review

2016-01-04 Thread Robert Haas
On Mon, Jan 4, 2016 at 3:07 PM, Stephen Frost  wrote:
> I'm not sure it's entirely relevant now- I've outlined the reasoning in
> my email to Noah as a, hopefully, pretty comprehensive summary.  If that
> doesn't sway your minds then it seems unlikely that a reference to a
> thread from 6 months or a year ago would.  Further, as happens, other
> discussions were in person where I discussed the ideas with other
> hackers at conferences.  I got generally positive responses to the idea
> of default roles with specific sets of rights, which is the path that
> I've been on since.  As with most decisions, there was not a formal vote
> over the proposals for me to reference.  I do specifically recall the
> opinion that sets of privileges probably make more sense than granting
> out individual ones, from Tom, if I'm remembering correctly.

To be honest, that's not really my point.  You have cited off-list
discussions you've had over and over as a reason why you proceeded
down some particular path.  That is fine up to a point - I discuss
lots of things with people off-list, too - but a consensus that a
particular design is acceptable for commit has to mean the consensus
on this mailing list, and nothing else.  In seven years of reading
this mailing list, I can't remember a single other person on this
mailing list saying "I'm going to commit this because I talked to a
bunch of unspecified people at conferences I attended and they liked
it", but you've used essentially that rationale a couple of times now.

> For my 2c, I don't see a default role or two as creating terribly much
> clutter.

I don't believe any version of this proposal has involved only one or
two such roles.  They've all had at least four or five AFAICS.  Now
maybe that's still OK, but 4 or 5 > 1 or 2, and the number is only
likely to go up in the future.

> Changing all of our code that currently has internal checks to
> rely on the external checks and adjusting the permissions on the
> individual functions will be a fair bit of churn though.

I'm not sure it'll really be that bad, but if you have some evidence
that I'm wrong I'd like to hear it.

>> More
>> broadly, I'm not very convinced that even the roles which allow for
>> rights on multiple objects are going to meet with general approval.
>
> There's been rather little opposition to the idea, and support when I've
> discussed it.  Of course, that was before it got to the point where I
> was planning to commit it.  Perhaps there will be once it's actually in,
> or maybe not until it's in the wild.  In any case, I continue to feel,
> as others have, that we can make adjustments moving forward.

So, is this another case where the support is all in off-list fora and
thus invisible, or can you point to specific on-list discussions where
it was supported, and to the opinions offered in support?  I don't
really remember many opinions that were any more positive than "I
wouldn't be strongly opposed to this" or "If we're going to do this
then we ought to do it in X way".  I'm happy to be corrected if I'm
misrepresenting the record, but I'd characterize the overall reaction
to this proposal as tepid: nobody hated it, but nobody really loved it
either, and a bunch of mild concerns were offered.

>> There has been enough discussion of which roles should be created and
>> which things should be included in each one, and the overall tenor of
>> that discussion seems to be that different people have different
>> ideas.
>
> Michael had a question about pg_switch_xlog, but he appeared to
> reconsider that position after the subsequent discussion, which put us
> back to essentially the same proposal that I started with, I believe.  I
> don't recall terribly much other discussion or concern about what roles
> should be created or what should be included in each one, though I'd be
> happy to hear your thoughts on what you'd like to see.

Honestly, my vote is for creating only those predefined roles that are
clearly endorsed by three people with different employers, which I
currently believe to be true of none of the proposed roles.  It's not
even that I suspect you or anyone of ballot-stuffing; it's just that
people who work at different companies are likely to work with
different tools and those tools may have different requirements.  I
mean, people at 2ndQuadrant probably mostly use repmgr; people at
Crunchy probably like pgbackrest; people at OmniTI probably use
PITRtools; and EnterpriseDB employees are more likely than average to
be familiar with BART.  If several of those people come together and
say they all agree that predefined role X is perfect for the needs of
all of their respective tools, I'd consider that a really good sign
that we've hit on something that is of general utility.  Otherwise,
I'd just the authors of each tool specify the GRANT EXECUTE ON
FUNCTION lines necessary for their own tool and call it good.  I think
that's almost as convenient and a lot more flexible.
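
Concretely, the per-tool approach would amount to something like the
following sketch (the role name and grantee are illustrative only; the
exact GRANT lines would come from each tool's own documentation):

```sql
-- A tool defines a role and grants exactly the functions it needs,
-- instead of relying on a catch-all predefined role.
CREATE ROLE backup_tool NOLOGIN;
GRANT EXECUTE ON FUNCTION pg_rotate_logfile() TO backup_tool;
GRANT backup_tool TO some_operator;  -- hypothetical operator account
```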

What 

Re: [HACKERS] Additional role attributes && superuser review

2016-01-04 Thread Stephen Frost
* Robert Haas (robertmh...@gmail.com) wrote:
> On Mon, Jan 4, 2016 at 3:07 PM, Stephen Frost  wrote:
> > I'm not sure it's entirely relevant now- I've outlined the reasoning in
> > my email to Noah as a, hopefully, pretty comprehensive summary.  If that
> > doesn't sway your minds then it seems unlikely that a reference to a
> > thread from 6 months or a year ago would.  Further, as happens, other
> > discussions were in person where I discussed the ideas with other
> > hackers at conferences.  I got generally positive responses to the idea
> > of default roles with specific sets of rights, which is the path that
> > I've been on since.  As with most decisions, there was not a formal vote
> > over the proposals for me to reference.  I do specifically recall the
> > opinion that sets of privileges probably make more sense than granting
> > out individual ones, from Tom, if I'm remembering correctly.
> 
> To be honest, that's not really my point.  You have cited off-list
> discussions you've had over and over as a reason why you proceeded
> down some particular path.  That is fine up to a point - I discuss
> lots of things with people off-list, too - but a consensus that a
> particular design is acceptable for commit has to mean the consensus
> on this mailing list, and nothing else.  In seven years of reading
> this mailing list, I can't remember a single other person on this
> mailing list saying "I'm going to commit this because I talked to a
> bunch of unspecified people at conferences I attended and they liked
> it", but you've used essentially that rationale a couple of times now.

I've found consensus among folks on the lists for far more of my commits
than I've cited off-list discussions for.  Further, consensus on these
lists is largely in the quiet, which is why I go out of my way to
attempt to engage parties who may be interested when there *isn't* much
discussion on the lists.  I'm trying to do better than simply assuming
consensus based on no feedback to the contrary.  Had I assumed minimal
response meant consensus for this patch, as is typically the norm, this
patch would have been committed six months ago.  Instead, I tried to
engage people at conferences to ensure that there really was consensus
on the approach.

> > For my 2c, I don't see a default role or two as creating terribly much
> > clutter.
> 
> I don't believe any version of this proposal has involved only one or
> two such roles.  They've all had at least four or five AFAICS.  Now
> maybe that's still OK, but 4 or 5 > 1 or 2, and the number is only
> likely to go up in the future.

The 'role or two' was under the expectation that we'd still have default
roles for the more complicated cases (pg_backup, et al) and that we
would only be removing the default role of pg_switch_xlog and
pg_file_settings (the only two which an individual GRANT could replace).

> > Changing all of our code that currently has internal checks to
> > rely on the external checks and adjusting the permissions on the
> > individual functions will be a fair bit of churn though.
> 
> I'm not sure it'll really be that bad, but if you have some evidence
> that I'm wrong I'd like to hear it.

I implemented the discussed pg_dump support for ACLs and looked at the
changes required.  I may not be remembering it entirely, but it's not
that I've not looked at it.

> >> More
> >> broadly, I'm not very convinced that even the roles which allow for
> >> rights on multiple objects are going to meet with general approval.
> >
> > There's been rather little opposition to the idea, and support when I've
> > discussed it.  Of course, that was before it got to the point where I
> > was planning to commit it.  Perhaps there will be once it's actually in,
> > or maybe not until it's in the wild.  In any case, I continue to feel,
> > as others have, that we can make adjustments moving forward.
> 
> So, is this another case where the support is all in off-list fora and
> thus invisible, or can you point to specific on-list discussions where
> it was supported, and to the opinions offered in support?  I don't
> really remember many opinions that were any more positive than "I
> wouldn't be strongly opposed to this" or "If we're going to do this
> then we ought to do it in X way".  I'm happy to be corrected if I'm
> misrepresenting the record, but I'd characterize the overall reaction
> to this proposal as tepid: nobody hated it, but nobody really loved it
> either, and a bunch of mild concerns were offered.

I agree that this has largely been the on-list reaction.  To be fair,
it's been largely the off-list reaction also, which I've expressly
tried to seek out, as mentioned above.  I'm not asking anyone to love
it, I'm not entirely convinced it's lovable myself, but I do feel it's
useful and worth making an effort for.

> >> There has been enough discussion of which roles should be created and
> >> which things should be included in each one, and the overall tenor of

Re: [HACKERS] dynloader.h missing in prebuilt package for Windows?

2016-01-04 Thread Bruce Momjian
On Sun, Jan  3, 2016 at 12:35:02PM -0500, Tom Lane wrote:
> Robert Haas  writes:
> > On Dec 31, 2015, at 1:04 PM, Bruce Momjian  wrote:
> >> Let's hold this for 9.5.1 and all minor releases will get it at the same
> >> time.
> 
> > I vote for going ahead with this at once. It seems low risk, and having 
> > 9.5.1 install different files from 9.5.0 seems like an unnecessary 
> > annoyance.
> 
> If we're willing to allow 9.4.6 to install different files than 9.4.5
> does, I don't see why it's a problem for 9.5.1.  But having said that,
> I agree that this seems pretty low-risk, and so IMO we might as well
> ship it sooner not later.

Well, as I said, I can't test the patch, which made me lean toward
9.5.1.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +




Re: [HACKERS] dynloader.h missing in prebuilt package for Windows?

2016-01-04 Thread Tom Lane
Bruce Momjian  writes:
> On Sun, Jan  3, 2016 at 12:35:02PM -0500, Tom Lane wrote:
>> If we're willing to allow 9.4.6 to install different files than 9.4.5
>> does, I don't see why it's a problem for 9.5.1.  But having said that,
>> I agree that this seems pretty low-risk, and so IMO we might as well
>> ship it sooner not later.

> Well, as I said, I can't test the patch, which made me lean toward
> 9.5.1.

That's what the buildfarm is for ... but ...

I would have been fine with you pushing this yesterday, but now it's
too late to get a buildfarm cycle on it.  Please hold for 9.5.1.

regards, tom lane




Re: [HACKERS] dynloader.h missing in prebuilt package for Windows?

2016-01-04 Thread Bruce Momjian
On Mon, Jan  4, 2016 at 12:59:26PM -0500, Tom Lane wrote:
> Bruce Momjian  writes:
> > On Sun, Jan  3, 2016 at 12:35:02PM -0500, Tom Lane wrote:
> >> If we're willing to allow 9.4.6 to install different files than 9.4.5
> >> does, I don't see why it's a problem for 9.5.1.  But having said that,
> >> I agree that this seems pretty low-risk, and so IMO we might as well
> >> ship it sooner not later.
> 
> > Well, as I said, I can't test the patch, which made me lean toward
> > 9.5.1.
> 
> That's what the buildfarm is for ... but ...
> 
> I would have been fine with you pushing this yesterday, but now it's
> too late to get a buildfarm cycle on it.  Please hold for 9.5.1.

Oh, I forgot about the buildfarm testing.  Good point.  OK, hold for
9.5.1.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +




  1   2   >