Re: [HACKERS] Let PostgreSQL's On Schedule checkpoint write buffer smooth spread cycle by tuning IsCheckpointOnSchedule?

2016-01-17 Thread Fabien COELHO



Coming in late here, but I always thought the fact that the FPWs happen
mostly at the start of the checkpoint, and the checkpoint writes/fsyncs
happen mostly in the first half of the checkpoint period, was always
suboptimal, i.e. it would be nice if one of these was more active in the
second half of the checkpoint period.  I assume that is what is being
discussed here.


Yes, this is the subject of the thread.

On the one hand, whether it is the first half, the first quarter, or the
first tenth really depends on the actual load, so how much to rebalance
depends on that dynamic information. At the beginning there should be a
short spike for index pages which are quickly reused, and a longer spike
for data pages depending on the pattern of access and the size of the table.


On the other hand, the rebalancing also depends on the measure chosen to
track the overall progress, either WAL writing or time, and their
behavior is not the same, so this should be taken into account.
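
To make that concrete, here is a rough sketch of the kind of test
involved (an illustration of the idea only, not the actual
IsCheckpointOnSchedule() code in xlog.c):

    /*
     * Sketch: consider the checkpoint "on schedule" if the fraction of
     * buffers written so far is ahead of both the WAL-consumption and
     * the elapsed-time fractions, each scaled by the completion target.
     */
    static bool
    checkpoint_on_schedule(double written_frac, double wal_frac,
                           double time_frac, double target)
    {
        if (written_frac < wal_frac / target)
            return false;       /* behind on WAL-based progress */
        if (written_frac < time_frac / target)
            return false;       /* behind on time-based progress */
        return true;
    }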


My conclusion is that there is no simple static fix to this issue, such as 
the one proposed in the submitted patch. The problem needs thinking and maths.


--
Fabien.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] [PATCH] Improve spinlock inline assembly for x86.

2016-01-17 Thread Andreas Seltenreich
Hi,

I'm currently experimenting with just-in-time compilation using libfirm.
While discussing issues with its developers, it was pointed out to me
that our spinlock inline assembly is less than optimal.  Attached is a
patch that addresses this.

,
| Remove the LOCK prefix from the XCHG instruction.  Locking is implicit
| with XCHG and the prefix wastes a byte.  Also remove the "cc" register
| from the clobber list as the XCHG instruction does not modify any flags.
| 
| Reported by Christoph Mallon.
`

regards,
Andreas

From c836b4f3e0b60d070481d4061e6fe0ffbe488495 Mon Sep 17 00:00:00 2001
From: Andreas Seltenreich 
Date: Sun, 17 Jan 2016 11:51:53 +0100
Subject: [PATCH] Improve spinlock inline assembly for x86.

Remove the LOCK prefix from the XCHG instruction.  Locking is implicit
with XCHG and the prefix wastes a byte.  Also remove the "cc" register
from the clobber list as the xchg instruction does not modify any
flags.

Reported by Christoph Mallon.
---
 src/include/storage/s_lock.h | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/src/include/storage/s_lock.h b/src/include/storage/s_lock.h
index 8b240cd..933bb76 100644
--- a/src/include/storage/s_lock.h
+++ b/src/include/storage/s_lock.h
@@ -158,7 +158,6 @@ tas(volatile slock_t *lock)
 	__asm__ __volatile__(
 		"	cmpb	$0,%1	\n"
 		"	jne		1f		\n"
-		"	lock			\n"
 		"	xchgb	%0,%1	\n"
 		"1: \n"
 :		"+q"(_res), "+m"(*lock)
@@ -226,11 +225,10 @@ tas(volatile slock_t *lock)
 	register slock_t _res = 1;
 
 	__asm__ __volatile__(
-		"	lock			\n"
 		"	xchgb	%0,%1	\n"
 :		"+q"(_res), "+m"(*lock)
 :		/* no inputs */
-:		"memory", "cc");
+:		"memory");
 	return (int) _res;
 }
 
-- 
2.1.4


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Freeze avoidance of very large table.

2016-01-17 Thread Masahiko Sawada
On Wed, Jan 13, 2016 at 12:16 AM, Masahiko Sawada  wrote:
> On Mon, Dec 28, 2015 at 6:38 PM, Masahiko Sawada  
> wrote:
>> On Mon, Dec 21, 2015 at 11:54 PM, Robert Haas  wrote:
>>> On Mon, Dec 21, 2015 at 3:27 AM, Kyotaro HORIGUCHI
>>>  wrote:
 Hello,

 At Fri, 18 Dec 2015 12:09:43 -0500, Robert Haas  
 wrote in 
 
> On Thu, Dec 17, 2015 at 1:17 AM, Michael Paquier
>  wrote:
> > I am not really getting the meaning of this sentence. Shouldn't this
> > be reworded something like:
> > "Freezing occurs on the whole table once all pages of this relation 
> > require it."
>
> That statement isn't remotely true, and I don't think this patch
> changes that.  Freezing occurs on the whole table once relfrozenxid is
> old enough that we think there might be at least one page in the table
> that requires it.

 I doubt I can explain this accurately, but I took the original
 phrase as that if and only if all pages of the table are marked
 as "requires freezing" by accident, all pages are frozen. It's
 quite obvious but it is what I think "happen to require freezing"
 means. Does this make sense?

 The phrase might not be necessary if this is correct.
>>>
>>> Maybe you are trying to say something like "only those pages which
>>> require freezing are frozen?".
>>>
>>
>> I was thinking the same as what Horiguchi-san said.
>> That is, even if relfrozenxid is old enough, freezing the whole
>> table is not required if its pages are marked as "not requiring
>> freezing".
>> In other words, only those pages which are marked as "not frozen" are frozen.
>>
>
> The recent changes to HEAD conflict with the freeze map patch, so I've
> updated and attached the latest freeze map patch.
> The other patch, which enhances the debug log messages of the
> visibility map, is attached to the previous mail.
>
> Please review it.
>

Attached updated version patch.
Please review it.

Regards,

--
Masahiko Sawada


000_add_frozen_bit_into_visibilitymap_v33.patch
Description: binary/octet-stream


001_enhance_visibilitymap_debug_messages_v1.patch
Description: binary/octet-stream

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Log operating system user connecting via unix socket

2016-01-17 Thread José Arthur Benetasso Villanova
Greetings, gentlemen.

Here in my work, we have about 100 PostgreSQL machines and about 20 users
with superuser privileges.

This group of 20 people changes constantly, so it's cumbersome to create a
role for each. Instead, we map all of them in pg_ident.conf.

The problem is: with the current postgres log, I just know that a postgres
user connected, but I don't know which one it was when more than one user
is logged in on the server.

This simple log line can provide the correlation needed for an audit.

Feel free to comment and criticize.


-- 
José Arthur Benetasso Villanova
diff --git a/src/backend/libpq/auth.c b/src/backend/libpq/auth.c
index 57c2f48..db111e0 100644
--- a/src/backend/libpq/auth.c
+++ b/src/backend/libpq/auth.c
@@ -1610,6 +1610,9 @@ auth_peer(hbaPort *port)
 
 	strlcpy(ident_user, pw->pw_name, IDENT_USERNAME_MAX + 1);
 
+	ereport(LOG,
+			(errmsg("Received a unix socket connection from %s", ident_user)));
+
 	return check_usermap(port->hba->usermap, port->user_name, ident_user, false);
 }
 #endif   /* HAVE_UNIX_SOCKETS */

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Support for N synchronous standby servers - take 2

2016-01-17 Thread Masahiko Sawada
On Wed, Jan 13, 2016 at 1:54 AM, Alvaro Herrera
 wrote:
> Michael Paquier wrote:
>> On Fri, Jan 8, 2016 at 1:53 PM, Kyotaro HORIGUCHI
>>  wrote:
>> > Hello,
>> >
>> > At Mon, 4 Jan 2016 15:29:34 +0900, Michael Paquier 
>> >  wrote in 
>> > 
>> >>
>> >> Something that I find rather scary with this patch: could it be
>> >> possible to get actual regression tests now that there is more
>> >> machinery with PostgresNode.pm? As syncrep code paths get more and
>> >> more complex, so are debugging and maintenance.
>> >
>> > A test of the whole replication system will very likely be
>> > too complex and hard to stabilize, and would be
>> > disproportionately large compared to other tests.
>>
>> I don't buy that much. Mind you, there is in this commit fest a patch
>> introducing a basic regression test suite for recovery using the new
>> infrastructure that has been committed last month. You may want to
>> look at it.
>
> Kyotaro, please have a look at this patch:
> https://commitfest.postgresql.org/8/438/
> which is the recovery test framework Michael is talking about.  Is it
> possible to use that framework to write tests for this feature?  If so,
> then my preferred course of action would be to commit that patch and
> then introduce in this patch some additional tests for the N-sync-standby
> feature.  Can you please have a look at the test framework patch and
> provide your feedback on how usable it is for this?
>

I had a look at that patch.
I'm planning to have at least the following tests for multiple synchronous
replication.

* Confirm the value of pg_stat_replication.sync_state (sync, async or potential)
* Confirm that the data is synchronously replicated to multiple
standbys in the following cases:
  * case 1 : The standby which is not listed in s_s_names is down
  * case 2 : The standby which is listed in s_s_names but is a potential
standby is down
  * case 3 : The standby which is considered a sync standby is down
* Standby promotion

In order to confirm that the commit isn't done in case #3 forever
unless a new sync standby comes up, I think we need a framework that
cancels an executing query.
That is, what I'm planning is,

1. Set up the master server (s_s_names = '2, standby1, standby2')
2. Set up two standby servers
3. Standby1 goes down
4. Create some contents on the master (but the transaction is not committed)
5. Cancel the query from #4. (Also confirm that the flush location of only
standby2 makes progress)

Regards,

--
Masahiko Sawada


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: Rework access method interface

2016-01-17 Thread Tom Lane
Amit Kapila  writes:
> Shouldn't we try to move amhandler function as well along with
> amvalidate?  I think moving each am's handler and validate into
> am specific new file can make this arrangement closer to what
> we have for PL's (ex. we have plpgsql_validator and plpgsql_call_
> handler in pl_handler.c and similar handler and validator functions
> for other languages in their corresponding modules).

I feel no great need to move the amhandler functions, and if we did,
I would not want to put them into the same files as the amvalidate
functions.  As I said before, the latter are appendages to the AMs
that really don't have anything to do with the core index access code.
They require very different sets of #include files, for instance.

So I see the AMs as containing three separate subsets of code:
core index access/maintenance, amcostestimate, and amvalidate.
The second and third really need to be in separate files because of
#include footprint considerations, but the amhandler function can
perfectly well go in with the first group.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Log operating system user connecting via unix socket

2016-01-17 Thread Stephen Frost
José,

* José Arthur Benetasso Villanova (jose.art...@gmail.com) wrote:
> Here in my work, we have about 100 PostgreSQL machines and about 20 users
> with superuser privileges.

Sounds pretty common.  What kind of superuser rights are they using?
What is the minimum set of rights that are required for these users
(which may break out into different groups, naturally)?  We're looking
at ways to provide access for certain operations to non-superusers, to
reduce the number of superuser accounts required.

> This group of 20 people changes constantly, so it's cumbersome to create a
> role for each. Instead, we map all of them in pg_ident.conf.

Do these 20 individuals have 'regular' accounts also?  Have you
considered granting them a superuser role which they could 'set role'
to when they need to perform a superuser operation?

> The problem is: with the current postgres log, I just know that a postgres
> user connected, but I don't know which one it was when more than one user
> is logged in on the server.

Understood, that's unfortunate.

> This simple log line can provide the correlation needed for an audit.
> 
> Feel free to comment and criticize.

What I think we really want here is logging of the general 'system
user' for all auth methods instead of only for the 'peer' method.
Knowing who connected via Kerberos is quite valuable also, for example.

My thinking would be to add 'system_user' to the Peer struct and then
log the system user in PerformAuthentication.  Another thought might be
to replace peer_cn with 'peer_user' (it's the same thing for client cert
SSL connections, after all..) and then log it and also make it available
in pg_stat_activity.

These are a bit off-the-cuff comments, but hopefully make sense and
provide the right direction to be looking in.  The other thing to
consider is how this information is reflected in the CSV log and/or
log_line_prefix..

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] Log operating system user connecting via unix socket

2016-01-17 Thread Tom Lane
Stephen Frost  writes:
> What I think we really want here is logging of the general 'system
> user' for all auth methods instead of only for the 'peer' method.

Well, we don't really know that except in a small subset of auth
methods.  I agree that when we do know it, it's useful info to log.

My big beef with the proposed patch is that the log message is emitted
unconditionally.  There are lots and lots of users who feel that during
normal operation, *zero* log messages should get emitted.  Those villagers
would be on our doorsteps with pitchforks if we shipped this patch as-is.

I would propose that this information should be emitted only when
log_connections is enabled, and indeed that it should be part of the
log_connections message not a separate message.  So this leads to
thinking that somehow, the code for individual auth methods should
be able to return an "additional info" field for inclusion in
log_connections.  We already have such a concept for auth failures,
cf commit 5e0b5dcab.
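
For illustration, a rough sketch of what that could look like in
PerformAuthentication (the port->system_user field is hypothetical,
invented for this example; the message format follows the existing
log_connections output):

    if (Log_connections)
    {
        if (port->system_user != NULL)  /* hypothetical field */
            ereport(LOG,
                    (errmsg("connection authorized: user=%s database=%s system_user=%s",
                            port->user_name, port->database_name,
                            port->system_user)));
        else
            ereport(LOG,
                    (errmsg("connection authorized: user=%s database=%s",
                            port->user_name, port->database_name)));
    }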

> ... and also make it available in pg_stat_activity.

That's moving the goalposts quite a bit, and I'm not sure it's necessary
or even desirable.  Let's just get this added to log_connections output,
and then see if there's field demand for more.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Log operating system user connecting via unix socket

2016-01-17 Thread Stephen Frost
Tom,

* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Stephen Frost  writes:
> > What I think we really want here is logging of the general 'system
> > user' for all auth methods instead of only for the 'peer' method.
> 
> Well, we don't really know that except in a small subset of auth
> methods.  I agree that when we do know it, it's useful info to log.

Right.

> My big beef with the proposed patch is that the log message is emitted
> unconditionally.  There are lots and lots of users who feel that during
> normal operation, *zero* log messages should get emitted.  Those villagers
> would be on our doorsteps with pitchforks if we shipped this patch as-is.

Agreed.

> I would propose that this information should be emitted only when
> log_connections is enabled, and indeed that it should be part of the
> log_connections message not a separate message.  So this leads to
> thinking that somehow, the code for individual auth methods should
> be able to return an "additional info" field for inclusion in
> log_connections.  We already have such a concept for auth failures,
> cf commit 5e0b5dcab.

Apologies if it wasn't clear, but that's exactly what I was suggesting
by saying to add it to PerformAuthentication, which is where we emit
the connection info when log_connections is enabled. 

> > ... and also make it available in pg_stat_activity.
> 
> That's moving the goalposts quite a bit, and I'm not sure it's necessary
> or even desirable.  Let's just get this added to log_connections output,
> and then see if there's field demand for more.

This was in the context of peer_cn, which is just a specific "system user"
value and which we're already showing in pg_stat_* info tables.  I'd
love to have the Kerberos principal available, but I don't think it'd
make sense to have a 'pg_stat_kerberos' just for that.

I agree that it's moving the goalposts for this patch and could be an
independent patch, but I don't see it as any different, from a
desirability and requirements perspective, than what we're doing for SSL
connections.

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] Additional role attributes && superuser review

2016-01-17 Thread Bruce Momjian
On Mon, Jan  4, 2016 at 12:55:16PM -0500, Stephen Frost wrote:
> I'd like to be able to include, in both of those, a simple set of
> instructions for granting the necessary rights to the user who is
> running those processes.  A set of rights which an administrator can go
> look up and easily read and understand the result of those grants.  For
> example:
> 
...
> pgbackrest:
> 
>   To run pgbackrest as a non-superuser and not the 'postgres' system
>   user, grant the pg_backup role to the backrest user and ensure the
>   backrest system user has read access to the database files (eg: by
>   having the system user be a member of the 'postgres' group):
--

Just to clarify, the 'postgres' OS user group cannot read the data
directory, e.g.

drwx------ 19 postgres staff 4096 Jan 17 12:19 data/
    ^^^group

I assume we don't want to change that.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Additional role attributes && superuser review

2016-01-17 Thread Stephen Frost
* Bruce Momjian (br...@momjian.us) wrote:
> On Mon, Jan  4, 2016 at 12:55:16PM -0500, Stephen Frost wrote:
> > I'd like to be able to include, in both of those, a simple set of
> > instructions for granting the necessary rights to the user who is
> > running those processes.  A set of rights which an administrator can go
> > look up and easily read and understand the result of those grants.  For
> > example:
> > 
> ...
> > pgbackrest:
> > 
> >   To run pgbackrest as a non-superuser and not the 'postgres' system
> >   user, grant the pg_backup role to the backrest user and ensure the
> >   backrest system user has read access to the database files (eg: by
> >   having the system user be a member of the 'postgres' group):
> --
> 
> Just to clarify, the 'postgres' OS user group cannot read the data
> directory, e.g.
> 
>   drwx------ 19 postgres staff 4096 Jan 17 12:19 data/
>       ^^^group
> 
> I assume we don't want to change that.

This is going to be distribution dependent, unfortunately.  On
Debian-based distributions, the group is 'postgres' and it'd be
perfectly reasonable to allow that group to read the data directory.

I don't recall offhand if that means we'd have to make changes to allow
that, but, for my 2c, I don't see why we wouldn't allow it to be an
option.

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] Additional role attributes && superuser review

2016-01-17 Thread Bruce Momjian
On Sun, Jan 17, 2016 at 01:49:19PM -0500, Stephen Frost wrote:
> * Bruce Momjian (br...@momjian.us) wrote:
> > > pgbackrest:
> > > 
> > >   To run pgbackrest as a non-superuser and not the 'postgres' system
> > >   user, grant the pg_backup role to the backrest user and ensure the
> > >   backrest system user has read access to the database files (eg: by
> > >   having the system user be a member of the 'postgres' group):
> > --
> > 
> > Just to clarify, the 'postgres' OS user group cannot read the data
> > directory, e.g.
> > 
> > drwx------ 19 postgres staff 4096 Jan 17 12:19 data/
> >     ^^^group
> > 
> > I assume we don't want to change that.
> 
> This is going to be distribution dependent, unfortunately.  On
> Debian-based distributions, the group is 'postgres' and it'd be
> perfectly reasonable to allow that group to read the data directory.

Well, while the group name would be OS-dependent, the lack of any group
permissions is not OS-dependent and is forced by initdb:

umask(S_IRWXG | S_IRWXO);

create_data_directory();

> I don't recall offhand if that means we'd have to make changes to allow
> that, but, for my 2c, I don't see why we wouldn't allow it to be an
> option.

OK, that would be an initdb change then.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Additional role attributes && superuser review

2016-01-17 Thread Stephen Frost
* Bruce Momjian (br...@momjian.us) wrote:
> On Sun, Jan 17, 2016 at 01:49:19PM -0500, Stephen Frost wrote:
> > * Bruce Momjian (br...@momjian.us) wrote:
> > > > pgbackrest:
> > > > 
> > > >   To run pgbackrest as a non-superuser and not the 'postgres' system
> > > >   user, grant the pg_backup role to the backrest user and ensure the
> > > >   backrest system user has read access to the database files (eg: by
> > > >   having the system user be a member of the 'postgres' group):
> > > --
> > > 
> > > Just to clarify, the 'postgres' OS user group cannot read the data
> > > directory, e.g.
> > > 
> > >   drwx------ 19 postgres staff 4096 Jan 17 12:19 data/
> > >       ^^^group
> > > 
> > > I assume we don't want to change that.
> > 
> > This is going to be distribution dependent, unfortunately.  On
> > Debian-based distributions, the group is 'postgres' and it'd be
> > perfectly reasonable to allow that group to read the data directory.
> 
> Well, while the group name would be OS-dependent, the lack of any group
> permissions is not OS-dependent and is forced by initdb:
> 
> umask(S_IRWXG | S_IRWXO);
> 
> create_data_directory();

Right, we also check in the backend on startup for certain permissions.
I don't recall offhand if that's forced to 700 or if we allow 750.

> > I don't recall offhand if that means we'd have to make changes to allow
> > that, but, for my 2c, I don't see why we wouldn't allow it to be an
> > option.
> 
> OK, that would be an initdb change then.

It would need to be optional, so distributions and users could choose
which makes sense for their systems.

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] Proposal: speeding up GIN build with parallel workers

2016-01-17 Thread Constantin S. Pan
On Fri, 15 Jan 2016 15:29:51 -0800
Peter Geoghegan  wrote:

> On Fri, Jan 15, 2016 at 2:38 PM, Constantin S. Pan 
> wrote:
> Even without parallelism, wouldn't it be better if GIN indexes were
> built using tuplesort? I know way way less about the gin am than the
> nbtree am, but I imagine that a prominent cost for GIN index builds is
> constructing the main B-Tree (the one that's constructed over key
> values) itself. Couldn't tuplesort.c be adapted to cover this case?
> That would be much faster in general, particularly with the recent
> addition of abbreviated keys, while also leaving a clear path forward
> to performing the build in parallel.

While building GIN we need a quick way to update the posting list of
the same key; this is where the rbtree comes to the rescue. Using tuplesort
would require a completely different approach to building the index: dump
(key, itempointer) pairs into a tuplesort heap, then sort the heap and
merge the itempointers for the same key values.
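
As a sketch of that merge step (with invented types and names, not GIN's
actual structures):

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct { uint32_t key; uint32_t tid; } KeyTid;

    /* stand-in for emitting one posting list to the index */
    static void
    emit_posting_list(uint32_t key, const KeyTid *run, size_t len)
    {
        printf("key %u: %zu tids\n", (unsigned) key, len);
    }

    /* fold runs of equal keys in a key-sorted array into posting lists */
    static void
    merge_sorted_pairs(const KeyTid *pairs, size_t n)
    {
        size_t i = 0;

        while (i < n)
        {
            size_t start = i;

            while (i < n && pairs[i].key == pairs[start].key)
                i++;
            emit_posting_list(pairs[start].key, &pairs[start], i - start);
        }
    }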

Both rbtree and sorting require N log N operations, and abbreviated keys
will not help here, because GIN is used for the case where there are
lots of repeated keys. The benefit of tuplesort is that it would be
better for huge data that does not fit into memory, but on the other
hand it would need twice as much free disk space for sorting as the
data itself takes. Are we ready for such a cost?

I think we have to experiment with both approaches, and see how it goes.

What are your thoughts?

Regards,

Constantin S. Pan
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Additional role attributes && superuser review

2016-01-17 Thread Bruce Momjian
On Wed, Jan  6, 2016 at 12:29:14PM -0500, Robert Haas wrote:
> The point is that with the GRANT EXECUTE ON FUNCTION proposal, authors
> of monitoring tools enjoy various really noteworthy advantages.  They
> can have monitoring roles which have *exactly* the privileges that
> their tool needs, not whatever set of permissions (larger or smaller)
> the core project has decided the pg_monitor role should have.  They can
> have optional features requiring extra permissions and those extra
> permissions can be granted in precisely those shops where those extra
> features are in use.  They can deploy a new version of their
> monitoring tool that requires an extra privilege on an existing
> PostgreSQL release without requiring any core modifications, which
> shaves years of time off the deployment schedule and avoids
> contentious arguments with the lovable folks who populate this mailing
> list.  That sounds *terrific* to me compared to the alternative you
> are proposing.

I assume backup tools would either document the functions they want
access to via SQL commands, or supply a script.  I assume they would
create a non-login role (group) with the desired permissions, and then
have users inherit from that.  They would also need to be able to allow
upgrades where they would (conditionally?) add the role and then
add/revoke permissions as needed, e.g. they might not need all
permissions they needed in a previous release, or they might need new
ones.

That all seems very straight-forward to me.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ToDo list update for BRIN indexes

2016-01-17 Thread Bruce Momjian
On Mon, Dec 21, 2015 at 07:54:43AM -0500, Robert Haas wrote:
> On Thu, Jul 9, 2015 at 4:49 PM, Jeff Janes  wrote:
> > Is there anything in the below section which has been implemented or
> > rendered irrelevant by BRIN indexes?
> >
> > https://wiki.postgresql.org/wiki/Todo#Indexes
> >
> > "Consider smaller indexes that record a range of values per heap page,
> > rather than having one index entry for every heap row"
> 
> [ catching up on old threads ]
> 
> BRIN is exactly this, isn't it?  Well, moreso: it's a range of values
> for a range of heap pages.

FYI, that TODO entry was removed since July 9, probably as part of my
9.5 cleanup.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Truncating/vacuuming relations on full tablespaces

2016-01-17 Thread Jeff Janes
On Fri, Jan 15, 2016 at 11:16 AM, Tom Lane  wrote:

> I believe that there would be ramifications for some of the index AMs
> too.  For example, if left to its own devices GIN would consider VACUUM
> to include flushing its pending-list pages, which more than likely will
> increase not reduce the total index size.  I'm not sure that it has
> any ability to omit that step; can it remove dead entries directly off
> the pending pages, or only from the main index?

It cannot vacuum the pending list directly.  That is why it is a bug
for the vacuum to short-cut out of the pending list cleanup step when
it finds someone else already cleaning it.  For correctness it has to
either clean it itself, or wait until the other process is done (or at
least, done up to the point where the tail was at the time the vacuum
started).

Cheers,

Jeff


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal: Trigonometric functions in degrees

2016-01-17 Thread Dean Rasheed
On 30 November 2015 at 14:11, Tom Lane  wrote:
> FWIW, I think that probably the best course of action is to go ahead
> and install POSIX-compliant error checking in the existing functions
> too.  POSIX:2008 is quite clear about this:
>
> : An application wishing to check for error situations should set errno to
> : zero and call feclearexcept(FE_ALL_EXCEPT) before calling these
> : functions. On return, if errno is non-zero or fetestexcept(FE_INVALID |
> : FE_DIVBYZERO | FE_OVERFLOW | FE_UNDERFLOW) is non-zero, an error has
> : occurred.
>
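
A minimal sketch of that recommended protocol (assuming C99 <fenv.h>
support; the wrapper name is invented for illustration):

    #include <errno.h>
    #include <fenv.h>
    #include <math.h>

    /* Return -1 if asin(x) raised a POSIX math error, else 0. */
    static int
    checked_asin(double x, double *result)
    {
        errno = 0;
        feclearexcept(FE_ALL_EXCEPT);
        *result = asin(x);
        if (errno != 0 ||
            fetestexcept(FE_INVALID | FE_DIVBYZERO |
                         FE_OVERFLOW | FE_UNDERFLOW))
            return -1;
        return 0;
    }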

Looking at this again, I think it makes sense to make the Inf/NaN
handling of these functions platform-independent. POSIX.1-2008 is
pretty explicit about how they ought to behave, which is different
from the current behaviour on either Linux or Windows:

sin(Inf):
  POSIX: domain error
  Linux: NaN
  Windows: ERROR:  input is out of range

asin(Inf):
  POSIX: domain error
  Linux: ERROR:  input is out of range
  Windows: ERROR:  input is out of range

sin(NaN):
  POSIX: NaN
  Linux: NaN
  Windows: ERROR:  input is out of range

asin(NaN):
  POSIX: NaN
  Linux: NaN
  Windows: ERROR:  input is out of range

I tried using feclearexcept + fetestexcept as recommended by
POSIX.1-2008, and that does indeed make the above examples compliant
on my linux box. Unfortunately it introduces new errors -- asin starts
generating FE_UNDERFLOW errors for numbers that are really not that
small, such as asin(1e-9), although the function still returns the
correct answer. A bigger problem though is that these are C99
functions and so won't necessarily be available on all supported
platforms.

So I think that a simpler answer is to just to add explicit tests for
these inputs and avoid relying on errno, at least for the inverse
functions, which have pretty clear constraints on their allowed
inputs. For the forward functions, I'm not sure if there are some
platforms on which large but finite inputs might generate errors, so I
think it's safest to keep the existing errno checks as well just in
case.

Attached are patches for this and the new functions in degrees, now
with POSIX compatible Inf/NaN handling.

Regards,
Dean
diff --git a/src/backend/utils/adt/float.c b/src/backend/utils/adt/float.c
new file mode 100644
index 8f34209..4ce0129
*** a/src/backend/utils/adt/float.c
--- b/src/backend/utils/adt/float.c
*** dacos(PG_FUNCTION_ARGS)
*** 1524,1541 
  	float8		arg1 = PG_GETARG_FLOAT8(0);
  	float8		result;
  
  	/*
! 	 * We use errno here because the trigonometric functions are cyclic and
! 	 * hard to check for underflow.
  	 */
! 	errno = 0;
! 	result = acos(arg1);
! 	if (errno != 0)
  		ereport(ERROR,
  (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
   errmsg("input is out of range")));
  
! 	CHECKFLOATVAL(result, isinf(arg1), true);
  	PG_RETURN_FLOAT8(result);
  }
  
--- 1524,1546 
  	float8		arg1 = PG_GETARG_FLOAT8(0);
  	float8		result;
  
+ 	/* Per the POSIX spec, return NaN if the input is NaN */
+ 	if (isnan(arg1))
+ 		PG_RETURN_FLOAT8(get_float8_nan());
+ 
  	/*
! 	 * The principal branch of the inverse cosine function maps values in the
! 	 * range [-1, 1] to values in the range [0, Pi], so we should reject any
! 	 * inputs outside that range and the result will always be finite.
  	 */
! 	if (arg1 < -1.0 || arg1 > 1.0)
  		ereport(ERROR,
  (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
   errmsg("input is out of range")));
  
! 	result = acos(arg1);
! 
! 	CHECKFLOATVAL(result, false, true);
  	PG_RETURN_FLOAT8(result);
  }
  
*** dasin(PG_FUNCTION_ARGS)
*** 1549,1562 
  	float8		arg1 = PG_GETARG_FLOAT8(0);
  	float8		result;
  
! 	errno = 0;
! 	result = asin(arg1);
! 	if (errno != 0)
  		ereport(ERROR,
  (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
   errmsg("input is out of range")));
  
! 	CHECKFLOATVAL(result, isinf(arg1), true);
  	PG_RETURN_FLOAT8(result);
  }
  
--- 1554,1576 
  	float8		arg1 = PG_GETARG_FLOAT8(0);
  	float8		result;
  
! 	/* Per the POSIX spec, return NaN if the input is NaN */
! 	if (isnan(arg1))
! 		PG_RETURN_FLOAT8(get_float8_nan());
! 
! 	/*
! 	 * The principal branch of the inverse sine function maps values in the
! 	 * range [-1, 1] to values in the range [-Pi/2, Pi/2], so we should reject
! 	 * any inputs outside that range and the result will always be finite.
! 	 */
! 	if (arg1 < -1.0 || arg1 > 1.0)
  		ereport(ERROR,
  (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
   errmsg("input is out of range")));
  
! 	result = asin(arg1);
! 
! 	CHECKFLOATVAL(result, false, true);
  	PG_RETURN_FLOAT8(result);
  }
  
*** datan(PG_FUNCTION_ARGS)
*** 1570,1583 
  	float8		arg1 = PG_GETARG_FLOAT8(0);
  	float8		result;
  
! 	errno = 0;
  	result = atan(arg1);
- 	if (errno != 0)
- 		ereport(ERROR,
- (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
-  errmsg("input is out of range")));
  
! 	CHECKFLOATVAL(result, isinf(arg1), true);
  	PG_RETURN_FLOAT8(result);
  }
  
--- 15

Re: [HACKERS] Proposal: speeding up GIN build with parallel workers

2016-01-17 Thread Jeff Janes
On Fri, Jan 15, 2016 at 3:29 PM, Peter Geoghegan  wrote:
> On Fri, Jan 15, 2016 at 2:38 PM, Constantin S. Pan  wrote:
>> I have a draft implementation which divides the whole process between
>> N parallel workers, see the patch attached. Instead of a full scan of
>> the relation, I give each worker a range of blocks to read.
>
> I am currently working on a patch that allows B-Tree index builds to
> be performed in parallel. I think I'm a week or two away from posting
> it.
>
> Even without parallelism, wouldn't it be better if GIN indexes were
> built using tuplesort? I know way way less about the gin am than the
> nbtree am, but I imagine that a prominent cost for GIN index builds is
> constructing the main B-Tree (the one that's constructed over key
> values) itself. Couldn't tuplesort.c be adapted to cover this case?
> That would be much faster in general, particularly with the recent
> addition of abbreviated keys, while also leaving a clear path forward
> to performing the build in parallel.

I think it would take a lot of changes to tuple sort to make this an
almost-always win.

In the general case each GIN key occurs in many tuples, and the
in-memory rbtree is good at compressing the tid list for each key to
maximize the amount of data that can be in memory (although perhaps it
could be even better, as it doesn't use varbyte encoding on the tid
list the way the posting lists on disk do--it could do so in the bulk
build case, where it receives the tid in order, but not feasibly in
the pending-list clean-up case, where it doesn't get them in order)
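
(For reference, a sketch of the varbyte delta encoding idea, not GIN's
actual on-disk code: each tid is stored as its difference from the
previous one, 7 bits per byte, with the high bit marking continuation.)

    #include <stddef.h>
    #include <stdint.h>

    /* Encode ascending tids as varbyte deltas; returns bytes written. */
    static size_t
    varbyte_encode_deltas(const uint64_t *tids, size_t n, uint8_t *out)
    {
        size_t   pos = 0;
        uint64_t prev = 0;

        for (size_t i = 0; i < n; i++)
        {
            uint64_t delta = tids[i] - prev;

            prev = tids[i];
            do
            {
                uint8_t b = delta & 0x7F;

                delta >>= 7;
                if (delta != 0)
                    b |= 0x80;      /* continuation bit */
                out[pos++] = b;
            } while (delta != 0);
        }
        return pos;
    }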

When I was testing building an index on a column of text identifiers
where there were a couple million identifiers occurring a few dozen
times each, building it as a gin index (using btree_gin so that plain
text columns could be indexed) was quite a bit faster than building it
as a regular btree index using tuple sort.  I didn't really
investigate this difference, but I assume it was due to the better
memory usage leading to less IO.

(Disclaimer: this was on those identifiers which had little difference
in the first 8 bytes, so they didn't really benefit from the
abbreviated keys.)

So in addition to shimming (key, tid) pairs to look like something
that can be fed to tuple sort, I think we would also need to make
tuple sort do on-the fly aggregation when it finds adjacent equal sort
columns, so it can make (key,tid[]) structures rather than (key,tid).
Without that, I think using tuple sort would be a loss at least as
often as it would be a win.

But I do think this with aggregation would be worth it despite the
amount of work involved.  In addition to automatically benefiting from
parallel sorts and any other future sort improvements, I think it
would also generate better indexes.  I have a theory that a problem
with very large gin indexes is that repeatedly building
maintenance_work_mem worth of rbtree entries and then merging them to
disk yields highly fragmented btrees, in which logically adjacent
key-space does not occupy physically nearby leaf pages, which can then
cause problems either with access that follows a pattern (like range
scans, except gin indexes can't really do those anyway) or with further
bulk operations.

So I agree that a better long-term approach would probably be to
make gin index builds (at least the bulk ones, I don't know about the
pending-list-cleanup) use a tuple sort.  But it looks like
Constantin has already done most of the work on his current proposal,
and no one has volunteered to do the work of converting it to tuple
sort instead.

Cheers,

Jeff


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Cannot find a working 64-bit integer type

2016-01-17 Thread Igal @ Lucee.org

Hi,

I'm trying to build Postgres with GCC 5.3.0 on Windows (a-la MinGW-64) 
and when I ran "configure" I received the following error:

"configure: error: Cannot find a working 64-bit integer type."

The config.log file can be seen at
https://gist.github.com/TwentyOneSolutions/8c225f66b9c0d4434871#file-config-201601171200-log-L21971

A Google search finds this thread from 4 years ago, which IIUC explains 
that the issue is with newer GCC versions.


Any ideas on how I can overcome this issue?

Thanks!

--

Igal Sapir
Lucee Core Developer
Lucee.org 



[HACKERS] Do we need SQL-level access to amoptions functions?

2016-01-17 Thread Tom Lane
There is some text in indexam.sgml currently that says

   [ index AMs' amoptions functions ] should be correctly
   declared as taking text[] and bool and returning
   bytea.  This provision allows client code to execute
   amoptions to test validity of options settings.

In the pending AM interface rewrite, amoptions has been changed to a C API
which means it's not accessible from SQL.  So if there's anything out
there that is actually relying on being able to call, eg, ginoptions()
from SQL, it's gonna be broken.

Does anyone know of client code that's actually doing this?

If we do want to continue supporting this type of thing, we could invent
a new function, say validate_amoptions(amname text, options text[])
that would look up and call the correct amoptions function.  This would be
much easier to use from SQL anyway; right now you could only do it, for an
arbitrary AM, with a series of two queries with the function name obtained
from the first query being inserted into the second one.  Ugh.  (The lack
of complaints about this suggests nobody is doing it.)

I'm not planning to hold off committing the interface rewrite because of
this, but if anyone wants to preserve this functionality, they should
think about writing such a function in time for 9.6.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Proposal: speeding up GIN build with parallel workers

2016-01-17 Thread Peter Geoghegan
On Sun, Jan 17, 2016 at 12:03 PM, Jeff Janes  wrote:
> I think it would take a lot of changes to tuple sort to make this an
> almost-always win.
>
> In the general case each GIN key occurs in many tuples, and the
> in-memory rbtree is good at compressing the tid list for each key to
> maximize the amount of data that can be in memory (although perhaps it
> could be even better, as it doesn't use varbyte encoding on the tid
> list the way the posting lists on disk do--it could do so in the bulk
> build case, where it receives the tid in order, but not feasibly in
> the pending-list clean-up case, where it doesn't get them in order)

Interesting analysis.

> When I was testing building an index on a column of text identifiers
> where there were a couple million identifiers occurring a few dozen
> times each, building it as a gin index (using btree_gin so that plain
> text columns could be indexed) was quite a bit faster than building it
> as a regular btree index using tuple sort.  I didn't really
> investigate this difference, but I assume it was due to the better
> memory usage leading to less IO.
>
> (Disclaimer: this was on those identifiers which had little difference
> in the first 8 bytes, so they didn't really benefit from the
> abbreviated keys.)

Sorry for going on about it, but I think you'd be surprised how
effective abbreviated keys are even in seemingly marginal cases.

> So I agree that a better long-term approach would probably be to
> make gin index builds (at least the bulk ones, I don't know about the
> pending-list-cleanup) use a tuple sort.  But it looks like
> Constantin has already done most of the work on his current proposal,
> and no one has volunteered to do the work of converting it to tuple
> sort instead.

I'm not going to stand in the way of incremental progress,
particularly given that this looks to be a simple patch that doesn't
commit us to anything. I suspect that we should consolidate GIN and
nbtree at some point, though. I think that there are some useful
general consequences for performance that come from consolidation. For
example, my ongoing work on external sorting makes it use much of the
same infrastructure as internal sorting. Now, external sorts
automatically benefit from optimizations to internal sorting, like the
"onlyKey" quicksort optimization.

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: Covering + unique indexes.

2016-01-17 Thread David Rowley
On 14 January 2016 at 08:24, David Rowley 
wrote:

> I will try to review the omit_opclass_4.0.patch soon.
>

Hi, as promised, here's my review of omit_opclass_4.0.patch.

The following comment needs to be updated:

 * indexkeys[], indexcollations[], opfamily[], and opcintype[]
 * each have ncolumns entries.

I think you'll need to do quite a bit of refactoring in this comment to
explain how it all works now, and which arrays we expect to be which length.

The omit_opclass_4.0 patch should remove the following comment, which
you added in the other patch:

/* TODO
* All these arrays below still have length = ncolumns.
* Fix, when optional opclass functionality will be added.
*
* Generally, any column could be returned by IndexOnlyScan.
* Even if it doesn't have opclass for that type of index.
*
* For example,
* we have an index "create index on tbl(c1) including c2".
* If there's no suitable oplass on c2
* query "select c2 from tbl where c2 < 10" can't use index-only scan
* and query "select c2 from tbl where c1 < 10" can.
* But now it doesn't because of requirement that
* each indexed column must have an opclass.
*/

The following comment should be updated to mention that this is only the
case for key attributes, and that we just take the type from the index for
including attributes. Perhaps the comment is better placed outside of the
if (i < nkeyatts) block too, so it can explain both at once.

/*
* The provided data is not necessarily of the type stored in the
* index; rather it is of the index opclass's input type. So look
* at rd_opcintype not the index tupdesc.
*
* Note: this is a bit shaky for opclasses that have pseudotype
* input types such as ANYARRAY or RECORD.  Currently, the
* typoutput functions associated with the pseudotypes will work
* okay, but we might have to try harder in future.
*/

In BuildIndexInfo() numKeys is a bit confusing. Perhaps that needs to be
renamed to numAtts? Also this makes me think that the name
ii_KeyAttrNumbers is now out-of-date, as it contains the including columns
too by the looks of it. Maybe it just needs to drop the "Key" and become
"ii_AttrNumbers". It would be interesting to hear what others think of
that.

IndexInfo *
BuildIndexInfo(Relation index)
{
IndexInfo  *ii = makeNode(IndexInfo);
Form_pg_index indexStruct = index->rd_index;
int i;
int numKeys;

/* check the number of keys, and copy attr numbers into the IndexInfo */
numKeys = indexStruct->indnatts;
if (numKeys < 1 || numKeys > INDEX_MAX_KEYS)
elog(ERROR, "invalid indnatts %d for index %u",
numKeys, RelationGetRelid(index));
ii->ii_NumIndexAttrs = numKeys;
ii->ii_NumIndexKeyAttrs = indexStruct->indnkeyatts;
Assert(ii->ii_NumIndexKeyAttrs != 0);
Assert(ii->ii_NumIndexKeyAttrs <= ii->ii_NumIndexAttrs);


Here you've pushed a chunk of code over one tab, but you don't have to do
that. Just add:

+ if (i >= indexInfo->ii_NumIndexKeyAttrs)
+ continue;

This'll make the patch a bit smaller. Also, maybe it's time to get rid of
your debug stuff that you've commented out?

for (i = 0; i < numKeys; i++)
ii->ii_KeyAttrNumbers[i] = indexStruct->indkey.values[i];
- if (OidIsValid(keyType) && keyType != to->atttypid)
+ if (i < indexInfo->ii_NumIndexKeyAttrs)
  {
- /* index value and heap value have different types */
- tuple = SearchSysCache1(TYPEOID, ObjectIdGetDatum(keyType));
+ /*
+ * Check the opclass and index AM to see if either provides a keytype


Same for this part:

- /*
- * Identify the exclusion operator, if any.
- */
- if (nextExclOp)
+ if (attn < nkeycols)

Could become:

+ if (attn >= nkeycols)
+ continue;


I'm also wondering if indexkeys is still a good name for the IndexOptInfo
struct member.
Including columns are not really keys, but I feel renaming that might cause
a fair bit of code churn, so I'd be interested to hear what others have to
say.

  info->indexkeys = (int *) palloc(sizeof(int) * ncolumns);
- info->indexcollations = (Oid *) palloc(sizeof(Oid) * ncolumns);
- info->opfamily = (Oid *) palloc(sizeof(Oid) * ncolumns);
- info->opcintype = (Oid *) palloc(sizeof(Oid) * ncolumns);
+ info->indexcollations = (Oid *) palloc(sizeof(Oid) * nkeycolumns);
+ info->opfamily = (Oid *) palloc(sizeof(Oid) * nkeycolumns);
+ info->opcintype = (Oid *) palloc(sizeof(Oid) * nkeycolumns);


In quite a few places you do: int natts, nkeyatts;
but the areas where you've done this don't seem to ever declare multiple
variables per type. Maybe it's best to follow what's there and just write
"int" again on the next line.

If you submit an updated patch I can start looking over the change fairly
soon.

Many thanks

David

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] dblink: add polymorphic functions.

2016-01-17 Thread Joe Conway
On 01/08/2016 07:31 AM, Tom Lane wrote:
> Alvaro Herrera  writes:
>> So, is this going anywhere?
> 
> Oh, sorry, was I on the hook to review that?
> 
> [ quick look... ]  This doesn't seem like it responds to my criticism
> above.  I think what we want is that for every LookupTypeName call site
> that could potentially be invoking this notation, we must actually make
> provision for passing a valid pstate, one containing in particular the
> source text for the nodetree we're parsing.  Without that we will not
> get error messages of the quality we expect (with error pointers).
> 
> Another issue now that I look at it is that parser-detected semantic
> problems in the expression will result in ereport(ERROR), rather than
> returning NULL which is what you'd kind of expect from the API spec for
> LookupTypeName.  That's probably all right considering that many other
> syntactic issues throw errors inside this function; but maybe we'd better
> adjust the API spec.

Ok, back to the drawing board. Thanks for the feedback.

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development



signature.asc
Description: OpenPGP digital signature


Re: [HACKERS] exposing pg_controldata and pg_config as functions

2016-01-17 Thread Joe Conway
On 01/16/2016 06:02 AM, Michael Paquier wrote:
> On Wed, Dec 30, 2015 at 9:08 AM, Joe Conway  wrote:
>> 1) Change NextXID output format from "%u/%u" to "%u:%u"
>>(see recent hackers thread)
> 
> ! printf(_("Latest checkpoint's NextXID:  %u/%u\n"),
>  ControlFile.checkPointCopy.nextXidEpoch,
>  ControlFile.checkPointCopy.nextXid);
>   printf(_("Latest checkpoint's NextOID:  %u\n"),
> --- 646,652 
>  ControlFile.checkPointCopy.ThisTimeLineID);
>   printf(_("Latest checkpoint's full_page_writes: %s\n"),
>  ControlFile.checkPointCopy.fullPageWrites ? _("on") : _("off"));
> ! printf(_("Latest checkpoint's NextXID:  %u:%u\n"),
> This should be definitely a separate patch.

Ok. Notwithstanding Simon's reply, there seems to be consensus that this
is the way to go. Will commit it this way unless some additional
objections surface in the next day or so.

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development



signature.asc
Description: OpenPGP digital signature


Re: [HACKERS] exposing pg_controldata and pg_config as functions

2016-01-17 Thread Joe Conway
On 01/16/2016 06:07 AM, Michael Paquier wrote:
> On Sun, Dec 27, 2015 at 5:39 AM, Joe Conway  wrote:
>> First installment -- pg_config function/view as a separate patch,
>> rebased to current master.
> 
> Documentation would be good to have.

I'm definitely happy to write the docs, but earlier it was not clear
that there was enough support for this patch at all, and I don't want to
waste cycles writing docs for a feature that ultimately does not get
committed. What's the current feel for whether this feature in general
is a good idea or bad?

> ! # don't include subdirectory-path-dependent -I and -L switches
> ! STD_CPPFLAGS := $(filter-out -I$(top_srcdir)/src/include
> -I$(top_builddir)/src/include,$(CPPFLAGS))
> ! STD_LDFLAGS := $(filter-out -L$(top_builddir)/src/port,$(LDFLAGS))
> ! override CPPFLAGS += -DVAL_CONFIGURE="\"$(configure_args)\""
> ! override CPPFLAGS += -DVAL_CC="\"$(CC)\""
> ! override CPPFLAGS += -DVAL_CPPFLAGS="\"$(STD_CPPFLAGS)\""
> ! override CPPFLAGS += -DVAL_CFLAGS="\"$(CFLAGS)\""
> ! override CPPFLAGS += -DVAL_CFLAGS_SL="\"$(CFLAGS_SL)\""
> ! override CPPFLAGS += -DVAL_LDFLAGS="\"$(STD_LDFLAGS)\""
> ! override CPPFLAGS += -DVAL_LDFLAGS_EX="\"$(LDFLAGS_EX)\""
> ! override CPPFLAGS += -DVAL_LDFLAGS_SL="\"$(LDFLAGS_SL)\""
> ! override CPPFLAGS += -DVAL_LIBS="\"$(LIBS)\""
> This duplication from src/bin/pg_config is a bad idea. Couldn't we do
> something in src/common instead that sets up values at compilation
> time in a routine (perhaps set of routines) available for both the
> frontend and backend?

Will take a look at it.

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development



signature.asc
Description: OpenPGP digital signature


Re: [HACKERS] Cannot find a working 64-bit integer type

2016-01-17 Thread Igal @ Lucee.org
UPDATE: when I ran: configure --without-zlib --enable-debug 
CFLAGS="-Wno-cpp"


I did not get an error from configure (though I get an error from "make" 
but that's another issue)


I'm not sure what I'm "losing" by passing the "no-cpp" compiler flag?

also, the thread I mentioned in the previous email can be found at 
http://postgresql.nabble.com/Setting-Werror-in-CFLAGS-td5118384.html


Igal Sapir
Lucee Core Developer
Lucee.org 

On 1/17/2016 12:07 PM, Igal @ Lucee.org wrote:

Hi,

I'm trying to build Postgres with GCC 5.3.0 on Windows (a-la MinGW-64) 
and when I ran "configure" I received the following error:

"configure: error: Cannot find a working 64-bit integer type."

The config.log file can be seen at
https://gist.github.com/TwentyOneSolutions/8c225f66b9c0d4434871#file-config-201601171200-log-L21971

A Google search finds this thread from 4 years ago, which IIUC explains 
that the issue is with newer GCC versions.


Any ideas on how I can overcome this issue?

Thanks!

--

Igal Sapir
Lucee Core Developer
Lucee.org 





[HACKERS] make error - libpqdll.def No such file or directory

2016-01-17 Thread Igal @ Lucee.org

When running make I encounter the following error:

gcc.exe: error: libpqdll.def: No such file or directory
/home/Admin/sources/postgresql-9.5.0/src/Makefile.shlib:393: recipe for 
target 'libpq.dll' failed

make[3]: *** [libpq.dll] Error 1
make[3]: Leaving directory 
'/home/Admin/builds/postgresql-9.5.0/src/interfaces/libpq'

Makefile:17: recipe for target 'all-libpq-recurse' failed
make[2]: *** [all-libpq-recurse] Error 2
make[2]: Leaving directory 
'/home/Admin/builds/postgresql-9.5.0/src/interfaces'

Makefile:34: recipe for target 'all-interfaces-recurse' failed
make[1]: *** [all-interfaces-recurse] Error 2
make[1]: Leaving directory '/home/Admin/builds/postgresql-9.5.0/src'
GNUmakefile:11: recipe for target 'all-src-recurse' failed
make: *** [all-src-recurse] Error 2

But /home/Admin/builds/postgresql-9.5.0/src/interfaces/libpq does 
contain the file libpqdll.def


Any ideas?

--

Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [HACKERS] Additional role attributes && superuser review

2016-01-17 Thread Stephen Frost
* Bruce Momjian (br...@momjian.us) wrote:
> On Wed, Jan  6, 2016 at 12:29:14PM -0500, Robert Haas wrote:
> > The point is that with the GRANT EXECUTE ON FUNCTION proposal, authors
> > of monitoring tools enjoy various really noteworthy advantages.  They
> > can have monitoring roles which have *exactly* the privileges that
> > their tool needs, not whatever set of permissions (larger or smaller)
> > the core project has decided the pg_monitor role should have.  They can
> > have optional features requiring extra permissions and those extra
> > permissions can be granted in precisely those shops where those extra
> > features are in use.  They can deploy a new version of their
> > monitoring tool that requires an extra privilege on an existing
> > PostgreSQL release without requiring any core modifications, which
> > shaves years of time off the deployment schedule and avoids
> > contentious arguments with the lovable folks who populate this mailing
> > list.  That sounds *terrific* to me compared to the alternative you
> > are proposing.
> 
> I assume backup tools would either document the functions they want
> access to via SQL commands, or supply a script.  I assume they would
> create a non-login role (group) with the desired permissions, and then
> have users inherit from that.  They would also need to be able to allow
> upgrades where they would (conditionally?) add the role and then
> add/revoke permissions as needed, e.g. they might not need all
> permissions they needed in a previous release, or they might need new
> ones.
> 
> That all seems very straight-forward to me.

I'm not against that idea, though I continue to feel that there are
common sets of privileges which backup tools could leverage.

The other issue that I'm running into, again, while considering how to
move back to ACL-based permissions for these objects is that we can't
grant out the actual permissions which currently exist.  That means we
either need to break backwards compatibility, which would be pretty
ugly, in my view, or come up with new functions and then users will have
to know which functions to use when.

As I don't think we really want to break backwards compatibility or
remove existing functionality, the only approach which is going to make
sense is to add additional functions in some cases.  In particular, we
will need alternate versions of pg_terminate_backend and
pg_cancel_backend.  One thought I had was to make that
'pg_signal_backend', but that sounds like we'd allow any signal sent by
a user with that right, which seems a bit much to me...

Perhaps that's what I'll do though, barring other suggestions.

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] exposing pg_controldata and pg_config as functions

2016-01-17 Thread Joe Conway
On 01/16/2016 06:02 AM, Michael Paquier wrote:
> On Wed, Dec 30, 2015 at 9:08 AM, Joe Conway  wrote:
>> 3) Adds new functions, more or less in line with previous discussions:
>>* pg_checkpoint_state()
>>* pg_controldata_state()
>>* pg_recovery_state()
>>* pg_init_state()
> 
> Taking the opposite direction of Josh upthread, why is this split
> actually necessary? Isn't the idea to provide a SQL interface to what
> pg_controldata shows? If this split proves to be useful, shouldn't we
> do it as well for pg_controldata?


The last discussion moved strongly in the direction of separate
functions, and that being different from pg_controldata was not a bad
thing. That said, I'm still of the opinion that there are legitimate
reasons to want the command line pg_controldata and the SQL functions to
produce equivalent, if not identical, results. I just wish we could get
a clear consensus one way or the other.
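Either way, usage of the split form is plain; whether the functions end
up superuser-only or grantable is exactly the open question (the GRANT
below is purely illustrative):

SELECT * FROM pg_controldata_state();
SELECT * FROM pg_checkpoint_state();
-- if restricted by default, access could still be opened per role:
-- GRANT EXECUTE ON FUNCTION pg_checkpoint_state() TO monitor;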


>> ===
>> Missing (TODO once agreement on the above is reached):
>> ---
>> a) documentation
> 
> This would be good to have.

Sure, once we have settled on a direction.

>> b) catversion bump
> 
> That's committer work.

I know, but since I will be the committer if this thing ever gets that
far, I wanted to point out that I had not forgotten that little detail ;-)

>> c) regression tests
> 
> Hm, what would be the value of those tests? I think we could live
> without them for simple functions like that, honestly.

Works for me.

> I think that those functions should be superuser-only. They provide
> information about the system globally.

The tail of this thread seems to be headed away from this direction.
I'll take another look and propose something concrete.

Thanks for the comments!

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development



signature.asc
Description: OpenPGP digital signature


Re: [HACKERS] WIP: Rework access method interface

2016-01-17 Thread Tom Lane
Alexander Korotkov  writes:
> [ aminterface-13.patch ]

I've committed this after some rather significant rework, not all of
it cosmetic in nature.  For example, the patch fell over under
CLOBBER_CACHE_ALWAYS (planner failing to copy data out of relcache
entries that might not survive long) and on 32-bit machines (failure
to fix index_getbitmap API properly).  I might've missed some other
portability issues; we'll see what the buildfarm says.

The amvalidate functions could do with quite a bit more work in the
long run.  For now, they more or less replace the queries we had to
remove from opr_sanity, but I'd like to see almost all of what
opr_sanity does with opclasses get pushed down, so that these functions
can provide a test facility for extensions' opclasses.  That does not
need to hold up adoption of the patch, though.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Additional role attributes && superuser review

2016-01-17 Thread Bruce Momjian
On Sun, Jan 17, 2016 at 01:57:22PM -0500, Stephen Frost wrote:
> Right, we also check in the backend on startup for certain permissions.
> I don't recall offhand if that's forced to 700 or if we allow 750.
> 
> > > I don't recall offhand if that means we'd have to make changes to allow
> > > that, but, for my 2c, I don't see why we wouldn't allow it to be an
> > > option.
> > 
> > OK, that would be an initdb change then.
> 
> It would need to be optional, so distributions and users could choose
> which makes sense for their systems.

While the group owner of the directory is a distributions question, the
permissions are usually a backup-method-specific requirement.  I can see
us creating an SQL function that opens up group permissions on the data
directory for specific backup tools that need it, then granting
permissions on that function to the backup role.   This is another
example where different backup tools need different permissions.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] extend pgbench expressions with functions

2016-01-17 Thread Michael Paquier
On Sun, Jan 17, 2016 at 3:10 AM, Fabien COELHO  wrote:
>> With this example:
>> \set cid debug(sqrt(-1))
>> I get that:
>> debug(script=0,command=1): double nan
>> An error would be more logical, no?
>
>
> If "sqrt(-1)" as a double is Nan for the computer, I'm fine with that. It
> makes the code simpler to just let the math library do its stuff and not
> bother.

Hm. OK..

>> The basic operator functions also do not check for integer overflows.
>
>
> This is a feature. I think that they should not check for overflow, as in C,
> this is just int64_t arithmetic "as is".

(int64_t is not an available type on Windows btw.)

> Moreover, it would be a new feature to add such a check if desirable, so it
> would belong to another patch, it is not related to adding functions.
> The addition already overflows in the current code.
>
> Finally I can think of good reason to use overflows deliberately, so I think
> it would argue against such a change.

Could you show up an example? I am curious about that.

>> Those three ones are just overflowing:
>> \set cid debug(9223372036854775807 + 1)
>> \set cid debug(-9223372036854775808 - 1)
>> \set cid debug(9223372036854775807 * 9223372036854775807)
>> debug(script=0,command=1): int -9223372036854775807
>> debug(script=0,command=2): int 9223372036854775807
>> debug(script=0,command=3): int 1
> All these results are fine from my point of view.

On HEAD we are getting similarly strange results, so I am fine to
withdraw, but that's still very strange to me. The first case
generates -9223372036854775808, the second one produces
9223372036854775807, and the third one generates 1. Or we could make
the error checks more consistent in back-branches, though perhaps
that's indeed not worth it for a client.

>> And this one generates a core dump:
>> \set cid debug(-9223372036854775808 / -1)
>> Floating point exception: 8 (core dumped)
>
> This one is funny, but it is a fact of int64_t life: you cannot divide
> INT64_MIN by -1 because the result cannot be represented as an int64_t.
> This is propably hardcoded in the processor. I do not think it is worth
> doing anything about it for pgbench.

This one, on the contrary, is generating an error on HEAD, and your
patch is causing a crash:
value "9223372036854775808" is out of range for type bigint
That's a regression, no? I am uncomfortable with leaving such holes
in the code, even if this is only a client application.
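To be concrete, guarding both undefined cases is only a couple of
lines; an illustrative sketch, not code from the patch:

#include <stdint.h>
#include <stdbool.h>

/*
 * Divide two int64 values, refusing the two cases where the result is
 * undefined: division by zero, and INT64_MIN / -1, whose mathematical
 * result (2^63) does not fit in an int64 and traps with SIGFPE on x86.
 */
static bool
checked_int64_div(int64_t lval, int64_t rval, int64_t *result)
{
	if (rval == 0)
		return false;			/* division by zero */
	if (rval == -1 && lval == INT64_MIN)
		return false;			/* overflow: 2^63 not representable */
	*result = lval / rval;
	return true;
}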

>> A more general comment: what about splitting all the execution
>> functions into a separate file exprexec.c? evaluateExpr (renamed as
>> execExpr) is the root function, but then we have a set of static
>> sub-functions for each node, like execExprFunc, execExprVar,
>> execExprConst, etc?
>
> I do not see a strong case for renaming. The function part could be split
> because of the indentation, though.

The split makes sense to me given that we have function parsing,
execution and the main code paths in different files. That's not far
from the backend, which does a similar split.

>> This way we would save a bit of tab-indentation, this patch making the new
>> code lines becoming larger than 80 characters because of all the switch/case
>> stuff that gets more complicated.
>
> I agree that the code is pretty ugly, but this is partly due to postgres
> indentation rules for switch which are *NOT* reasonable, IMO.
>
> I put the function evaluation in a function in the attached version.

Thanks, this makes the code a bit clearer.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Additional role attributes && superuser review

2016-01-17 Thread Bruce Momjian
On Sun, Jan 17, 2016 at 06:58:25PM -0500, Stephen Frost wrote:
> I'm not against that idea, though I continue to feel that there are
> common sets of privileges which backup tools could leverage.
> 
> The other issue that I'm running into, again, while considering how to
> move back to ACL-based permissions for these objects is that we can't
> grant out the actual permissions which currently exist.  That means we

Is that because many of them are complex, e.g. you can kill only your
own sessions?

> either need to break backwards compatibility, which would be pretty
> ugly, in my view, or come up with new functions and then users will have
> to know which functions to use when.
> 
> As I don't think we really want to break backwards compatibility or
> remove existing functionality, the only approach which is going to make
> sense is to add additional functions in some cases.  In particular, we
> will need alternate versions of pg_terminate_backend and
> pg_cancel_backend.  One thought I had was to make that

Like these?  Could we define own-user-type rights?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] exposing pg_controldata and pg_config as functions

2016-01-17 Thread Michael Paquier
On Sun, Jan 17, 2016 at 8:48 AM, Andres Freund  wrote:
> On January 17, 2016 12:46:36 AM GMT+01:00, Michael Paquier 
>  wrote:
> , but we surely do not want to give away
>>checkpoint and recovery information.
>
> Why is that? A lot of that information is available anyway?

We are trying to hide WAL-related information in system views and
system functions from non-superusers; my point is to do the same here.
As for the data of pg_control, it seems to me that this can give any
authorized user hints regarding the way Postgres is built, perhaps
letting people know for example which Linux distribution is used and
which flavor of Postgres is running (we already give away some
information with version(), but that's different from the libraries
this is linking to), so an attacker may be able to take advantage of
that to attack potentially outdated packages. And I would think that
many users would actually revoke access to those functions from public
if we made them world-visible. It is also easier to restrict things
first and relax them later if necessary than to do the opposite.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Combining Aggregates

2016-01-17 Thread Haribabu Kommi
On Sat, Jan 16, 2016 at 12:00 PM, David Rowley
 wrote:
> On 16 January 2016 at 03:03, Robert Haas  wrote:
>>
>> On Tue, Dec 29, 2015 at 7:39 PM, David Rowley
>>  wrote:
>> >> No, the idea I had in mind was to allow it to continue to exist in the
>> >> expanded format until you really need it in the varlena format, and
>> >> then serialize it at that point.  You'd actually need to do the
>> >> opposite: if you get an input that is not in expanded format, expand
>> >> it.
>> >
>> > Admittedly I'm struggling to see how this can be done. I've spent a good
>> > bit
>> > of time analysing how the expanded object stuff works.
>> >
>> > Hypothetically let's say we can make it work like:
>> >
>> > 1. During partial aggregation (finalizeAggs = false), in
>> > finalize_aggregates(), where we'd normally call the final function,
>> > instead
>> > flatten INTERNAL states and store the flattened Datum instead of the
>> > pointer
>> > to the INTERNAL state.
>> > 2. During combining aggregation (combineStates = true) have all the
>> > combine
>> > functions written in such a ways that the INTERNAL states expand the
>> > flattened states before combining the aggregate states.
>> >
>> > Does that sound like what you had in mind?
>>
>> More or less.  But what I was really imagining is that we'd get rid of
>> the internal states and replace them with new datatypes built to
>> purpose.  So, for example, for string_agg(text, text) you could make a
>> new datatype that is basically a StringInfo.  In expanded form, it
>> really is a StringInfo.  When you flatten it, you just get the string.
>> When somebody expands it again, they again have a StringInfo.  So the
>> RW pointer to the expanded form supports append cheaply.
>
>
> That sounds fine in theory, but where and how do you suppose we determine
> which expand function to call? Nothing exists in the catalogs to decide this
> currently.

I am thinking of having the transition function return and accept
StringInfoData instead of the PolyNumAggState internal data, for
int8_avg_accum for example.

The StringInfoData is formed from the members of the PolyNumAggState
structure. The input StringInfoData is transformed back into
PolyNumAggState data to finish the calculation, and the result is again
formed into StringInfoData and returned. Similar changes need to be made
to the final function's input type as well. I am not sure whether this
approach may have some impact on performance.


Regards,
Hari Babu
Fujitsu Australia


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Additional role attributes && superuser review

2016-01-17 Thread Stephen Frost
Bruce,

* Bruce Momjian (br...@momjian.us) wrote:
> On Sun, Jan 17, 2016 at 01:57:22PM -0500, Stephen Frost wrote:
> > Right, we also check in the backend on startup for certain permissions.
> > I don't recall offhand if that's forced to 700 or if we allow 750.
> > 
> > > > I don't recall offhand if that means we'd have to make changes to allow
> > > > that, but, for my 2c, I don't see why we wouldn't allow it to be an
> > > > option.
> > > 
> > > OK, that would be an initdb change then.
> > 
> > It would need to be optional, so distributions and users could choose
> > which makes sense for their systems.
> 
> While the group owner of the directory is a distributions question, the
> permissions are usually a backup-method-specific requirement.  I can see
> us creating an SQL function that opens up group permissions on the data
> directory for specific backup tools that need it, then granting
> permissions on that function to the backup role.   This is another
> example where different backup tools need different permissions.

I don't believe we can really consider group ownership and group
permissions independently.  They really go hand-in-hand.  On
RedHat-based system, where the group is set as 'staff', you probably
don't want group permissions to be allowed.  On Debian-based systems,
where there is a dedicated 'postgres' group, group permissions are fine
to allow.

Group ownership and permissions aren't a backup-method-specific
requirement either, in my view.  I'm happy to chat with Marco (who has
said he would be weighing in on this thread when he is able to)
regarding barman, and whomever would be appropriate for BART (perhaps
you could let me know..?), but if it's possible to do a backup without
being a superuser and with only read access to the data directory, I
would expect every backup solution to view that as a feature which they
want to support, as there are environments which will find it desirable,
at a minimum, and even some which will require it.

Lastly, I'm pretty sure this would have to be a postgresql.conf option
as we check the permissions on the data directory on startup.  I don't
see how having an SQL function would work there as I certainly wouldn't
want to have the permissions changing on a running system.  That strikes
me as being risky without any real benefit.  Either it's safe and
acceptable to allow those rights to the group, or it isn't.  A temporary
grant really isn't helping with anything that I can see, surely there
are numerous ways to exploit such a time-based grant.

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] Additional role attributes && superuser review

2016-01-17 Thread Stephen Frost
* Bruce Momjian (br...@momjian.us) wrote:
> On Sun, Jan 17, 2016 at 06:58:25PM -0500, Stephen Frost wrote:
> > I'm not against that idea, though I continue to feel that there are
> > common sets of privileges which backup tools could leverage.
> > 
> > The other issue that I'm running into, again, while considering how to
> > move back to ACL-based permissions for these objects is that we can't
> > grant out the actual permissions which currently exist.  That means we
> 
> Is that because many of them are complex, e.g. you can kill only your
> own sessions?

Right.

> > either need to break backwards compatibility, which would be pretty
> > ugly, in my view, or come up with new functions and then users will have
> > to know which functions to use when.
> > 
> > As I don't think we really want to break backwards compatibility or
> > remove existing functionality, the only approach which is going to make
> > sense is to add additional functions in some cases.  In particular, we
> > will need alternate versions of pg_terminate_backend and
> > pg_cancel_backend.  One thought I had was to make that
> 
> Like these?  Could we define own-user-type rights?

Interesting idea but I don't really see that being general enough that
we would want to burn a GRANT bit for it...

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] pgsql: Further tweaking of print_aligned_vertical().

2016-01-17 Thread Bruce Momjian
On Mon, Jan  4, 2016 at 01:18:02PM -0500, Tom Lane wrote:
> Greg Stark  writes:
> > I used plenty of images I pulled off the net without regard for
> > copyright so I hesitated to put it up. I suppose that's par for the
> > course with these kinds of presentations. In any case it was just a
> > quick lightning talk I threw together to describe a few days of mostly
> > waiting around for builds to finish. Here are the two lightning talks
> > on Google:
> 
> > VAX (simh):
> > https://docs.google.com/presentation/d/1PG-bMU4WiS1pjtBwRvH4cE-nN9y5gj8ZLCO7KMrlvYg/view
> 
> > Fuzzer:
> > https://docs.google.com/presentation/d/12Dd9Bhcugkdi2w0ye4T1fy9ccjconJEz9cNthdeyH7k/view
> 
> Very cool, thanks!

No, you need to see the video!  It was amazing.  I couldn't believe what
I was hearing.  I was like "YOU DID WHAT!"  Unfortunately, I don't think
it was recorded.  :-(

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Additional role attributes && superuser review

2016-01-17 Thread Bruce Momjian
On Sun, Jan 17, 2016 at 09:10:23PM -0500, Stephen Frost wrote:
> > While the group owner of the directory is a distributions question, the
> > permissions are usually a backup-method-specific requirement.  I can see
> > us creating an SQL function that opens up group permissions on the data
> > directory for specific backup tools that need it, then granting
> > permissions on that function to the backup role.   This is another
> > example where different backup tools need different permissions.
> 
> I don't believe we can really consider group ownership and group
> permissions independently.  They really go hand-in-hand.  On
> RedHat-based system, where the group is set as 'staff', you probably
> don't want group permissions to be allowed.  On Debian-based systems,
> where there is a dedicated 'postgres' group, group permissions are fine
> to allow.

Yes, I can see that as problematic.  Seems it would have to be something
done by the administrator from the command-line.

> Group ownership and permissions aren't a backup-method-specific
> requirement either, in my view.  I'm happy to chat with Marco (who has
> said he would be weighing in on this thread when he is able to)
> regarding barman, and whomever would be appropriate for BART (perhaps
> you could let me know..?), but if it's possible to do a backup without
> being a superuser and with only read access to the data directory, I
> would expect every backup solution to view that as a feature which they
> want to support, as there are environments which will find it desirable,
> at a minimum, and even some which will require it.

pg_dump doesn't need to read the PGDATA directory, and I thought this
permission was to be used by pg_dump users as well.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Additional role attributes && superuser review

2016-01-17 Thread Stephen Frost
* Bruce Momjian (br...@momjian.us) wrote:
> On Sun, Jan 17, 2016 at 09:10:23PM -0500, Stephen Frost wrote:
> > > While the group owner of the directory is a distributions question, the
> > > permissions are usually a backup-method-specific requirement.  I can see
> > > us creating an SQL function that opens up group permissions on the data
> > > directory for specific backup tools that need it, then granting
> > > permissions on that function to the backup role.   This is another
> > > example where different backup tools need different permissions.
> > 
> > I don't believe we can really consider group ownership and group
> > permissions independently.  They really go hand-in-hand.  On
> > RedHat-based system, where the group is set as 'staff', you probably
> > don't want group permissions to be allowed.  On Debian-based systems,
> > where there is a dedicated 'postgres' group, group permissions are fine
> > to allow.
> 
> Yes, I can see that as problematic.  Seems it would have to be something
> done by the administrator from the command-line.

initdb on both RedHat and Debian-based systems is run, generally
speaking, from the packaging scripts.  They would be able to pass the
correct options to initdb (or PG itself, if we decide that's
necessary..).

> > Group ownership and permissions aren't a backup-method-specific
> > requirement either, in my view.  I'm happy to chat with Marco (who has
> > said he would be weighing in on this thread when he is able to)
> > regarding barman, and whomever would be appropriate for BART (perhaps
> > you could let me know..?), but if it's possible to do a backup without
> > being a superuser and with only read access to the data directory, I
> > would expect every backup solution to view that as a feature which they
> > want to support, as there are environments which will find it desirable,
> > at a minimum, and even some which will require it.
> 
> pg_dump doesn't need to read the PGDATA directory, and I thought this
> permission was to be used by pg_dump users as well.

No.  That has been a source of confusion, though I'm not quite sure how
or why, beyond the general assumption that anything 'backup' must
include 'pg_dump' (I don't generally consider that to be the case,
myself, but it seems others do...).

This is only for file-based backups.

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] Combining Aggregates

2016-01-17 Thread David Rowley
On 18 January 2016 at 14:36, Haribabu Kommi 
wrote:

> On Sat, Jan 16, 2016 at 12:00 PM, David Rowley
>  wrote:
> > On 16 January 2016 at 03:03, Robert Haas  wrote:
> >>
> >> On Tue, Dec 29, 2015 at 7:39 PM, David Rowley
> >>  wrote:
> >> >> No, the idea I had in mind was to allow it to continue to exist in
> the
> >> >> expanded format until you really need it in the varlena format, and
> >> >> then serialize it at that point.  You'd actually need to do the
> >> >> opposite: if you get an input that is not in expanded format, expand
> >> >> it.
> >> >
> >> > Admittedly I'm struggling to see how this can be done. I've spent a
> good
> >> > bit
> >> > of time analysing how the expanded object stuff works.
> >> >
> >> > Hypothetically let's say we can make it work like:
> >> >
> >> > 1. During partial aggregation (finalizeAggs = false), in
> >> > finalize_aggregates(), where we'd normally call the final function,
> >> > instead
> >> > flatten INTERNAL states and store the flattened Datum instead of the
> >> > pointer
> >> > to the INTERNAL state.
> >> > 2. During combining aggregation (combineStates = true) have all the
> >> > combine
> >> > functions written in such a ways that the INTERNAL states expand the
> >> > flattened states before combining the aggregate states.
> >> >
> >> > Does that sound like what you had in mind?
> >>
> >> More or less.  But what I was really imagining is that we'd get rid of
> >> the internal states and replace them with new datatypes built to
> >> purpose.  So, for example, for string_agg(text, text) you could make a
> >> new datatype that is basically a StringInfo.  In expanded form, it
> >> really is a StringInfo.  When you flatten it, you just get the string.
> >> When somebody expands it again, they again have a StringInfo.  So the
> >> RW pointer to the expanded form supports append cheaply.
> >
> >
> > That sounds fine in theory, but where and how do you suppose we determine
> > which expand function to call? Nothing exists in the catalogs to decide
> this
> > currently.
>
> I am thinking of having the transition function return and accept
> StringInfoData instead of the PolyNumAggState internal data, for
> int8_avg_accum for example.
>

hmm, so wouldn't that mean that the transition function would need to (for
each input tuple):

1. Parse that StringInfo into tokens.
2. Create a new aggregate state object.
3. Populate the new aggregate state based on the tokenised StringInfo; this
would perhaps require calling various *_in() functions on each token.
4. Add the new tuple to the aggregate state.
5. Build a new StringInfo based on the aggregate state modified in 4.

?

Currently the transition function only does 4, and performs 2 only if it's
the first tuple.

Is that what you mean? I'd say that would slow things down significantly!

To get a gauge on how much more CPU work that would be for some aggregates,
have a look at how simple int8_avg_accum() is currently when we have
HAVE_INT128 defined. For the case of AVG(BIGINT) we just really have:

state->sumX += newval;
state->N++;

The above code is step 4 only. So unless I've misunderstood you, that would
need to turn into steps 1-5 above. Step 4 here is probably just a handful
of instructions right now, but adding code for steps 1,2,3 and 5 would turn
that into hundreds.
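As a rough illustration of that cost, with field names borrowed from
PolyNumAggState and everything else hypothetical (this is not from any
posted patch), steps 1, 3 and 5 would wrap the two-line step 4 in
something like:

#include "postgres.h"
#include "lib/stringinfo.h"

typedef struct AvgState
{
	int64		N;				/* number of input values */
	int64		sumX;			/* running sum */
} AvgState;

/* step 5: flatten the state into a StringInfo after every tuple */
static void
avg_flatten(const AvgState *state, StringInfo buf)
{
	appendStringInfo(buf, INT64_FORMAT ":" INT64_FORMAT,
					 state->N, state->sumX);
}

/* steps 1-3: re-parse the flattened state before every tuple */
static void
avg_expand(const char *flat, AvgState *state)
{
	if (sscanf(flat, INT64_FORMAT ":" INT64_FORMAT,
			   &state->N, &state->sumX) != 2)
		elog(ERROR, "malformed serialized aggregate state");
}

/*
 * step 4, all the transition function has to do today:
 *		state->sumX += newval;
 *		state->N++;
 */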

I've been trying to avoid any overhead by adding the serializeStates flag
to make_agg() so that we can maintain the same performance when we're just
passing internal states around in the same process. This keeps the
conversions between internal state and serialised state to a minimum.

> The StringInfoData is formed from the members of the PolyNumAggState
> structure. The input StringInfoData is transformed back into
> PolyNumAggState data to finish the calculation, and the result is again
> formed into StringInfoData and returned. Similar changes need to be made
> to the final function's input type as well. I am not sure whether this
> approach may have some impact on performance.


-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] Additional role attributes && superuser review

2016-01-17 Thread Bruce Momjian
On Sun, Jan 17, 2016 at 09:23:14PM -0500, Stephen Frost wrote:
> > > Group ownership and permissions aren't a backup-method-specific
> > > requirement either, in my view.  I'm happy to chat with Marco (who has
> > > said he would be weighing in on this thread when he is able to)
> > > regarding barman, and whomever would be appropriate for BART (perhaps
> > > you could let me know..?), but if it's possible to do a backup without
> > > being a superuser and with only read access to the data directory, I
> > > would expect every backup solution to view that as a feature which they
> > > want to support, as there are environments which will find it desirable,
> > > at a minimum, and even some which will require it.
> > 
> > pg_dump doesn't need to read the PGDATA directory, and I thought this
> > permission was to be used by pg_dump users as well.
> 
> No.  That has been a source of confusion, though I'm not quite sure how
> or why, beyond the general assumption that anything 'backup' must
> include 'pg_dump' (I don't generally consider that to be the case,
> myself, but it seems others do...).

I think the source of that is that many people have asked for
backup-only uses, and I thought running pg_dump or pg_dumpall was one of
those cases.

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Additional role attributes && superuser review

2016-01-17 Thread Stephen Frost
* Bruce Momjian (br...@momjian.us) wrote:
> On Sun, Jan 17, 2016 at 09:23:14PM -0500, Stephen Frost wrote:
> > > > Group ownership and permissions aren't a backup-method-specific
> > > > requirement either, in my view.  I'm happy to chat with Marco (who has
> > > > said he would be weighing in on this thread when he is able to)
> > > > regarding barman, and whomever would be appropriate for BART (perhaps
> > > > you could let me know..?), but if it's possible to do a backup without
> > > > being a superuser and with only read access to the data directory, I
> > > > would expect every backup solution to view that as a feature which they
> > > > want to support, as there are environments which will find it desirable,
> > > > at a minimum, and even some which will require it.
> > > 
> > > pg_dump doesn't need to read the PGDATA directory, and I thought this
> > > permission was to be used by pg_dump users as well.
> > 
> > No.  That has been a source of confusion, though I'm not quite sure how
> > or why, beyond the general assumption that anything 'backup' must
> > include 'pg_dump' (I don't generally consider that to be the case,
> > myself, but it seems others do...).
> 
> I think the source of that is that many people have asked for
> backup-only uses, and I thought running pg_dump or pg_dumpall was one of
> those cases.

I'd want that to be a separate permission, which would really be
something along the lines of "allowed to read everything through a DB
connection", which is what pg_dump/pg_dumpall need.

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] Optimizer questions

2016-01-17 Thread Bruce Momjian
On Tue, Jan  5, 2016 at 05:55:28PM +0300, konstantin knizhnik wrote:
> Hi hackers, 
> 
> I want to ask two questions about PostgreSQL optimizer.
> I have the following query:
> 
> SELECT o.id as id,s.id as sid,o.owner,o.creator,o.parent_id
> as dir_id,s.mime_id,m.c_type,s.p_file,s.mtime,o.ctime,o.name
> ,o.title,o.size,o.deleted,la.otime,la.etime,uo.login as owner_login,uc.login 
> as
> creator_login,(CASE WHEN f.user_id IS NULL THEN 0 ELSE 1 END) AS flagged,
> (select 'userid\\:'||string_agg(user_id,' userid\\:') from 
> get_authorized_users
> (o.id)) as acl FROM objects s JOIN objects o ON s.original_file=o.id LEFT JOIN
> flags f ON o.id=f.obj_id AND o.owner=f.user_id LEFT JOIN objects_last_activity
> la ON o.id = la.obj_id AND o.owner = la.user_id, mime m, users uc , users uo
> WHERE (s.mime_id=904 or s.mime_id=908) AND m.mime_id = o.mime_id AND o.owner =
> uo.user_id AND o.creator = uc.user_id ORDER BY s.mtime LIMIT 9;

FYI, I could not make any sense out of this query, and I frankly can't
figure out how others can understand it.  :-O   Anyway, I ran it through
pgFormatter (https://github.com/darold/pgFormatter), which I am showing
here because I was impressed with the results:

SELECT
    o.id AS id,
    s.id AS sid,
    o.owner,
    o.creator,
    o.parent_id AS dir_id,
    s.mime_id,
    m.c_type,
    s.p_file,
    s.mtime,
    o.ctime,
    o.name,
    o.title,
    o.size,
    o.deleted,
    la.otime,
    la.etime,
    uo.login AS owner_login,
    uc.login AS creator_login,
    ( CASE
          WHEN f.user_id IS NULL THEN 0
          ELSE 1
      END ) AS flagged,
    ( SELECT
          'userid\\:' || string_agg ( user_id, ' userid\\:' )
      FROM
          get_authorized_users ( o.id ) ) AS acl
FROM
    objects s
    JOIN objects o ON s.original_file = o.id
    LEFT JOIN flags f ON o.id = f.obj_id
        AND o.owner = f.user_id
    LEFT JOIN objects_last_activity la ON o.id = la.obj_id
        AND o.owner = la.user_id,
    mime m,
    users uc,
    users uo
WHERE (
        s.mime_id = 904
        OR s.mime_id = 908 )
    AND m.mime_id = o.mime_id
    AND o.owner = uo.user_id
    AND o.creator = uc.user_id
ORDER BY
    s.mtime
LIMIT 9;

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Combining Aggregates

2016-01-17 Thread Robert Haas
On Sun, Jan 17, 2016 at 9:26 PM, David Rowley
 wrote:
> hmm, so wouldn't that mean that the transition function would need to (for
> each input tuple):
>
> 1. Parse that StringInfo into tokens.
> 2. Create a new aggregate state object.
> 3. Populate the new aggregate state based on the tokenised StringInfo, this
> would perhaps require that various *_in() functions are called on each
> token.
> 4. Add the new tuple to the aggregate state.
> 5. Build a new StringInfo based on the aggregate state modified in 4.
>
> ?

I don't really know what you mean by parse the StringInfo into tokens.
The whole point of the expanded-object interface is to be able to keep
things in an expanded internal form so that you *don't* have to
repeatedly construct and deconstruct internal data structures.  I
worked up an example of this approach using string_agg(), which I
attach here.  This changes the transition type of string_agg() from
internal to text.  The same code would work for bytea_string_agg(),
which would allow removal of some other code, but this patch doesn't
do that, because the point of this is to elucidate the approach.

In my tests, this seems to be slightly slower than what we're doing
today; worst of all, it adds a handful of cycles to
advance_transition_function() even when the aggregate is not an
expanded object or, indeed, not even pass-by-reference.  Some of this
might be able to be fixed by a little massaging - in particular,
DatumIsReadWriteExpandedObject() seems like it could be partly or
entirely inlined, and maybe there's some other way to improve the
coding here.

Generally, I think finding a way to pass expanded objects through
nodeAgg.c would be a good thing to pursue, if we can make it work.
The immediate impetus for changing things this way would be that we
wouldn't need to add a mechanism for serializing and deserializing
internal functions just to pass around partial aggregates.  But
there's another advantage, too: right now,
advance_transition_function() does a lot of data copying to move data
from per-call context to the per-aggregate context.  When an expanded
object is in use, this can be skipped.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index f49114a..a2c29c4 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -756,11 +756,18 @@ advance_transition_function(AggState *aggstate,
 	aggstate->curpertrans = NULL;
 
 	/*
-	 * If pass-by-ref datatype, must copy the new value into aggcontext and
-	 * pfree the prior transValue.  But if transfn returned a pointer to its
-	 * first input, we don't need to do anything.
+	 * If we got a R/W pointer to an expanded object, we can just take over
+	 * control of the object.  Any other pass-by-ref value must be copied into
+	 * aggcontext and the prior value freed; with the exception that if transfn
+	 * returned a pointer to its first input, we don't need to do anything.
 	 */
-	if (!pertrans->transtypeByVal &&
+	if (DatumIsReadWriteExpandedObject(newVal, fcinfo->isnull,
+	   pertrans->transtypeLen))
+	{
+		newVal = TransferExpandedObject(newVal,
+		  aggstate->aggcontexts[aggstate->current_set]->ecxt_per_tuple_memory);
+	}
+	else if (!pertrans->transtypeByVal &&
 		DatumGetPointer(newVal) != DatumGetPointer(pergroupstate->transValue))
 	{
 		if (!fcinfo->isnull)
diff --git a/src/backend/utils/adt/Makefile b/src/backend/utils/adt/Makefile
index 2cb7bab..61c9359 100644
--- a/src/backend/utils/adt/Makefile
+++ b/src/backend/utils/adt/Makefile
@@ -12,7 +12,7 @@ include $(top_builddir)/src/Makefile.global
 OBJS = acl.o arrayfuncs.o array_expanded.o array_selfuncs.o \
 	array_typanalyze.o array_userfuncs.o arrayutils.o ascii.o \
 	bool.o cash.o char.o date.o datetime.o datum.o dbsize.o domains.o \
-	encode.o enum.o expandeddatum.o \
+	encode.o enum.o expandeddatum.o expandedstring.o \
 	float.o format_type.o formatting.o genfile.o \
 	geo_ops.o geo_selfuncs.o inet_cidr_ntop.o inet_net_pton.o int.o \
 	int8.o json.o jsonb.o jsonb_gin.o jsonb_op.o jsonb_util.o \
diff --git a/src/backend/utils/adt/expandedstring.c b/src/backend/utils/adt/expandedstring.c
new file mode 100644
index 000..4e279a3
--- /dev/null
+++ b/src/backend/utils/adt/expandedstring.c
@@ -0,0 +1,86 @@
+/*-
+ *
+ * expandedstring.c
+ *	  Expand a varlena into a StringInfo.
+ *
+ * Portions Copyright (c) 1996-2016, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/utils/adt/expandeddatum.c
+ *
+ *-
+ */
+#include "postgres.h"
+
+#include "fmgr.h"
+#include "utils/expandedstring.h"
+#include "utils/memutils.h"
+
+static Size ESI_get_flat_size(ExpandedObjectHeader *eohptr);
+static void ES

Re: [HACKERS] WIP: Rework access method interface

2016-01-17 Thread Robert Haas
On Sat, Jan 16, 2016 at 12:32 PM, Tom Lane  wrote:
> Then we're going to end up with option A; and I suspect that we'll never
> bother with factoring out any common code, because it won't be worth the
> notational trouble if it involves common code that's in a different file
> in a different directory.

Since when is sharing code across files in different directories even
an iota more difficult than sharing code across files in the same
directory?

Sharing code across multiple files is only slightly more difficult
than sharing it within a file.  You just have to create a header file
in the appropriate place and stick the necessary declarations in
there.  I'll admit that's some work, but it's not usually very much.
After that, where the header gets included from really makes no
difference.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Compiler warning in pg_am changes

2016-01-17 Thread David Rowley
Hi,

I've attached a small patch to fix a compiler warning that is new as of
65c5fcd3.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


amcostestimate_cast.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Compiler warning in pg_am changes

2016-01-17 Thread Tom Lane
David Rowley  writes:
> I've attached a small patch to fix a compiler warning that is new as of
> 65c5fcd3.

Pushed, thanks.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] extend pgbench expressions with functions

2016-01-17 Thread Michael Paquier
On Mon, Jan 18, 2016 at 10:07 AM, Michael Paquier
 wrote:
> On Sun, Jan 17, 2016 at 3:10 AM, Fabien COELHO  wrote:
>> I put the function evaluation in a function in the attached version.
>
> Thanks, this makes the code a bit clearer.

OK, so I had an extra look at this patch and I am marking it as ready
for committer. A couple of things to be aware of, and the result of
this thread with the patch in its current state:
- The patch keeps \setrandom. Fabien and I agree that it could simply be
removed, though it is kept in the patch because it is easier to remove
existing code than to add it again, per Fabien's concerns.
- INT64_MIN / -1 dumps core with the patch, while it errors on HEAD. I
think this should be fixed; Fabien does not.
- There are not many overflow checks for the existing int64 operators
and functions. HEAD doesn't do much; this patch makes the situation a
bit worse, even if there are a couple of checks for int() for example.
We do not do any checks for sqrt(-N) (N > 0), for example.
- It may be more interesting to move the function execution code into
a separate file for clarity. Not mandatory though.
Except those comments, all the other issues have been addressed. I
think this is a great patch, and greatly improves the extensibility of
pgbench.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Interesting read on SCM upending software and hardware architecture

2016-01-17 Thread Bruce Momjian
On Thu, Jan  7, 2016 at 02:30:06PM -0600, Jim Nasby wrote:
> https://queue.acm.org/detail.cfm?id=2874238 discusses how modern
> Storage Class Memory (SCM), such as PCIe SSD and NVDIMMs are
> completely upending every assumption made about storage. To put this
> in perspective, you can now see storage latency that is practically
> on-par with things like lock acquisition[1].

How is this different from Fusion I/O devices, which have been around
for years?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+ Roman grave inscription +


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] make error - libpqdll.def No such file or directory

2016-01-17 Thread Igal @ Lucee.org

p.s. --

On 1/17/2016 3:24 PM, Igal @ Lucee.org wrote:

When running make I encounter the following error:

gcc.exe: error: libpqdll.def: No such file or directory
/home/Admin/sources/postgresql-9.5.0/src/Makefile.shlib:393: recipe 
for target 'libpq.dll' failed

make[3]: *** [libpq.dll] Error 1
make[3]: Leaving directory 
'/home/Admin/builds/postgresql-9.5.0/src/interfaces/libpq'

Makefile:17: recipe for target 'all-libpq-recurse' failed
make[2]: *** [all-libpq-recurse] Error 2
make[2]: Leaving directory 
'/home/Admin/builds/postgresql-9.5.0/src/interfaces'

Makefile:34: recipe for target 'all-interfaces-recurse' failed
make[1]: *** [all-interfaces-recurse] Error 2
make[1]: Leaving directory '/home/Admin/builds/postgresql-9.5.0/src'
GNUmakefile:11: recipe for target 'all-src-recurse' failed
make: *** [all-src-recurse] Error 2

But /home/Admin/builds/postgresql-9.5.0/src/interfaces/libpq does 
contain the file libpqdll.def


The file /home/Admin/sources/postgresql-9.5.0/src/Makefile.shlib exists 
as well.


It's hard for me to decipher which file is reporting the error and which 
file is not found.


Any feedback would be greatly appreciated, thanks!


Any ideas?

--

Igal Sapir
Lucee Core Developer
Lucee.org 





Re: [HACKERS] Support for N synchronous standby servers - take 2

2016-01-17 Thread Michael Paquier
On Sun, Jan 17, 2016 at 11:09 PM, Masahiko Sawada  wrote:
> On Wed, Jan 13, 2016 at 1:54 AM, Alvaro Herrera wrote:
> * Confirm value of pg_stat_replication.sync_state (sync, async or potential)
> * Confirm that the data is synchronously replicated to multiple
> standbys in same cases.
>   * case 1 : The standby which is not listed in s_s_name, is down
>   * case 2 : The standby which is listed in s_s_names but potential
> standby, is down
>   * case 3 : The standby which is considered as sync standby, is down.
> * Standby promotion
>
> In order to confirm that the commit is never done in case #3 unless a
> new sync standby comes up, I think we need a framework that cancels the
> executing query.
> That is, what I'm planning is:
> 1. Set up the master server (s_s_names = '2, standby1, standby2')
> 2. Set up two standby servers
> 3. Standby1 goes down
> 4. Create some contents on the master (but the transaction is not committed)
> 5. Cancel the #4 query. (Also confirm that the flush location of only
> standby2 makes progress)

This will need some thinking and is not as easy as it sounds. There is
no way to hold a connection open after executing a query in the current
TAP infrastructure. You are just mentioning case 3, but actually cases
1 and 2 fall into the same need: if there is a failure we want to be
able to avoid getting stuck in the test forever and to have a way to
cancel a query execution at will. TAP uses psql -c to execute any SQL
queries, but we would need something far lower-level, basically using
the perl driver for Postgres or an equivalent here.

Honestly, for those tests I just thought that we could get to something
reliable by looking at how each sync replication setup is reflected in
pg_stat_replication, as the flow is really getting complicated; giving
the user a clear SQL-level representation of what is actually occurring
in the server, depending on the configuration used, is what matters
here.
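Concretely, the sort of check I have in mind would be along these
lines; the expected values are hypothetical and depend on the final
syntax chosen for s_s_names:

-- poll on the master after each scenario,
-- e.g. with s_s_names = '2, standby1, standby2'
SELECT application_name, sync_priority, sync_state
FROM pg_stat_replication
ORDER BY application_name;
-- expect standby1 and standby2 to report 'sync', and any extra
-- listed-but-unused node to report 'potential'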
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] jsonb - jsonb operators

2016-01-17 Thread Dmitry Dolgov
>  if there's any future intention to add a delete operator that removes
> element/pair matches?

I think the operator (jsonb - jsonb) is logical because we have a shallow
concatenation function (something like a "union" operation), but we have
nothing like "set difference" and "intersection" functions. Actually, I
thought to implement these functions (at least for jsonbx). But of course
this function should be quite simple and consider only full key/value
matching as a target.
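For concreteness, here is what exists in 9.5 next to the hypothetical
semantics being discussed (the last statement is proposed behavior, not
something that runs today):

-- existing 9.5 operators: shallow concatenation ("union") and key deletion
SELECT '{"a": 1, "b": 2}'::jsonb || '{"b": 3}'::jsonb;  -- {"a": 1, "b": 3}
SELECT '{"a": 1, "b": 2}'::jsonb - 'b';                 -- {"a": 1}

-- proposed jsonb - jsonb "set difference", removing only full
-- key/value matches:
-- SELECT '{"a": 1, "b": 2}'::jsonb - '{"b": 2, "c": 3}'::jsonb;
-- would yield {"a": 1}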


Re: [HACKERS] pglogical - logical replication contrib module

2016-01-17 Thread Craig Ringer
On 17 January 2016 at 14:46, leo  wrote:

> I also ran into the same problem and am waiting for a bug fix.
> Please post an update when a new patch is published.
>
>
There's a point release coming soon that'll incorporate these fixes and a
number of others. It'll be posted here in a few days.


-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] pg_hba_lookup function to get all matching pg_hba.conf entries

2016-01-17 Thread Haribabu Kommi
On Thu, Dec 31, 2015 at 10:47 AM, Haribabu Kommi
 wrote:
> On Wed, Dec 30, 2015 at 9:48 PM, Shulgin, Oleksandr
>  wrote:
>> On Wed, Dec 30, 2015 at 4:31 AM, Haribabu Kommi 
>> wrote:
>>>
>>>
>>> Adding quotes to pg_hba_lookup function makes it different from others.
>>> The issues regarding the same is already discussed in [1].
>>>
>>> select a.database[1], b.datname from
>>> pg_hba_lookup('postgres','kommih','::1')
>>> as a, pg_database as b where a.database[1]
>>> = b.datname;
>>>
>>> Queries like the above are not possible with quoted output. It is very
>>> rare for the pg_hba_lookup function to be used in join operations, but
>>> it is still better to provide data without quotes, so I reverted these
>>> changes in the attached latest patch.
>>
>>
>> That's a good point.  I wonder that maybe instead of re-introducing quotes
>> we could somehow make the unquoted keywords that have special meaning stand
>> out, e.g:
>>
>> database  | {$sameuser}
>> user_name | {$all}
>>
>> That should make it obvious which of the values are placeholders and doesn't
>> interfere with joining database or user catalogs (while I would call
>> "sameuser" a very unlikely name for a database, "all" might be not that
>> unlikely name for a user, e.g. someone called like "Albert L. Lucky" could
>> use that as a login name).
>
> It is not only a problem with joins; the following two cases work
> only without quotes.
> With quotes, the query doesn't match the database name.
>
> select * from pg_hba_lookup('Test', 'kommih','127.0.0.1') where
> database = '{"Test"}';
> select * from pg_hba_lookup('Test', 'kommih','127.0.0.1') where
> database = '{Test}';

Hi, do you have any further comments on the patch that need to be
taken care of?

Regards,
Hari Babu
Fujitsu Australia


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: BUG #13685: Archiving while idle every archive_timeout with wal_level hot_standby

2016-01-17 Thread Michael Paquier
On Sun, Jan 17, 2016 at 1:37 PM, Amit Kapila  wrote:
> On Sat, Jan 16, 2016 at 6:37 PM, Michael Paquier 
> wrote:
>>
>> On Sat, Jan 16, 2016 at 9:07 PM, Amit Kapila 
>> wrote:
>> > On Sat, Jan 16, 2016 at 5:08 PM, Michael Paquier
>> > 
>> > wrote:
>> >>
>> >> On Sat, Jan 16, 2016 at 7:10 PM, Amit Kapila 
>> >> wrote:
>> >> > Won't this be a problem if the checkpoint occurs after a long time
>> >> > and
>> >> > in
>> >> > the mean time there is some activity in the server?
>> >>
>> >> Why? If there is some activity on the server, the snapshot will be
>> >> immediately taken at the next iteration without caring about the
>> >> checkpoint.
>> >>
>> >
>> > + (insert_lsn % XLOG_SEG_SIZE) != SizeOfXLogLongPHD))
>> >
>> > Do you mean to intend that it is protected by above check in the
>> > patch?
>>
>> Yep, in combination with is_switch_current to check if the last
>> completed segment was forcibly switched.
>>
>> > Isn't it possible that so much WAL is inserted between bgwriter cycles,
>> > that when it checks the location of WAL, it founds it to be at the
>> > beginning
>> > of a new segment?
>>
>> Er, no. Even if the insert LSN is at the beginning of a new segment,
>> this would take a standby snapshot if the last segment was not
>> forcibly switched.
>>
>
> So here, if I understand correctly, the check related to the last
> forcibly-switched segment is based on the fact that if it was forcibly
> switched, then we don't need to log this record? What is the reason for
> such an assumption?

Yes, the thing is that, as mentioned at the beginning of the thread, a
low value of archive_timeout causes a segment to be forcibly switched
at the end of the timeout defined by this parameter. In combination
with the standby snapshot taken in bgwriter since 9.4, this is causing
segments to be always switched even if a system is completely idle.
Before that, in 9.3 and older versions, we would have a segment
forcibly switched on an idle system only once per checkpoint. The
documentation is already clear about that actually. The whole point of
this patch is to fix this regression, aka switch to a new segment only
once per checkpoint.

> That is not very clear from reading the comments you have
> added in the patch; maybe you can expand the comments slightly to
> explain the reasoning.

OK. Here are the comments that this patch is adding:
- * only log if enough time has passed and some xlog record has
- * been inserted.
+ * Only log if enough time has passed and some xlog record has
+ * been inserted on a new segment. On an idle system where
+ * segments can be archived at a fast pace with for example a
+ * low archive_timeout setting, also avoid logging a new
+ * standby snapshot if the current insert position is still
+ * at the beginning of the segment that has just been switched.
+ *
+ * It is possible that GetXLogLastSwitchPtr points to the last
+ * position of previous segment or to the first position of the
+ * new segment after the switch, hence take both cases into
+ * account when deciding if a standby snapshot should be taken.
+ * (see comments on top of RequestXLogSwitch for more details).
  */
It mentions the problem that this patch is trying to fix; perhaps you
mean that it should also state explicitly that on an idle system standby
snapshots should happen at most once per checkpoint?
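For readers following along, the decision reduces to something like the
following sketch (variable names invented; a simplification, not the
literal patch code):

/*
 * Take a standby snapshot in the bgwriter only if some WAL has really
 * been inserted since the last one, and skip the case where the only
 * "progress" is the forced segment switch itself, i.e. the insert
 * position still sits just past the long page header of the segment
 * we switched to.
 */
if (now >= timeout &&
	last_snapshot_lsn < insert_lsn &&
	!(last_switch_was_forced &&
	  insert_lsn % XLOG_SEG_SIZE == SizeOfXLogLongPHD))
	LogStandbySnapshot();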
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] dealing with extension dependencies that aren't quite 'e'

2016-01-17 Thread Abhijit Menon-Sen
At 2016-01-16 12:18:53 -0500, robertmh...@gmail.com wrote:
>
> This seems like one manifestation of the more general problem that we
> don't have any real idea what objects a function definition depends
> on.

Yes.

I'm proposing to address a part of that problem by allowing extension
dependencies to be explicitly declared for functions and objects created
either by a user or dynamically by the extension itself—things that need
the extension to function, but aren't a part of it.

Put that way, ALTER EXTENSION doesn't sound like the way to do it. Maybe
ALTER FUNCTION … DEPENDS ON EXTENSION …? I don't particularly care how
the dependency is recorded, it's the dependency type that's important.

I'll post a patch along those lines in a bit, just so we have something
concrete to discuss; meanwhile, suggestions for another syntax to record
the dependency are welcome.
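For concreteness, the recorded dependency would look something like
this (object names are made up; the exact behavior on DROP EXTENSION is
part of what needs deciding):

-- a user-created trigger function that needs my_extension at runtime
ALTER FUNCTION my_trigger_fn() DEPENDS ON EXTENSION my_extension;
-- the function is now tied to the extension's lifetime in whatever
-- way the dependency type ends up being defined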

-- Abhijit


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Limit and inherited tables

2016-01-17 Thread Ashutosh Bapat
On Fri, Jan 15, 2016 at 9:47 PM, Konstantin Knizhnik <
k.knizh...@postgrespro.ru> wrote:

>
> This example is lacking indexes on the child tables, which is
>> why the plan shown is about as good as you're going to get.
>> The contents of foo1 and foo2 have to be read in entirety in any
>> case, and sorting them separately is not a win compared to doing
>> a single sort.
>>
> It is true, but not in the case of an FDW connected to a remote host.
> In that case sending large volumes of data over the network will be very
> inefficient.
>
> There will be no problem if FDW can provide index scan - in this case
> MergeAppend will fetch only required number of records:
>
> postgres=# explain analyze select * from t order by u limit 1;
>   QUERY PLAN
>
> ---
>  Limit  (cost=300.17..300.23 rows=1 width=8) (actual time=4.588..4.588
> rows=1 loops=1)
>->  Merge Append  (cost=300.17..762.76 rows=7681 width=8) (actual
> time=4.586..4.586 rows=1 loops=1)
>  Sort Key: t.u
>  ->  Index Scan using t_pkey on t  (cost=0.12..8.14 rows=1
> width=8) (actual time=0.003..0.003 rows=0 loops=1)
>  ->  Foreign Scan on t_fdw1  (cost=100.00..193.92 rows=2560
> width=8) (actual time=1.532..1.532 rows=1 loops=1)
>  ->  Foreign Scan on t_fdw2  (cost=100.00..193.92 rows=2560
> width=8) (actual time=1.510..1.510 rows=1 loops=1)
>  ->  Foreign Scan on t_fdw3  (cost=100.00..193.92 rows=2560
> width=8) (actual time=1.535..1.535 rows=1 loops=1)
>
> But if the sort is performed on non-indexed fields, then the current
> behaviour will be inefficient; it could be significantly improved by
> pushing limits down to the remote hosts.
>
>
Pushing the ORDER BY clause to the foreign server is supported as of
commit f18c944b6137329ac4a6b2dce5745c5dc21a8578. Check whether that helps
to get sorted data from the foreign server. LIMIT pushdown is not
supported yet, though.
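A quick way to check whether the pushdown kicks in on a postgres_fdw
table is EXPLAIN VERBOSE; the plan below is abridged and hypothetical,
but the Remote SQL line is the thing to look for:

EXPLAIN VERBOSE SELECT * FROM t_fdw1 ORDER BY u LIMIT 1;
-- Limit
--   ->  Foreign Scan on public.t_fdw1
--         Output: u
--         Remote SQL: SELECT u FROM public.t ORDER BY u ASC NULLS LAST
-- (the LIMIT itself still runs locally)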


>
> --
> Konstantin Knizhnik
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company
>
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>



-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


Re: [HACKERS] PGCon 2016 call for papers

2016-01-17 Thread Ioseph Kim
Hello,
I want to submit a proposal for PGCon 2016.
Currently I have written only the titles of the contents.
The main title is "PostgreSQL in Korea".

The proposal contains:

*  A short history of PostgreSQL in Korea (the status of the Korean
user group and its work)
*  A report from kt (Korea Telecom) on PostgreSQL
*  To-do items in Korea (for the Korean user group)

What should I do for the next step?


2016-01-03 (Sun), 16:58 -0500, Dan Langille:
> In case you've overlooked it, you have about two weeks to submit your 
> proposal.
> 
> PGCon 2016 will be on 17-21 May 2016 at University of Ottawa.
> 
> * 17-18 (Tue-Wed) tutorials
> * 19 & 20 (Thu-Fri) talks - the main part of the conference
> * 17 & 21 (Wed & Sat) The Developer Unconference & the User Unconference 
> (both very popular)
> 
> PLEASE NOTE: PGCon 2016 is in May.
> 
> See http://www.pgcon.org/2016/
> 
> We are now accepting proposals for the main part of the conference (19-20 
> May).
> Proposals can be quite simple. We do not require academic-style papers.
> 
> If you are doing something interesting with PostgreSQL, please submit
> a proposal.  You might be one of the backend hackers or work on a
> PostgreSQL related project and want to share your know-how with
> others. You might be developing an interesting system using PostgreSQL
> as the foundation. Perhaps you migrated from another database to
> PostgreSQL and would like to share details.  These, and other stories
> are welcome. Both users and developers are encouraged to share their
> experiences.
> 
> Here are some ideas to jump-start your proposal process:
> 
> - novel ways in which PostgreSQL is used
> - migration of production systems from another database
> - data warehousing
> - tuning PostgreSQL for different work loads
> - replication and clustering
> - hacking the PostgreSQL code
> - PostgreSQL derivatives and forks
> - applications built around PostgreSQL
> - benchmarking and performance engineering
> - case studies
> - location-aware and mapping software with PostGIS
> - The latest PostgreSQL features and features in development
> - research and teaching with PostgreSQL
> - things the PostgreSQL project could do better
> - integrating PostgreSQL with 3rd-party software
> 
> The schedule is:
> 
> 1 Dec 2015 Proposal acceptance begins
> 19 Jan 2016 Proposal acceptance ends
> 19 Feb 2016 Confirmation of accepted proposals
> 
> NOTE: the call for lightning talks will go out very close to the conference.
> Do not submit lightning talks proposals until then.
> 
> See also 
> 
> Instructions for submitting a proposal to PGCon 2016 are available
> from: 
> 






Re: [HACKERS] Interesting read on SCM upending software and hardware architecture

2016-01-17 Thread David Fetter
On Sun, Jan 17, 2016 at 11:13:33PM -0500, Bruce Momjian wrote:
> On Thu, Jan  7, 2016 at 02:30:06PM -0600, Jim Nasby wrote:
> > https://queue.acm.org/detail.cfm?id=2874238 discusses how modern
> > Storage Class Memory (SCM), such as PCIe SSD and NVDIMMs are
> > completely upending every assumption made about storage. To put
> > this in perspective, you can now see storage latency that is
> > practically on-par with things like lock acquisition[1].
> 
> How is this different from Fusion I/O devices, which have been
> around for years?

Price.

As these things come down in price, it'll start being more and more
reasonable to treat rotating media as exotic.

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] PGCon 2016 call for papers

2016-01-17 Thread Michael Paquier
On Mon, Jan 18, 2016 at 3:29 PM, Ioseph Kim  wrote:
> What can I do for next step?

(pgsql-hackers is not the right place to ask that; it is a mailing
list dedicated to the development and discussion of new features.)

Follow the process described here:
http://www.pgcon.org/2016/papers.php
Then register here:
https://papers.pgcon.org/submission/PGCon2016/
And finally, wait to see whether your talk is accepted.
-- 
Michael




Re: [HACKERS] PGCon 2016 call for papers

2016-01-17 Thread Ioseph Kim
Sorry ^^

2016-01-18 (Mon), 16:10 +0900, Michael Paquier:
> On Mon, Jan 18, 2016 at 3:29 PM, Ioseph Kim  wrote:
> > What can I do for next step?
> 
> (pgsql-hackers is not the right place to ask that; it is a mailing
> list dedicated to the development and discussion of new features.)
> 
> Follow the process described here:
> http://www.pgcon.org/2016/papers.php
> Then register here:
> https://papers.pgcon.org/submission/PGCon2016/
> And finally, wait to see whether your talk is accepted.



