Re: [HACKERS] Turning recovery.conf into GUCs

2014-11-24 Thread Alex Shulgin

Alex Shulgin a...@commandprompt.com writes:

  * Why do you include xlog_internal.h in guc.c and not xlog.h?

 we actually need both, but including xlog_internal.h also includes xlog.h.
 I added xlog.h; if someone thinks that including only xlog_internal.h is
 enough, let me know.

 What's required from xlog_internal.h?

 Looks like this was addressed before me.

Actually, it's there to bring in MAXFNAMELEN which is used in
check_recovery_target_name.  Not sure how it even compiled without that,
but after some branch switching it doesn't anymore. :-p

Maybe we should move these check/assign hooks to xlog.c, but that's
likely going to create header files dependency problem due to use of
GucSource in the hook prototype...
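For reference, the check hook's use of MAXFNAMELEN boils down to a length test: recovery target names share the limit of named restore points. A standalone sketch — the macro value 64 is copied here as a stand-in for including access/xlog_internal.h, and the helper name is illustrative, not a committed function:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Stand-in for the definition normally pulled in from
 * access/xlog_internal.h, where MAXFNAMELEN is 64. */
#define MAXFNAMELEN 64

/*
 * Simplified model of what check_recovery_target_name must enforce:
 * the name has to fit in the buffer used for named restore points.
 * The real GUC check hook would report failure via GUC_check_errdetail()
 * and also takes the GucSource argument that creates the header
 * dependency mentioned above.
 */
static bool
recovery_target_name_is_valid(const char *name)
{
    return strlen(name) < MAXFNAMELEN;
}
```

This is only the validation core; it shows why the hook needs xlog_internal.h at all.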

--
Alex


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Change in HEAP_NEWPAGE logging makes diagnosis harder

2014-11-24 Thread Heikki Linnakangas

On 11/14/2014 10:17 AM, Heikki Linnakangas wrote:

On 10/30/2014 04:12 PM, Andres Freund wrote:

Hi,

I've just once more looked at the WAL stream and was briefly confused
about all the XLOG_FPI records. Since commit 54685338e3,
log_newpage/log_newpage_buffer and XLogSaveBufferForHint() use the same
WAL record. I personally find that a bad idea because they're used in
quite different situations.

Can we use different IDs again?


Yeah, we should do something about that. I'll fix that after the WAL
format changes stuff.


Ok, I added a new XLOG_FPI_FOR_HINT record type for this. It's the same 
as XLOG_FPI, but used in XLogSaveBufferForHint() so that the cases can 
be distinguished.


Looking at the callers of log_newpage, many of them could easily be 
replaced with rmgr-specific record types, if we don't want to club them 
all together as XLOG_FPI records. Another thought is to add a reason 
code to the XLOG_FPI record type. But there aren't that many callers of 
log_newpage ATM, and most of them are very low volume, like the 
buildempty AM methods to generate the init fork for unlogged indexes, so 
it's not very urgent.


- Heikki




[HACKERS] BDR duplicate key value violates unique constraint error

2014-11-24 Thread Jirayut Nimsaeng
Hi,

I'm using PostgreSQL BDR 9.4beta2 to test BDR capability right now.

$ psql --version
psql (PostgreSQL) 9.4beta2

I used the database name bdrdemo for BDR, then I created a table with this DDL:

CREATE TABLE DEPARTMENT(
   ID SERIAL PRIMARY KEY  NOT NULL,
   DEPT   CHAR(50) NOT NULL,
   EMP_ID INT  NOT NULL
);

I can confirm that both sides have the table created, using \d:

bdrdemo=# \d
List of relations
 Schema |       Name        |   Type   |  Owner
--------+-------------------+----------+----------
 public | department        | table    | postgres
 public | department_id_seq | sequence | postgres
(2 rows)

then someone gave me this command to make sure that the serial primary key
will have its own sequence, so I ran it on both nodes

bdrdemo=# ALTER DATABASE bdrdemo SET default_sequenceam=department_id_seq;
ALTER DATABASE

Then I insert data with this command:

bdrdemo=# insert into DEPARTMENT (DEPT, EMP_ID) values
('RANDOM_INSERT','1234');
INSERT 0 1

I can confirm it works on both sides:

bdrdemo=# SELECT * FROM department;
 id |     dept      | emp_id
----+---------------+--------
  1 | RANDOM_INSERT |   1234
(1 row)

But as you can see, the id starts from 1 instead of a high number. I know
this is wrong because I had this working before, and if I insert data from
another node I get this error:

bdrdemo=# insert into DEPARTMENT (DEPT, EMP_ID) values
('RANDOM_INSERT','1234');
ERROR:  duplicate key value violates unique constraint "department_pkey"
DETAIL:  Key (id)=(1) already exists.

Anyone has an idea on this?

Regard,
Jirayut


Re: [HACKERS] WIP: Access method extendability

2014-11-24 Thread Heikki Linnakangas

On 11/10/2014 10:30 PM, Alexander Korotkov wrote:

Not allowing the CREATE ACCESS METHOD command seems problematic to me. How
could it work with pg_upgrade? pg_dump wouldn't dump extra pg_am records.
So, pg_upgrade would break when creating operator classes on the new
cluster. So, I agree with dropping the CREATE ACCESS METHOD command only
if we let pg_dump dump extra pg_am records...


pg_dump would dump the CREATE EXTENSION command, and the extension's 
installation script inserts the row to pg_am. pg_dump doesn't dump 
objects that are part of an extension, so that's how this would work 
with the CREATE ACCESS METHOD command, too.


Backtracking a bit, to summarize the discussion so far:

* It would be nice to have indexes that are not WAL-logged, but are 
automatically re-created after a crash. But it's a completely different 
and orthogonal feature, so there's no need to discuss that further in 
this thread.


* If an extension is buggy, it can crash and corrupt the whole database. 
There isn't really anything we can do about that, and this patch doesn't 
make that any better or worse.


* CREATE ACCESS METHOD command isn't worth it.

Looking at the patch itself:

* It has bitrotted, thanks to my WAL format changes.

* The memcpy/memmove based API seems difficult to use correctly. Looking 
at the bloom filter example, it seems that you need a lot more code to 
implement WAL-logging with this than you would with an rmgr-specific 
redo function. I was hoping this would make it simpler.


I think the API has to be more high-level. Or at least also provide a 
higher-level API, in addition to the low level one. Looking at the 
indexams we have, almost all pages use the standard page layout, with 
PageAddItem etc., plus a metapage, plus some indexam-specific metadata 
in the special area. The proposed API makes it pretty complicated to 
deal with that common case. After calling PageAddItem, you need intimate 
knowledge of what PageAddItem did, to create a correct WAL record for 
it. For PageIndexMultiDelete, it's next to impossible.


I think we'll have to accept that the WAL records with this generic API 
are going to be much bulkier than ones tailored for the AM. We could 
provide a compact representation for common operations like PageAddItem, 
but for any more complicated operations, just WAL-log a full page image, 
as it's too fiddly to track the changes at a finer level. For any 
serious performance critical stuff, you'll just have to write an 
old-fashioned rmgr.
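To make the trade-off concrete: the compact representation could be as simple as recording the smallest byte range of the page that changed, falling back to a full image when that range is large. A toy sketch of the delta computation — illustrative only, not the proposed API; PAGESIZE stands in for BLCKSZ. (PostgreSQL later grew a generic WAL facility along roughly these lines.)

```c
#include <stddef.h>

#define PAGESIZE 8192           /* stand-in for BLCKSZ */

/*
 * Find the smallest [start, end) byte range covering every difference
 * between the old and new page images.  Returns the range length, or 0
 * if the pages are identical.  A generic WAL record could then carry
 * just this range instead of AM-specific data or a full page image.
 */
static size_t
page_delta_range(const char *old_page, const char *new_page,
                 size_t *start, size_t *end)
{
    size_t lo = 0;
    size_t hi = PAGESIZE;

    while (lo < PAGESIZE && old_page[lo] == new_page[lo])
        lo++;
    if (lo == PAGESIZE)
        return 0;               /* pages identical, nothing to log */
    while (hi > lo && old_page[hi - 1] == new_page[hi - 1])
        hi--;
    *start = lo;
    *end = hi;
    return hi - lo;
}
```

A record built this way is bulkier than a tailored rmgr record but requires no AM-specific redo code.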


(marking this as "Returned with Feedback" in the commitfest)

- Heikki





Re: [HACKERS] Sequence Access Method WIP

2014-11-24 Thread Heikki Linnakangas

On 11/08/2014 04:21 PM, Simon Riggs wrote:

On 5 November 2014 17:32, Heikki Linnakangas hlinnakan...@vmware.com wrote:


Why does sequence_alloc need the current value? If it's a remote seqam,
the current value is kept in the remote server, and the last value that was
given to this PostgreSQL server is irrelevant.

That irks me with this API. The method for acquiring a new value isn't fully
abstracted behind the AM interface, as sequence.c still needs to track it
itself. That's useful for the local AM, of course, and maybe some others,
but for others it's totally useless.


Please bear in mind what seqam is for...

At present its only use is to provide Global Sequences. There are a few
ways of doing that, but they all look sorta similar.


Ok.


The only other use we thought of was shot down, so producing the
perfect API isn't likely to help anyone. It's really not worth the
effort to produce a better API.


Your argument seems to be that because this API is only used for 
creating global sequences, it's not worth it to make it better for that 
purpose. That doesn't make much sense, so that's probably not what you 
meant. I'm confused.


To be clear: I don't think this API is very good for its stated purpose, 
for implementing global sequences for use in a cluster. For the reasons 
I've mentioned before.  I'd like to see two changes to this proposal:


1. Make the AM implementation solely responsible for remembering the 
last value. (if it's a global or remote sequence, the current value 
might not be stored in the local server at all)


2. Instead of the single amdata field, make it possible for the 
implementation to define any number of fields with any datatype in the 
tuple. That would make debugging, monitoring etc. easier.



BDR doesn't require Global Sequences, nor are Global Sequences
restricted in their use to just BDR - lots of cluster configurations
would want something like this.


Sure. (I don't see how that's relevant to this discussion, however).

(marking this as "Returned with Feedback" in the commitfest)

- Heikki





Re: [HACKERS] BDR duplicate key value violates unique constraint error

2014-11-24 Thread Thom Brown
On 24 November 2014 at 09:55, Jirayut Nimsaeng jira...@proteus-tech.com
wrote:

 [...]
 Anyone has an idea on this?


You'll need to use global sequences with BDR:
https://wiki.postgresql.org/wiki/BDR_Global_Sequences

Thom
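As background for readers who hit the same conflict: the problem is both nodes handing out ids 1, 2, 3, ... from their local sequences. The scheme that BDR's global sequences automate can be modeled as giving each node a disjoint ID space. This step/offset model is a simplification for illustration only; BDR actually allocates chunks of values between nodes:

```c
#include <stdint.h>

/*
 * Simplified model of per-node ID partitioning: node i (1-based) of n
 * hands out i, i+n, i+2n, ...  Two nodes can then insert concurrently
 * without ever generating the same primary key.
 */
typedef struct
{
    uint64_t next;              /* next value this node will hand out */
    uint64_t step;              /* total number of nodes */
} node_seq;

static node_seq
node_seq_init(uint64_t node_id, uint64_t n_nodes)
{
    node_seq s = { node_id, n_nodes };
    return s;
}

static uint64_t
node_seq_next(node_seq *s)
{
    uint64_t v = s->next;
    s->next += s->step;
    return v;
}
```

With two nodes, node 1 produces 1, 3, 5, ... and node 2 produces 2, 4, 6, ... — the sequences never overlap.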


Re: [HACKERS] BDR duplicate key value violates unique constraint error

2014-11-24 Thread Jirayut Nimsaeng
NVM. I asked people on IRC and it turns out that after I run the ALTER
DATABASE bdrdemo SET default_sequenceam=department_id_seq; command I have
to exit the psql session first, and then it works again :)

On Mon, Nov 24, 2014 at 6:29 PM, Thom Brown t...@linux.com wrote:

 [...]

 You'll need to use global sequences with BDR:
 https://wiki.postgresql.org/wiki/BDR_Global_Sequences

 Thom



Re: [HACKERS] Stating the significance of Lehman Yao in the nbtree README

2014-11-24 Thread Heikki Linnakangas

On 09/27/2014 09:36 AM, Peter Geoghegan wrote:

On Fri, Sep 26, 2014 at 11:34 PM, Amit Kapila amit.kapil...@gmail.com wrote:

I have observed that this patch is in 'Needs Review' state for the
next CF.  Do you expect any further review from my side?  I think
we can use the text recommended by Heikki, and after that if you
feel something more is still required, then you can update the same
and send an updated patch.  I believe it should be in 'Waiting On Author'
state; please do let me know if you feel otherwise.


I don't think so. Thanks for the review!


Ok, applied those extra paragraphs now, and marked as committed in the 
commitfest.


- Heikki





Re: [HACKERS] [v9.5] Custom Plan API

2014-11-24 Thread Kouhei Kaigai
 Kouhei Kaigai kai...@ak.jp.nec.com writes:
  Let me explain the current idea of mine.
  CustomScan node will have a field that holds varnode mapping
  information that is constructed by the custom-scan provider on
  create_customscan_plan, if they want. It is probably a list of varnodes.
  If exists, setrefs.c changes its behavior; that updates varno/varattno
  of varnode according to this mapping, as if set_join_references() does
  based on indexed_tlist.
  To reference ecxt_scantuple, INDEX_VAR will be the best choice for varno
  of these varnodes, and the index into the above varnode mapping list will
  be varattno. It can be utilized to make EXPLAIN output, instead of the
  GetSpecialCustomVar hook.
 
  So, steps to go may be:
  (1) Add custom_private, custom_exprs, ... instead of self defined data
  type based on CustomXXX.
  (2) Get rid of the SetCustomScanRef and GetSpecialCustomVar hooks for the
  current custom-scan support.
  (3) Integration of above varnode mapping feature within upcoming join
  replacement by custom-scan support.
 
 Well ... I still do not find this interesting, because I don't believe that
 CustomScan is a solution to anything interesting.  It's difficult enough
 to solve problems like expensive-function pushdown within the core code;
 why would we tie one hand behind our backs by insisting that they should
 be solved by extensions?  And as I mentioned before, we do need solutions
 to these problems in the core, regardless of CustomScan.
 
I'd like to split the "anything interesting" into two portions.
As you pointed out, the feature to push down complicated expressions
may need a fairly large effort (for the remaining two commitfests at
least); however, what the feature to replace a join by a custom-scan
requires is similar to the job of set_join_references(), because it never
involves translation between varnodes and general expressions.

Also, from my standpoint, a simple join replacement by custom-scan has
higher priority; join acceleration in v9.5 makes sense even if the full
functionality of pushing down general expressions is not supported yet.

 I think that a useful way to go at this might be to think first about how
 to make use of expensive functions that have been cached in indexes, and
 then see how the solution to that might translate to pushing down expensive
 functions into FDWs and CustomScans.  If you start with the CustomScan
 aspect of it then you immediately find yourself trying to design APIs to
 divide up the solution, which is premature when you don't even know what
 the solution is.
 
Yep, it also seems to me that the remaining two commitfests are a bit of
a tight schedule to reach consensus on the overall design and to implement
it.  I'd like to focus on the simpler portion first.

 The rough idea I'd had about this is that while canvassing a relation's
 indexes (in get_relation_info), we could create a list of precomputed
 expressions that are available from indexes, then run through the query
 tree and replace any matching subexpressions with some Var-like nodes (or
 maybe better PlaceHolderVar-like nodes) that indicate that we can get this
 expression for free if we read the right index.
 If we do read the right index, such an expression reduces to a Var in the
 finished plan tree; if not, it reverts to the original expression.
 (Some thought would need to be given to the semantics when the index's table
 is underneath an outer join --- that may just mean that we can't necessarily
 replace every textually-matching subexpression, only those that are not
 above an outer join.) One question mark here is how to do the replace
 any matching subexpressions bit without O(lots) processing cost in big
 queries.  But that's probably just a SMOP.  The bigger issue I fear is that
 the planner is not currently structured to think that evaluation cost of
 expressions in the SELECT list has anything to do with which Path it should
 pick.  That is tied to the handwaving I've been doing for awhile now about
 converting all the upper-level planning logic into
 generate-and-compare-Paths style; we certainly cannot ignore tlist eval
 costs while making those decisions.  So at least for those upper-level Paths,
 we'd have to have a notion of what tlist we expect that plan level to compute,
 and charge appropriate evaluation costs.

Let me investigate the planner code more prior to comment on...

 So there's a lot of work there and I don't find that CustomScan looks like
 a solution to any of it.  CustomScan and FDWs could benefit from this work,
 in that we'd now have a way to deal with the concept that expensive functions
 (and aggregates, I hope) might be computed at the bottom scan level.  But
 it's folly to suppose that we can make it work just by hacking some
 arms-length extension code without any fundamental planner changes.
 
Indeed, I don't think it is a good idea to start from this harder portion.
Let's focus on just varno/varattno remapping to replace a join relation
with a custom-scan, as an immediate target.

Thanks,
--

Re: [HACKERS] add modulo (%) operator to pgbench

2014-11-24 Thread Heikki Linnakangas

On 09/25/2014 05:10 AM, Robert Haas wrote:

On Wed, Sep 24, 2014 at 2:34 PM, Fabien COELHO coe...@cri.ensmp.fr wrote:

Sigh.

How to transform a trivial 10-line patch into a probably 500+ line project
involving flex & bison & some non-trivial data structures, and which may
get rejected on any ground...

Maybe I'll set that as a student project.


I think you're making a mountain out of a molehill.  I implemented
this today in about three hours.  The patch is attached.  It needs
more testing, documentation, and possibly some makefile adjustments,
but it seems to basically work.


Looks good to me. The new modulo operator needs documentation, and it 
could use a pgindent run, but other than that I think this is ready for 
commit.
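One semantic detail worth pinning down during review: what `%` should return for negative operands. C's `%` truncates toward zero, whereas a floor-style modulo always takes the divisor's sign; which one a benchmark script wants depends on how it maps values onto keys. A sketch of the difference — illustrative, not necessarily what the patch implements:

```c
/* C remainder: result takes the sign of the dividend. */
static long
c_mod(long a, long b)
{
    return a % b;
}

/*
 * Floor-style modulo: result takes the sign of the divisor, so for a
 * positive divisor the result is always in [0, b), which is convenient
 * for mapping negative values onto table keys.
 */
static long
floor_mod(long a, long b)
{
    long r = a % b;

    if (r != 0 && ((r < 0) != (b < 0)))
        r += b;
    return r;
}
```

For example, -7 % 3 is -1 under C semantics but 2 under floor semantics; the documentation should say which one pgbench provides.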


It would be nice to go even further, and replace process_file, 
process_builtin, and process_commands altogether with a Bison parser. 
But this is definitely a step in the right direction.


- Heikki




Re: [HACKERS] Turning recovery.conf into GUCs

2014-11-24 Thread Alex Shulgin

Josh Berkus j...@agliodbs.com writes:
 
 Well, the latest version of this patch fails to start when it sees
 'recovery.conf' in PGDATA:
 
   FATAL:  recovery.conf is not supported anymore as a recovery method
   DETAIL:  Refer to appropriate documentation about migration methods
 
 I've missed all the discussion behind this decision and after reading
 the ALTER SYSTEM/conf.d thread I'm even more confused so I'd like
 someone more knowledgeable to speak up on the status of this.

 The argument was that people wanted to be able to have an empty
 recovery.conf file as a "we're in standby" trigger so that they could
 preserve backwards compatibility with external tools.  I don't agree
 with this argument, but several people championed it.

It doesn't look like we can provide for 100% backwards compatibility
with existing tools, here's why:

a) The only sensible way to use recovery.conf as GUC source is via
   include/include_dir from postgresql.conf.

b) The server used to rename $PGDATA/recovery.conf to
   $PGDATA/recovery.done so when it is re-started it doesn't
   accidentally start in recovery mode.

We can't keep doing (b) because that will make postgres fail to start on
a missing include.  Well, it won't error out if we place the file under
include_dir, as only files with the .conf suffix are included, but
then again the server wouldn't know which file to rename unless we tell
it or hard-code the filename.  Either way this is not 100% backwards
compatible.

Now the question is, if we are going to break compatibility anyway:
should we try to minimize the difference, or would that do the user a
disservice by providing a false sense of compatibility?

For one, if we're breaking things, the trigger_file GUC should be
renamed to end_recovery_trigger_file or something like that, IMO.
That's if we decide to keep it at all.

 The docs use the term "continuous recovery".
 
 Either way, from the code it is clear that we only stay in recovery if
 standby_mode is directly turned on.  This makes the whole check for a
 specially named file unnecessary, IMO: we should just check the value of
 standby_mode (which is off by default).

 So, what happens when someone does "pg_ctl promote"?  Somehow
 standby_mode needs to get set to off.  Maybe we write "standby_mode =
 off" to postgresql.auto.conf?

Well, standby_mode is only consulted at startup after crash recovery.
But suddenly, a tool that *writes* recovery.conf needs to also
consult/edit postgresql.auto.conf...  Maybe that's inevitable, but still
a point to consider.

 By the way, is there any use in setting standby_mode=on and any of the
 recovery_target* GUCs at the same time?

 See my thread on this topic from last week.  Short answer: No.

 I think it can only play together if you set the target farther than the
 latest point you've got in the archive locally.  So that's sort of
 Point-in-Future-Recovery.  Does that make any sense at all?

 Again, see the thread.  This doesn't work in a useful way, so there's no
 reason to write code to enable it.

Makes sense.  Should we incorporate the actual tech and doc fix in this
patch?

  /* File path names (all relative to $PGDATA) */
 -#define RECOVERY_COMMAND_FILE "recovery.conf"
 -#define RECOVERY_COMMAND_DONE "recovery.done"
 +#define RECOVERY_ENABLE_FILE  "standby.enabled"

 Imo the file and variable names should stay coherent.
 
 Yes, once we settle on the name (and if we really need that extra
 trigger file.)

 Personally, if we have three methods of promotion:

 1) pg_ctl promote
 2) edit postgresql.conf and reload
 3) ALTER SYSTEM SET and reload

 ... I don't honestly think we need a 4th method for promotion.

Me neither.  It is tempting to make everything spin around GUCs without
the need for any external trigger files.

 The main reason to want a "we're in recovery" file is for PITR rather
 than for replication, where it has a number of advantages as a method,
 the main one being that recovery.conf is unlikely to be overwritten by
 the contents of the backup.

 HOWEVER, there's a clear out for this with conf.d.  If we enable conf.d
 by default, then we can simply put recovery settings in conf.d, and
 since that specific file (which can have whatever name the user chooses)
 will not be part of backups, it would have the same advantage as
 recovery.conf, without the drawbacks.

 Discuss?

Well, I don't readily see how conf.d is special with regard to base
backups; wouldn't you still need to exclude it explicitly?

--
Alex




Re: [HACKERS] WIP: Access method extendability

2014-11-24 Thread Alexander Korotkov
Hi, Heikki!

Thank you for summarizing. In general, I agree with your notes with
some exceptions.

On Mon, Nov 24, 2014 at 1:52 PM, Heikki Linnakangas hlinnakan...@vmware.com
 wrote:

 On 11/10/2014 10:30 PM, Alexander Korotkov wrote:

 Not allowing the CREATE ACCESS METHOD command seems problematic to me. How
 could it work with pg_upgrade? pg_dump wouldn't dump extra pg_am records.
 So, pg_upgrade would break when creating operator classes on the new
 cluster. So, I agree with dropping the CREATE ACCESS METHOD command only
 if we let pg_dump dump extra pg_am records...


 pg_dump would dump the CREATE EXTENSION command, and the extension's
 installation script inserts the row to pg_am. pg_dump doesn't dump objects
 that are part of an extension, so that's how this would work with the
 CREATE ACCESS METHOD command, too.


In binary upgrade mode pg_dump has to guarantee that all database objects
will have the same OIDs. That's why in binary upgrade mode pg_dump dumps
the extension contents instead of just a CREATE EXTENSION command.


 Backtracking a bit, to summarize the discussion so far:

 * It would be nice to have indexes that are not WAL-logged, but are
 automatically re-created after a crash. But it's a completely different and
 orthogonal feature, so there's no need to discuss that further in this
 thread.

 * If an extension is buggy, it can crash and corrupt the whole database.
 There isn't really anything we can do about that, and this patch doesn't
 make that any better or worse.

 * CREATE ACCESS METHOD command isn't worth it.


Taking into account my previous note: how can custom access method
extensions survive pg_upgrade without a CREATE ACCESS METHOD command?

--
With best regards,
Alexander Korotkov.


[HACKERS] Replication connection URI?

2014-11-24 Thread Alex Shulgin

Hackers,

It appears that a replication connection doesn't support connection URIs,
only the traditional conninfo string.

src/backend/replication/libpqwalreceiver/libpqwalreceiver.c:99: in 
libpqrcv_connect():

    snprintf(conninfo_repl, sizeof(conninfo_repl),
             "%s dbname=replication replication=true fallback_application_name=walreceiver",
             conninfo);

A patch to fix this welcome?

--
Alex

PS: I wrote the original URI parser used in libpq.
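For context, libpq itself decides between conninfo and URI syntax by looking at the scheme designator, so the walreceiver side could branch the same way. A sketch of the detection only; the actual fix should reuse libpq's parsing rather than duplicating it:

```c
#include <stdbool.h>
#include <string.h>

/*
 * libpq (fe-connect.c) recognizes exactly these two URI designators.
 * Anything else is treated as a keyword=value conninfo string, which
 * is the only form the current snprintf() concatenation can extend.
 */
static bool
conninfo_is_uri(const char *s)
{
    return strncmp(s, "postgresql://", strlen("postgresql://")) == 0 ||
           strncmp(s, "postgres://", strlen("postgres://")) == 0;
}
```

Appending " dbname=replication replication=true ..." to a string for which this returns true produces an invalid connection string, which is the bug being reported.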




Re: [HACKERS] Replication connection URI?

2014-11-24 Thread Heikki Linnakangas

On 11/24/2014 02:41 PM, Alex Shulgin wrote:


Hackers,

It appears that a replication connection doesn't support connection URIs,
only the traditional conninfo string.

src/backend/replication/libpqwalreceiver/libpqwalreceiver.c:99: in 
libpqrcv_connect():

     snprintf(conninfo_repl, sizeof(conninfo_repl),
              "%s dbname=replication replication=true fallback_application_name=walreceiver",
              conninfo);

A patch to fix this welcome?


Yeah, seems like an oversight. Hopefully you can fix that without 
teaching libpqwalreceiver what connection URIs look like..


- Heikki





[HACKERS] no test programs in contrib

2014-11-24 Thread Alvaro Herrera
What's the general opinion on having test programs somewhere other than
contrib/ ?

We currently have a number of subdirectories for test-only programs:

test_parser (a toy text search parser, added in 2007)
dummy_seclabel (for SECURITY LABEL regression testing, added Sept 2010)
worker_spi (for bgworkers, added Dec 2012)
test_shm_mq (test program for shared memory queues, added Jan 2014)
test_decoding (test program for logical decoding, added March 2014)

I'm now contemplating the addition of a new one in the commit-timestamps
patch, and I'm starting to feel that these are all misplaced.  I think
we have been dumping them into contrib not because they really belong
there, but because of the lack of a better place.  As opposed to the
rest of the stuff in contrib/, they don't serve any useful purpose by
themselves; they are just demonstrating some coding techniques, or
testing that some framework works as intended.  It seems impolite to
continue to pollute contrib with these; and my crystal ball says they
will continue to grow much more rapidly than normal, useful contrib
modules.

What would you say if we were to move them to src/test/?  I could also
see putting them in a brand new top-level directory, say testing/ or
testprg/.

Now, I know there is some resistance to the idea of moving source code
around.  If this proposal is objected to, would people object to the idea
of putting the commit timestamp test module in src/test/commit_ts
instead of the patch author's proposal, contrib/test_committs?

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] pg_class(relpersistence) of hash index

2014-11-24 Thread Antonin Houska
While checking how the BM_PERMANENT flag is set (in the buffer header), I
noticed that a hash index has it set too. Shouldn't pg_class(relpersistence)
be 'u' in this case? Currently it's set to 'p':

postgres=# CREATE TABLE a(i int);
CREATE TABLE
postgres=# CREATE INDEX ON a USING HASH (i);
WARNING:  hash indexes are not WAL-logged and their use is discouraged
CREATE INDEX
postgres=# \d a
        Table "public.a"
 Column |  Type   | Modifiers 
--------+---------+-----------
 i      | integer | 
Indexes:
    "a_i_idx" hash (i)

postgres=# select relpersistence from pg_class where relname='a_i_idx';
 relpersistence 
----------------
 p
(1 row)
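For reference, the three relpersistence codes from catalog/pg_class.h, and the expectation the report hinges on — permanent normally implies WAL-logged, which is exactly what a hash index violates. A minimal model for illustration only:

```c
#include <stdbool.h>

/*
 * pg_class.relpersistence codes:
 *   'p' permanent, 'u' unlogged, 't' temporary.
 * Permanent relations are normally WAL-logged and crash-safe; the
 * anomaly reported above is a hash index marked 'p' that writes no WAL.
 */
static bool
relpersistence_expected_wal_logged(char relpersistence)
{
    return relpersistence == 'p';
}
```

Whether the right answer is to mark hash indexes 'u', or to leave relpersistence alone on the grounds that it describes intended persistence rather than actual WAL behavior, is the question the post raises.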

-- 
Antonin Houska
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: http://www.postgresql-support.de, http://www.cybertec.at




Re: [HACKERS] no test programs in contrib

2014-11-24 Thread Petr Jelinek

On 24/11/14 14:49, Alvaro Herrera wrote:

I'm now contemplating the addition of a new one in the commit-timestamps
patch, and I'm starting to feel that these are all misplaced.  I think
we have been dumping them into contrib not because they really belong
there, but because of the lack of a better place.  As opposed to the
rest of the stuff in contrib/, they don't serve any useful purpose by
themselves; they are just demonstrating some coding techniques, or
testing that some framework works as intended.  It seems impolite to
continue to pollute contrib with these; and my crystal ball says they
will continue to grow much more rapidly than normal, useful contrib
modules.



Completely agree.


What would you say if we were to move them to src/test/?  I could also
see putting them in a brand new top-level directory, say testing/ or
testprg/.

Now, I know there is some resistance to the idea of moving source code
around.  If this proposal is objected to, would people object to the idea
of putting the commit timestamp test module in src/test/commit_ts
instead of the patch author's proposal, contrib/test_committs?



I'd go for src/test, but I think a common subdirectory there is needed 
(src/test/something/commit_ts). Not sure what the "something" could 
be; maybe something like "standalone", as those tests get their own pg 
instance?


--
 Petr Jelinek  http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] [v9.5] Custom Plan API

2014-11-24 Thread Robert Haas
On Mon, Nov 24, 2014 at 6:57 AM, Kouhei Kaigai kai...@ak.jp.nec.com wrote:
 Indeed, I don't think it is a good idea to start from this harder portion.
 Let's focus on just varno/varattno remapping to replace join relation by
 custom-scan, as an immediate target.

We still need something like this for FDWs, as well.  The potential
gains there are enormous.  Anything we do had better fit in nicely
with that, rather than looking like a separate hack.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] INSERT ... ON CONFLICT {UPDATE | IGNORE}

2014-11-24 Thread Robert Haas
On Fri, Nov 21, 2014 at 3:38 PM, Peter Geoghegan p...@heroku.com wrote:
 What do other people think? Should RETURNING project updated tuples as
 well as inserted tuples, as described here?

I think it should.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Turning recovery.conf into GUCs

2014-11-24 Thread Stephen Frost
* Amit Kapila (amit.kapil...@gmail.com) wrote:
 What exactly you mean by 'disable postgresql.auto.conf',  do you
 mean user runs Alter System to remove that entry or manually disable
 some particular entry?

Last I paid attention to this, there was a clear way to disable the
inclusion of postgresql.auto.conf in the postgresql.conf.  If that's
gone, then there is a serious problem.  Administrators who manage their
postgresql.conf (eg: through a CM system like puppet or chef..) must
have a way to prevent other changes.

Thanks,

Stephen




Re: [HACKERS] no test programs in contrib

2014-11-24 Thread Andres Freund
Hi,

On 2014-11-24 10:49:45 -0300, Alvaro Herrera wrote:
 I'm now contemplating the addition of a new one in the commit-timestamps
 patch, and I'm starting to feel that these are all misplaced.  I think
 we have been dumping them to contrib not because they really belong
 there, but because of the lack of a better place.

Agreed.

 As opposed to the
 rest of the stuff in contrib/, they don't serve any useful purpose by
 themselves; they are just demonstrating some coding techniques, or
 testing that some framework works as intended.

I actually think that test_decoding is somewhat useful in other cases as
well, so it might be prudent to leave it there.

 What would you say if we were to move them to src/test/?  I could also
 see putting them in a brand new top-level directory, say testing/ or
 testprg/.

src/test/ is good, but I think there should be another subdirectory
inside. testcases/?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] test_shm_mq failing on anole (was: Sending out a request for more buildfarm animals?)

2014-11-24 Thread Christoph Berg
Hi,

I'm still seeing trouble with test_shm_mq on mipsel (9.4 rc1):

https://buildd.debian.org/status/fetch.php?pkg=postgresql-9.4&arch=mipsel&ver=9.4~rc1-1&stamp=1416547779

mips had the problem as well in the past (9.4 beta3):

https://buildd.debian.org/status/fetch.php?pkg=postgresql-9.4&arch=mips&ver=9.4~beta3-3&stamp=1413607370
https://buildd.debian.org/status/fetch.php?pkg=postgresql-9.4&arch=mips&ver=9.4~beta3-1&stamp=1412893135

The mips beta3 failures eventually went away when the build was done
on a different machine. This was the first time the mipsel build was
done on this build machine, so it seems the problem might well be
caused by some subarchitecture difference.

mipsel:
bad: Broadcom BCM91250A aka SWARM (mayer.debian.org)
good: Lemote 3A ITX-A1101 (Quad Core Loongson 3A) (mipsel-manda-01.debian.org)

mips:
bad: Broadcom BCM91250A aka SWARM (ball.debian.org)
good: EdgeRouter Pro (mips-aql-02.debian.org)
good: 16 x Cavium Octeon V0.3 (lucatelli.debian.org)

https://buildd.debian.org/status/logs.php?pkg=postgresql-9.4&arch=mipsel
https://buildd.debian.org/status/logs.php?pkg=postgresql-9.4&arch=mips
(Not all failures listed there are due to shm_mq, just the newer ones.)

At the moment all I have is the above build logs with the fact that
the build simply hangs there, i.e. I haven't seen the problem in a
context yet where I could have pulled a backtrace or the like.

Anyone got an idea?

Christoph
-- 
c...@df7cb.de | http://www.df7cb.de/




Re: [HACKERS] Turning recovery.conf into GUCs

2014-11-24 Thread Alvaro Herrera
Stephen Frost wrote:
 * Amit Kapila (amit.kapil...@gmail.com) wrote:
  What exactly you mean by 'disable postgresql.auto.conf',  do you
  mean user runs Alter System to remove that entry or manually disable
  some particular entry?
 
 Last I paid attention to this, there was a clear way to disable the
 inclusion of postgresql.auto.conf in the postgresql.conf.  If that's
 gone, then there is a serious problem.  Administrators who manage their
 postgresql.conf (eg: through a CM system like puppet or chef..) must
 have a way to prevent other changes.

Sigh, here we go again.  I don't think you can disable postgresql.auto.conf
in the current code.  As I recall, the argument was that it's harder to
diagnose problems if postgresql.auto.conf takes effect in some system
but not others.

I think if you want puppet or chef etc you'd add postgresql.auto.conf as
a config file in those systems, so that ALTER SYSTEM is reflected there.  

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Turning recovery.conf into GUCs

2014-11-24 Thread Stephen Frost
* Alvaro Herrera (alvhe...@2ndquadrant.com) wrote:
 Stephen Frost wrote:
  * Amit Kapila (amit.kapil...@gmail.com) wrote:
   What exactly you mean by 'disable postgresql.auto.conf',  do you
   mean user runs Alter System to remove that entry or manually disable
   some particular entry?
  
  Last I paid attention to this, there was a clear way to disable the
  inclusion of postgresql.auto.conf in the postgresql.conf.  If that's
  gone, then there is a serious problem.  Administrators who manage their
  postgresql.conf (eg: through a CM system like puppet or chef..) must
  have a way to prevent other changes.
 
 Sigh, here we go again.  I don't think you can disable postgresql.auto.conf
 in the current code.  As I recall, the argument was that it's harder to
 diagnose problems if postgresql.auto.conf takes effect in some system
 but not others.

I don't buy this at all.  What's going to be terribly confusing is to
have config changes start happening for users who are managing their
configs through a CM (which most should be..).  ALTER SYSTEM is going to
cause more problems than it solves.

 I think if you want puppet or chef etc you'd add postgresql.auto.conf as
 a config file in those systems, so that ALTER SYSTEM is reflected there.  

That's really a horrible, horrible answer.  The DBA makes some change
and then reloads remotely, only to have puppet or chef come along and
change it back later?  Talk about a recipe for disaster.

The only reason I stopped worrying about the foolishness of ALTER SYSTEM
was because it could be disabled.  I'm very disappointed to hear that
someone saw fit to remove that.  I'll also predict that it'll be going
back in for 9.5.

Thanks,

Stephen




Re: [HACKERS] Turning recovery.conf into GUCs

2014-11-24 Thread Alvaro Herrera
Stephen Frost wrote:
 * Alvaro Herrera (alvhe...@2ndquadrant.com) wrote:

   Last I paid attention to this, there was a clear way to disable the
   inclusion of postgresql.auto.conf in the postgresql.conf.  If that's
   gone, then there is a serious problem.  Administrators who manage their
   postgresql.conf (eg: through a CM system like puppet or chef..) must
   have a way to prevent other changes.
  
  Sigh, here we go again.  I don't think you can disable postgresql.auto.conf
  in the current code.  As I recall, the argument was that it's harder to
  diagnose problems if postgresql.auto.conf takes effect in some system
  but not others.
 
 I don't buy this at all.  What's going to be terribly confusing is to
 have config changes start happening for users who are managing their
 configs through a CM (which most should be..).  ALTER SYSTEM is going to
 cause more problems than it solves.

I guess if you have two DBAs who don't talk to each other, and one
changes things through puppet and another through ALTER SYSTEM, it's
going to be confusing, yes.

  I think if you want puppet or chef etc you'd add postgresql.auto.conf as
  a config file in those systems, so that ALTER SYSTEM is reflected there.  
 
 That's really a horrible, horrible answer.  The DBA makes some change
 and then reloads remotely, only to have puppet or chef come along and
 change it back later?  Talk about a recipe for disaster.

Are you saying puppet/chef don't have the concept that a file is to be
backed up and changes on it notified, but that direct changes to it
should not be allowed?  That sounds, um, limited.

 The only reason I stopped worrying about the foolishness of ALTER SYSTEM
 was because it could be disabled.  I'm very disappointed to hear that
 someone saw fit to remove that.  I'll also predict that it'll be going
 back in for 9.5.

*shrug*

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Turning recovery.conf into GUCs

2014-11-24 Thread Stephen Frost
* Alvaro Herrera (alvhe...@2ndquadrant.com) wrote:
 Stephen Frost wrote:
  * Alvaro Herrera (alvhe...@2ndquadrant.com) wrote:
 
Last I paid attention to this, there was a clear way to disable the
inclusion of postgresql.auto.conf in the postgresql.conf.  If that's
gone, then there is a serious problem.  Administrators who manage their
postgresql.conf (eg: through a CM system like puppet or chef..) must
have a way to prevent other changes.
   
   Sigh, here we go again.  I don't think you can disable 
   postgresql.auto.conf
   in the current code.  As I recall, the argument was that it's harder to
   diagnose problems if postgresql.auto.conf takes effect in some system
   but not others.
  
  I don't buy this at all.  What's going to be terribly confusing is to
  have config changes start happening for users who are managing their
  configs through a CM (which most should be..).  ALTER SYSTEM is going to
  cause more problems than it solves.
 
 I guess if you have two DBAs who don't talk to each other, and one
 changes things through puppet and another through ALTER SYSTEM, it's
 going to be confusing, yes.

It's not DBAs, that's the point..  You have sysadmins who manage the
system configs (things like postgresql.conf) and you have DBAs whose
only access to the system is through 5432.  This separation of
responsibilities is very common, in my experience at least, and
conflating the two through ALTER SYSTEM is going to cause nothing but
problems.  There had been a way to keep that separation by simply
disabling the postgresql.auto.conf, but that's now been removed.

   I think if you want puppet or chef etc you'd add postgresql.auto.conf as
   a config file in those systems, so that ALTER SYSTEM is reflected there.  
  
  That's really a horrible, horrible answer.  The DBA makes some change
  and then reloads remotely, only to have puppet or chef come along and
  change it back later?  Talk about a recipe for disaster.
 
 Are you saying puppet/chef don't have the concept that a file is to be
 backed up and changes on it notified, but that direct changes to it
 should not be allowed?  That sounds, um, limited.

Of course they can but that's completely missing the point.  The
postgresql.conf file is *already* managed in puppet or chef in a *lot*
of places.  We're removing the ability to do that and reverting to a
situation where auditing has to be done instead.  That's a regression,
not a step forward.

  The only reason I stopped worrying about the foolishness of ALTER SYSTEM
  was because it could be disabled.  I'm very disappointed to hear that
  someone saw fit to remove that.  I'll also predict that it'll be going
  back in for 9.5.
 
 *shrug*

... and I'll continue to argue against anything which requires
postgresql.auto.conf to be hacked to work (as was proposed on this
thread..).

Thanks,

Stephen




Re: [HACKERS] Turning recovery.conf into GUCs

2014-11-24 Thread Andres Freund
On 2014-11-24 10:13:58 -0500, Stephen Frost wrote:
 * Alvaro Herrera (alvhe...@2ndquadrant.com) wrote:
  Stephen Frost wrote:
   * Amit Kapila (amit.kapil...@gmail.com) wrote:
What exactly you mean by 'disable postgresql.auto.conf',  do you
mean user runs Alter System to remove that entry or manually disable
some particular entry?
   
   Last I paid attention to this, there was a clear way to disable the
   inclusion of postgresql.auto.conf in the postgresql.conf.  If that's
   gone, then there is a serious problem.  Administrators who manage their
   postgresql.conf (eg: through a CM system like puppet or chef..) must
   have a way to prevent other changes.
  
  Sigh, here we go again.  I don't think you can disable postgresql.auto.conf
  in the current code.  As I recall, the argument was that it's harder to
  diagnose problems if postgresql.auto.conf takes effect in some system
  but not others.
 
 I don't buy this at all.  What's going to be terribly confusing is to
 have config changes start happening for users who are managing their
 configs through a CM (which most should be..).  ALTER SYSTEM is going to
 cause more problems than it solves.

I fail to see how this really has anything to do with this
topic. Obviously ALTER SYSTEM isn't an applicable solution for this as HS
might not be in use or HS might not have reached consistency at that
point.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Turning recovery.conf into GUCs

2014-11-24 Thread Tom Lane
Alex Shulgin a...@commandprompt.com writes:
 Maybe we should move these check/assign hooks to xlog.c, but that's
 likely going to create header files dependency problem due to use of
 GucSource in the hook prototype...

As far as that goes, there is already plenty of precedent for declaring
assorted check/assign hook functions in guc.h rather than including guc.h
into the headers where they would otherwise belong.
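For illustration, a minimal standalone sketch of the hook shape in question.  The
real MAXFNAMELEN and GucSource come from xlog_internal.h and guc.h respectively,
so the definitions below are stand-ins, not the tree's actual code:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Stand-ins for the real definitions (xlog_internal.h / guc.h). */
#define MAXFNAMELEN 64

typedef enum GucSource
{
	PGC_S_DEFAULT,
	PGC_S_FILE,
	PGC_S_SESSION
} GucSource;

/*
 * Same shape as a GUC string check hook: reject a recovery target name
 * that would not fit in a restore-point label of MAXFNAMELEN bytes.
 */
static bool
check_recovery_target_name(char **newval, void **extra, GucSource source)
{
	(void) extra;
	(void) source;

	return strlen(*newval) < MAXFNAMELEN;
}
```

Declaring the hook prototype in guc.h, as suggested above, keeps GucSource out of
xlog.h entirely.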

regards, tom lane




Re: [HACKERS] no test programs in contrib

2014-11-24 Thread Tom Lane
Alvaro Herrera alvhe...@2ndquadrant.com writes:
 We currently have a number of subdirectories for test-only programs:

 test_parser (a toy text search parser, added in 2007)
 dummy_seclabel (for SECURITY LABEL regression testing, added Sept 2010)
 worker_spi (for bgworkers, added Dec 2012)
 test_shm_mq (test program for shared memory queues, added Jan 2014)
 test_decoding (test program for logical decoding, added March 2014)

 What would you say if we were to move them to src/test/?  I could also
 see putting them in a brand new top-level directory, say testing/ or
 testprg/.

I think that test_parser is arguably useful as a skeleton/example for
user-written TS parsers, so I'd lean towards leaving it where it is,
but the others could move to src/test/ IMO.

 Now, I know there is some resistance to the idea of moving source code
 around.

Usually that's when there is (a) a lot of history and (b) concern about
back-patching fixes.  Neither of those arguments seem real strong for
these modules, with the possible exception of test_parser.

regards, tom lane




Re: [HACKERS] pg_class(relpersistence) of hash index

2014-11-24 Thread Tom Lane
Antonin Houska a...@cybertec.at writes:
 While checking how BM_PERMANENT flag is set (in buffer header), I noticed that
 hash index has it set too. Shouldn't pg_class(relpersistence) be 'u' in this
 case?

See archives; we do not currently have a way to support unlogged indexes
on logged tables.  The whole hash-index mess would be better if we did,
but getting there is less than trivial.

regards, tom lane




Re: [HACKERS] Replication connection URI?

2014-11-24 Thread Alex Shulgin
Heikki Linnakangas hlinnakan...@vmware.com writes:

 It appears that replication connection doesn't support URI but only the
 traditional conninfo string.

 src/backend/replication/libpqwalreceiver/libpqwalreceiver.c:99: in 
 libpqrcv_connect():

  snprintf(conninfo_repl, sizeof(conninfo_repl),
           "%s dbname=replication replication=true "
           "fallback_application_name=walreceiver",
           conninfo);

 A patch to fix this welcome?

 Yeah, seems like an oversight. Hopefully you can fix that without
 teaching libpqwalreceiver what connection URIs look like..

Please see attached.  We're lucky that PQconnectdbParams has an option
to parse and expand the first dbname parameter if it looks like a
connection string (or a URI).
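To illustrate what this enables, both of these walreceiver settings would be
accepted once the fix is in (host and user below are made up):

```
# plain conninfo form (works today):
primary_conninfo = 'host=192.0.2.10 port=5432 user=replicator'

# URI form (the case this patch fixes):
primary_conninfo = 'postgresql://replicator@192.0.2.10:5432'
```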

The first patch is not on topic, I just spotted this missing check.

The second one is a self-contained fix, but the third one which is the
actual patch depends on the second one, because it specifies the dbname
keyword two times: first to parse the conninfo/URI, then to override any
dbname provided by the user with replication pseudo-database name.

Have a nice day!
--
Alex

From 156e6faa96ad6a2ce58055ad72883ed78c576e5b Mon Sep 17 00:00:00 2001
From: Alex Shulgin a...@commandprompt.com
Date: Mon, 24 Nov 2014 16:55:50 +0300
Subject: [PATCH 1/3] Add missing check on OOM in expand_dbname path of
 conninfo_array_parse().

---
 src/interfaces/libpq/fe-connect.c | 8 
 1 file changed, 8 insertions(+)

diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
new file mode 100644
index 3fe8c21..d7f2ec2
*** a/src/interfaces/libpq/fe-connect.c
--- b/src/interfaces/libpq/fe-connect.c
*** conninfo_array_parse(const char *const *
*** 4402,4407 
--- 4402,4415 
  if (options[k].val)
  	free(options[k].val);
  options[k].val = strdup(str_option-val);
+ if (!options[k].val)
+ {
+ 	printfPQExpBuffer(errorMessage,
+ 	  libpq_gettext("out of memory\n"));
+ 	PQconninfoFree(options);
+ 	PQconninfoFree(dbname_options);
+ 	return NULL;
+ }
  break;
  			}
  		}
-- 
2.1.0

From 44d9d6a2c9cf5af83988f9d3b6eeb39c36104ef9 Mon Sep 17 00:00:00 2001
From: Alex Shulgin a...@commandprompt.com
Date: Mon, 24 Nov 2014 18:12:51 +0300
Subject: [PATCH 2/3] Allow further dbname=value to override conninfo parsed
 from an expanded dbname in conninfo_array_parse().

---
 src/interfaces/libpq/fe-connect.c | 18 +-
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
new file mode 100644
index d7f2ec2..5b45128
*** a/src/interfaces/libpq/fe-connect.c
--- b/src/interfaces/libpq/fe-connect.c
*** conninfo_parse(const char *conninfo, PQE
*** 4302,4311 
   * Defaults are supplied (from a service file, environment variables, etc)
   * for unspecified options, but only if use_defaults is TRUE.
   *
!  * If expand_dbname is non-zero, and the value passed for keyword dbname is a
!  * connection string (as indicated by recognized_connection_string) then parse
!  * and process it, overriding any previously processed conflicting
!  * keywords. Subsequent keywords will take precedence, however.
   */
  static PQconninfoOption *
  conninfo_array_parse(const char *const * keywords, const char *const * values,
--- 4302,4313 
   * Defaults are supplied (from a service file, environment variables, etc)
   * for unspecified options, but only if use_defaults is TRUE.
   *
!  * If expand_dbname is non-zero, and the value passed for the first occurrence
!  * of dbname keyword is a connection string (as indicated by
!  * recognized_connection_string) then parse and process it, overriding any
!  * previously processed conflicting keywords. Subsequent keywords will take
!  * precedence, however. In particular, a further occurrence of dbname may
!  * override the dbname provided in the connection string.
   */
  static PQconninfoOption *
  conninfo_array_parse(const char *const * keywords, const char *const * values,
*** conninfo_array_parse(const char *const *
*** 4381,4387 
  			}
  
  			/*
! 			 * If we are on the dbname parameter, and we have a parsed
  			 * connection string, copy those parameters across, overriding any
  			 * existing previous settings.
  			 */
--- 4383,4389 
  			}
  
  			/*
! 			 * If we are on the *first* dbname parameter, and we have a parsed
  			 * connection string, copy those parameters across, overriding any
  			 * existing previous settings.
  			 */
*** conninfo_array_parse(const char *const *
*** 4415,4420 
--- 4417,4428 
  		}
  	}
  }
+ /*
+  * Clear out the parsed dbname, so that a possible further
+  * occurrence of dbname has a chance to override it.
+  */
+ PQconninfoFree(dbname_options);
+ dbname_options = NULL;
  			}
  			else
  			{
-- 
2.1.0


Re: [HACKERS] Turning recovery.conf into GUCs

2014-11-24 Thread David G Johnston
Stephen Frost wrote
 * Alvaro Herrera (alvherre@) wrote:
  Stephen Frost wrote:
   * Amit Kapila (amit.kapila16@) wrote:
    What exactly you mean by 'disable postgresql.auto.conf',  do you
    mean user runs Alter System to remove that entry or manually disable
    some particular entry?
 
 Sigh, here we go again.  I don't think you can disable
 postgresql.auto.conf
 in the current code.  As I recall, the argument was that it's harder to
 diagnose problems if postgresql.auto.conf takes effect in some system
 but not others.
 
 I don't buy this at all.  What's going to be terribly confusing is to
 have config changes start happening for users who are managing their
 configs through a CM (which most should be..).  ALTER SYSTEM is going to
 cause more problems than it solves.

So what happens if someone makes postgresql.auto.conf read-only (to
everyone)?
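A quick sketch of that approach at the filesystem level (scratch directory, not
a real $PGDATA).  Note this would not stop a server running as root, and whether
ALTER SYSTEM is actually blocked depends on how it rewrites the file: if it
writes a temp file and renames it over, the directory permissions matter more
than the file's own mode.

```shell
cd "$(mktemp -d)"
touch postgresql.auto.conf
chmod 444 postgresql.auto.conf            # read-only to everyone
ls -l postgresql.auto.conf | cut -c1-10   # prints "-r--r--r--"
```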

David J.








Re: [HACKERS] superuser() shortcuts

2014-11-24 Thread Stephen Frost
* Andres Freund (and...@2ndquadrant.com) wrote:
 On 2014-11-21 10:12:40 -0500, Stephen Frost wrote:
  * Andres Freund (and...@2ndquadrant.com) wrote:
   I still think this change makes the error message more verbose, without
   any win in clarity.
  
  Can we agree that there should be consistency?
 
 Consistency with what? Are you thinking of the messages in
 aclck.c:no_priv_msg? I don't think that's really comparable. A
 permission denied on a relation is much easier to understand than
 replication permissions and such.

The discussion around wording started here, I believe:

20141022231834.ga1...@alvin.alvh.no-ip.org

Perhaps more to your question though, all checks of
'have_createdb_privilege' return 'permission denied to' style errors,
'have_createrole_privilege' returns 'permission denied' style for all
except where it returns the more specific 'must have admin option',
the 'has_rolcatupdate' check returns 'permission denied', and the
'has_bypassrls_privilege' check returns 'insufficient privilege' (note:
I'm in favor of changing that to use 'permission denied' instead too).

With regard to ereport() calls which return
ERRCODE_INSUFFICIENT_PRIVILEGE, things are pretty mixed up.  Some places
places say 'permission denied to' and then have 'must be superuser' as a
hint while others just say 'must be superuser' and then others are just
'permission denied' (such as aclchk.c:no_priv_msg).

 It'd surely not be better if pg_basebackup would a error message bar
 actually helpful information.

ENOPARSE.  I certainly agree that we want useful information to be
returned, in general..

 Btw, the replication permission use in
 postinit.c isn't related to slots.

Err, no, of course not, that should still be referring to starting
walsender.

Thanks!

Stephen




Re: [HACKERS] test_shm_mq failing on anole (was: Sending out a request for more buildfarm animals?)

2014-11-24 Thread Robert Haas
On Mon, Nov 24, 2014 at 9:36 AM, Christoph Berg c...@df7cb.de wrote:
 I'm still seeing trouble with test_shm_mq on mipsel (9.4 rc1):

Boy, that test has certainly caught its share of bugs, and not in the
places I would have expected.

The last round of wrestling with this had to do with working around
HP-UX behavior that differs from Linux.  So it seems like this is
likely to be an altogether different failure than what we saw on
anole.

 https://buildd.debian.org/status/fetch.php?pkg=postgresql-9.4arch=mipselver=9.4~rc1-1stamp=1416547779

 mips had the problem as well in the past (9.4 beta3):

 https://buildd.debian.org/status/fetch.php?pkg=postgresql-9.4arch=mipsver=9.4~beta3-3stamp=1413607370
 https://buildd.debian.org/status/fetch.php?pkg=postgresql-9.4arch=mipsver=9.4~beta3-1stamp=1412893135

For how long was it hung before you killed it?

 The mips beta3 failures eventually went away when the build was done
 on a different machine. This was the first time the mipsel build was
 done on this build machine, so it seems the problem might well be
 caused by some subarchitecture difference.

Does it fail every time when run on a machine where it fails sometimes?

It might not be related to the subarchitecture difference in any
particularly interesting way; it could just be a race condition that
is triggered, or not, depending on the precise timing of things, which
might vary based on subarchitecture, compiler, running kernel version,
etc.

 Anyone got an idea?

Not off-hand.  :-(

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Turning recovery.conf into GUCs

2014-11-24 Thread Jaime Casanova
On Fri, Nov 21, 2014 at 12:55 PM, Josh Berkus j...@agliodbs.com wrote:
 On 11/21/2014 09:35 AM, Alex Shulgin wrote:
 Hello,

 Here's an attempt to revive this patch.

 Yayy!  Thank you.

 People might not like me for the suggestion, but I think we should
 simply always include a 'recovery.conf' in $PGDATA
 unconditionally. That'd make this easier.
 Alternatively we could pass a filename to --write-recovery-conf.

 Well, the latest version of this patch fails to start when it sees
 'recovery.conf' in PGDATA:

   FATAL:  recovery.conf is not supported anymore as a recovery method
   DETAIL:  Refer to appropriate documentation about migration methods

 I've missed all the discussion behind this decision and after reading
 the ALTER SYSTEM/conf.d thread I'm even more confused so I'd like
 someone more knowledgeable to speak up on the status of this.

 The argument was that people wanted to be able to have an empty
 recovery.conf file as a "we're in standby" trigger so that they could
 preserve backwards compatibility with external tools.  I don't agree
 with this argument, but several people championed it.


well, my approach was that postgres just ignores the file completely. I
mean, recovery.conf will no longer mean anything special.
Then, every tool that creates recovery.conf in $PGDATA only has to add
an ALTER SYSTEM to include it

 The docs use the term "continuous recovery".

 Either way, from the code it is clear that we only stay in recovery if
 standby_mode is directly turned on.  This makes the whole check for a
 specially named file unnecessary, IMO: we should just check the value of
 standby_mode (which is off by default).


no. currently we enter recovery mode when postgres sees a
recovery.conf and stay in recovery mode when standby_mode is on or an
appropriate restore_command is provided.

which means recovery.conf has two uses:
1) start in recovery mode (not continuous)
2) provide parameters for recovery mode and for streaming

we still need a recovery trigger file that forces postgres to start
in recovery mode and acts accordingly to recovery GUCs

 So, what happens when someone does pg_ctl promote?  Somehow
 standby_mode needs to get set to off.  Maybe we write standby_mode =
 off to postgresql.auto.conf?


we need to delete or rename the recovery trigger file, all standby
GUCs are ignored (and recovery GUCs should be ignored too) unless
you're in recovery mode

 By the way, is there any use in setting standby_mode=on and any of the
 recovery_target* GUCs at the same time?

 See my thread on this topic from last week.  Short answer: No.


haven't read that thread, will do


  /* File path names (all relative to $PGDATA) */
 -#define RECOVERY_COMMAND_FILE  "recovery.conf"
 -#define RECOVERY_COMMAND_DONE  "recovery.done"
 +#define RECOVERY_ENABLE_FILE   "standby.enabled"

 Imo the file and variable names should stay coherent.

 Yes, once we settle on the name (and if we really need that extra
 trigger file.)


yes, we need it. but other names were suggested; "standby.enabled"
transmits the wrong idea

 Personally, if we have three methods of promotion:

 1) pg_ctl promote
 2) edit postgresql.conf and reload
 3) ALTER SYSTEM SET and reload

 ... I don't honestly think we need a 4th method for promotion.


this is not for promotion, this is to force postgres to start in
recovery mode and read recovery configuration parameters.

 The main reason to want a we're in recovery file is for PITR rather
 than for replication, where it has a number of advantages as a method,
 the main one being that recovery.conf is unlikely to be overwritten by
 the contents of the backup.


only that you need to start in recovery mode to start replication

-- 
Jaime Casanova www.2ndQuadrant.com
Professional PostgreSQL: Soporte 24x7 y capacitación
Phone: +593 4 5107566 Cell: +593 987171157




Re: [HACKERS] Stating the significance of Lehman Yao in the nbtree README

2014-11-24 Thread Peter Geoghegan
On Mon, Nov 24, 2014 at 3:51 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
 Ok, applied those extra paragraphs now, and marked as committed in the
 commitfest.

Thanks!

-- 
Peter Geoghegan




Re: [HACKERS] Turning recovery.conf into GUCs

2014-11-24 Thread Jaime Casanova
On Mon, Nov 24, 2014 at 12:24 PM, Jaime Casanova ja...@2ndquadrant.com wrote:
 On Fri, Nov 21, 2014 at 12:55 PM, Josh Berkus j...@agliodbs.com wrote:
 On 11/21/2014 09:35 AM, Alex Shulgin wrote:
 Hello,

 Here's an attempt to revive this patch.

 Yayy!  Thank you.

 People might not like me for the suggestion, but I think we should
 simply always include a 'recovery.conf' in $PGDATA
 unconditionally. That'd make this easier.
 Alternatively we could pass a filename to --write-recovery-conf.

 Well, the latest version of this patch fails to start when it sees
 'recovery.conf' in PGDATA:

   FATAL:  recovery.conf is not supported anymore as a recovery method
   DETAIL:  Refer to appropriate documentation about migration methods

 I've missed all the discussion behind this decision and after reading
 the ALTER SYSTEM/conf.d thread I'm even more confused so I'd like
 someone more knowledgeable to speak up on the status of this.

 The argument was that people wanted to be able to have an empty
 recovery.conf file as a we're in standby trigger so that they could
 preserve backwards compatibility with external tools.  I don't agree
 with this argument, but several people championed it.


 well, my approach was that postgres just ignores the file completely. I
 mean, recovery.conf will no longer mean anything special.
 Then, every tool that creates recovery.conf in $PGDATA only has to add
 an ALTER SYSTEM to include it


I mean ALTER SYSTEM on the master before copying, or just add the line in
postgresql.conf

but, as the patch shows, the agreement was to break backwards compatibility
and fail to start if the file is present

-- 
Jaime Casanova www.2ndQuadrant.com
Professional PostgreSQL: 24x7 support and training
Phone: +593 4 5107566 Cell: +593 987171157




Re: [HACKERS] proposal: plpgsql - Assert statement

2014-11-24 Thread Robert Haas
On Sun, Nov 23, 2014 at 4:40 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 I wrote:
 The core of that complaint is that we'd have to make ASSERT a plpgsql
 reserved word, which is true enough as things stand today.  However,
 why is it that plpgsql statement-introducing keywords need to be
 reserved?  The only reason for that AFAICS is to allow us to distinguish
 the statements from assignments.  But it seems like that could possibly
 be gotten around with some work.

 Attached is a draft patch that de-reserves 17 of plpgsql's existing
 reserved words, leaving 24 still reserved (of which only 9 are not also
 reserved in the core grammar).  This would make it painless to invent an
 ASSERT statement, as well as any other new statement type that's not
 associated with looping or block structure.  (The limiting factor on those
 is that if a statement could have an opt_block_label, its keyword still
 has to be reserved, unless we want to complicate matters a bunch more.)

I like the idea of making these keywords less-reserved, but I'm
wondering how future-proof it is.  It seems to rely heavily on the
fact that the syntax for lvalues is extremely restricted.  Allowing
foo(bar) as an lvalue, for example, would pretty much require
completely reverting this, AFAICS, as would any other type of lvalue
that needs more than one token of lookahead to identify.  How sure are
we that we're never going to want to do something like that?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] proposal: plpgsql - Assert statement

2014-11-24 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 On Sun, Nov 23, 2014 at 4:40 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Attached is a draft patch that de-reserves 17 of plpgsql's existing
 reserved words, leaving 24 still reserved (of which only 9 are not also
 reserved in the core grammar).  This would make it painless to invent an
 ASSERT statement, as well as any other new statement type that's not
 associated with looping or block structure.  (The limiting factor on those
 is that if a statement could have an opt_block_label, its keyword still
 has to be reserved, unless we want to complicate matters a bunch more.)

 I like the idea of making these keywords less-reserved, but I'm
 wondering how future-proof it is.  It seems to rely heavily on the
 fact that the syntax for lvalues is extremely restricted.  Allowing
 foo(bar) as an lvalue, for example, would pretty much require
 completely reverting this, AFAICS, as would any other type of lvalue
 that needs more than one token of lookahead to identify.  How sure are
 we that we're never going to want to do something like that?

Two comments on that:

1. What is the likely use-case for such a thing (considering SQL is not
exactly friendly to pointers or reference types), and is it more
interesting than new statements that we are going to reject on the grounds
that we don't want to reserve any more plpgsql keywords?

2. The actual behavior would be the same as it would be for the case of
implicit-CALL that we discussed upthread.  Namely, that it'd work just
fine as long as foo isn't any unreserved keyword.  So the net effect
would be that plpgsql keywords would be sort of semi-reserved, much like
col_name_keywords in the core grammar: you can use 'em as variable names
but not necessarily as function names.  With the current behavior where
they're fully reserved, you can't use them as either, not even where the
context is completely unambiguous.  I have not heard anyone complaining
that col_name_keyword is a dumb idea and we should make all keywords fully
reserved.
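Tom's point about semi-reserved words can be illustrated with a toy classifier (this is not the real plpgsql grammar; everything here is simplified to a single token of lookahead, which is exactly why an lvalue like foo(bar) would need more than this):

```python
# Toy sketch of the semi-reserved idea: a de-reserved word like "assert"
# introduces a statement *unless* the next token is ":=", in which case
# it's an ordinary variable being assigned to.  Fully reserved words
# (those that can carry block labels) always introduce statements.
UNRESERVED = {"assert", "get", "fetch"}   # hypothetical de-reserved words
RESERVED = {"if", "loop", "begin"}        # must stay reserved

def classify(tokens):
    head = tokens[0].lower()
    if head in RESERVED:
        return "statement"
    if head in UNRESERVED:
        # one token of lookahead resolves the ambiguity
        return "assignment" if tokens[1:2] == [":="] else "statement"
    return "assignment"

print(classify(["assert", "x", ">", "0"]))   # statement
print(classify(["assert", ":=", "42"]))      # assignment: 'assert' as a variable
```

The net effect is what Tom describes: the word works as a variable name, but a grammar with richer lvalue syntax could no longer decide with one token of lookahead.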

regards, tom lane




Re: [HACKERS] group locking: incomplete patch, just for discussion

2014-11-24 Thread Robert Haas
On Thu, Nov 20, 2014 at 4:45 PM, Andres Freund and...@2ndquadrant.com wrote:
 How about a frontend process having created a relation that then starts
 a parallel query. Then the frontend process ERRORs out and, in the
 course of that, does smgrDoPendingDeletes(). Which will delete the new
 relation. Boom. The background process might just be accessing it. If
 you think that's harmless, think e.g. what'd happen with heap cleanup
 records generated in the background process. They'd not replay.

I spent some time thinking about this case.  I don't think this is
really principally a locking issue; I think it it's really a timing
issue around transaction abort.  I believe that, if we wanted to be
able to write any tuples at all from within a parallel worker, there
are certain phases of transaction abort processing that would need to
happen only once we're sure that all of the backends involved have
done their local abort processing.  smgrDoPendingDeletes() is one, but
I think we have pretty much the same problem with
RecordTransactionAbort() and ProcArrayEndTransaction().  Without some
synchronization, it would be possible for a parallel backend to stamp
a tuple with an XID after the abort record is already written, or
after the transaction has already been removed from the ProcArray.
Are we prepared to view that sort of thing as harmless?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] proposal: plpgsql - Assert statement

2014-11-24 Thread Robert Haas
On Mon, Nov 24, 2014 at 2:07 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Two comments on that:

 1. What is the likely use-case for such a thing (considering SQL is not
 exactly friendly to pointers or reference types),

I happen to know that Oracle supports more possible LHS syntaxes in
PL/SQL than we do in PL/pgsql, including things like foo(1) := 3.
There is more than one problem with supporting that syntax in
PL/pgSQL, and I haven't heard anyone complaining about its absence.
But it doesn't have to be that thing particularly: anything that even
vaguely resembles a general expression syntax on the LHS is going to
run into this.

 and is it more
 interesting than new statements that we are going to reject on the grounds
 that we don't want to reserve any more plpgsql keywords?

Probably not, but my crystal ball isn't working too well today.

 2. The actual behavior would be the same as it would be for the case of
 implicit-CALL that we discussed upthread.  Namely, that it'd work just
 fine as long as foo isn't any unreserved keyword.  So the net effect
 would be that plpgsql keywords would be sort of semi-reserved, much like
 col_name_keywords in the core grammar: you can use 'em as variable names
 but not necessarily as function names.  With the current behavior where
 they're fully reserved, you can't use them as either, not even where the
 context is completely unambiguous.  I have not heard anyone complaining
 that col_name_keyword is a dumb idea and we should make all keywords fully
 reserved.

I see.  Well, that sounds fine, then.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] polymorphic types - enforce casting to most common type automatically

2014-11-24 Thread Pavel Stehule
Hello

right now, functions with more than one polymorphic argument are relatively
fragile due to the missing cast to the most common type. Some of our
functions, like coalesce, can do it, so this is surprising for our users.

a custom polymorphic function foo(anyelement, anyelement) works well for

foo(10, 20) or foo(10.1, 20.1), but not for foo(10, 20.1)

I think we can add a most-common-type search stage without
breaking backward compatibility.

What do you think about it?
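For illustration, the proposed stage could behave something like the sketch below (the promotion table is invented for the example; PostgreSQL's real implicit-cast rules are what COALESCE consults via select_common_type()):

```python
# Toy sketch of a "find the most common type" stage for polymorphic
# arguments.  The promotion lattice here is hypothetical, not the real
# implicit-cast rules.
PROMOTION = {("int", "numeric"): "numeric",
             ("numeric", "int"): "numeric",
             ("int", "float"): "float",
             ("float", "int"): "float"}

def common_type(types):
    result = types[0]
    for t in types[1:]:
        if t == result:
            continue
        result = PROMOTION.get((result, t))
        if result is None:
            raise TypeError("no common type")
    return result

print(common_type(["int", "int"]))      # int
print(common_type(["int", "numeric"]))  # numeric
```

With such a stage, foo(10, 20.1) would resolve both anyelement arguments to numeric instead of failing outright.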

Regards

Pavel


Re: [HACKERS] Turning recovery.conf into GUCs

2014-11-24 Thread Josh Berkus
On 11/24/2014 09:24 AM, Jaime Casanova wrote:
 ... I don't honestly think we need a 4th method for promotion.

 
 this is not for promotion, this is to force postgres to start in
 recovery mode and read recovery configuration parameters.
 
  The main reason to want a "we're in recovery" file is for PITR rather
 than for replication, where it has a number of advantages as a method,
 the main one being that recovery.conf is unlikely to be overwritten by
 the contents of the backup.

 
 only that you need to start in recovery mode to start replication

Right, but my point is that having a trigger file *is not necessary for
replication, only for PITR* -- and maybe not necessary even for PITR.
That is, in a streaming replication configuration, having a
standby_mode = on|off parameter is 100% sufficient to control
replication with the small detail that pg_ctl promote needs to set it
in pg.auto.conf or conf.d.
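Simulated on a scratch directory, the promotion step described here would amount to something like the following (hypothetical: pg_ctl promote does not do this today; the rewrite mechanics are exactly what is under discussion):

```shell
# Scratch-directory simulation of "promote flips standby_mode in
# postgresql.auto.conf"; no real server involved.
PGDATA=/tmp/demo_promote
mkdir -p "$PGDATA"
echo "standby_mode = on" > "$PGDATA/postgresql.auto.conf"

# A hypothetical promote would rewrite the setting and signal a reload:
sed -i "s/^standby_mode = on$/standby_mode = off/" "$PGDATA/postgresql.auto.conf"
cat "$PGDATA/postgresql.auto.conf"
```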

And, now, having given it some thought, I'm going to argue that it's not
required for PITR either, provided that we can use the auto.conf method.

Before I go into my ideas, though, what does the current patch do
regarding non-replication PITR?

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Disabling auto.conf WAS: Turning recovery.conf into GUCs

2014-11-24 Thread Josh Berkus
On 11/24/2014 07:29 AM, Stephen Frost wrote:
Sigh, here we go again.  I don't think you can disable 
postgresql.auto.conf
in the current code.  As I recall, the argument was that it's harder to
diagnose problems if postgresql.auto.conf takes effect in some system
but not others.
  
  I don't buy this at all.  What's going to be terribly confusing is to
  have config changes start happening for users who are managing their
  configs through a CM (which most should be..).  ALTER SYSTEM is going to
  cause more problems than it solves.

The main reason why disabling auto.conf was found not to be worthwhile
is that anyone with superuser rights can *already* change the config by
using ALTER DATABASE and ALTER ROLE, and that's a LOT less auditable
than pg.auto.conf is.  Heck, we don't even have a good system view for
SET parameters on DB objects.
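For what it's worth, those per-database and per-role SETs are at least visible in the pg_db_role_setting catalog, even without a friendly view; a query along these lines surfaces them (a sketch — the joins may need adjusting):

```sql
-- Per-database and per-role settings applied via ALTER DATABASE/ROLE ... SET
SELECT d.datname, r.rolname, s.setconfig
FROM pg_db_role_setting s
     LEFT JOIN pg_database d ON d.oid = s.setdatabase
     LEFT JOIN pg_roles r ON r.oid = s.setrole;
```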

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] test_shm_mq failing on anole (was: Sending out a request for more buildfarm animals?)

2014-11-24 Thread Christoph Berg
Re: Robert Haas 2014-11-24 
ca+tgmoacnppmdgg4n14u2bjujndnmou8xxhhpmvo+0u92ck...@mail.gmail.com
  https://buildd.debian.org/status/fetch.php?pkg=postgresql-9.4&arch=mipsel&ver=9.4~rc1-1&stamp=1416547779
 
  mips had the problem as well in the past (9.4 beta3):
 
  https://buildd.debian.org/status/fetch.php?pkg=postgresql-9.4&arch=mips&ver=9.4~beta3-3&stamp=1413607370
  https://buildd.debian.org/status/fetch.php?pkg=postgresql-9.4&arch=mips&ver=9.4~beta3-1&stamp=1412893135
 
 For how long was it hung before you killed it?

That was an automated build, killed after 300 minutes of inactivity.
(I guess that means it didn't output anything on the terminal for that
long, but I've never looked closer into sbuild.)

  The mips beta3 failures eventually went away when the build was done
  on a different machine. This was the first time the mipsel build was
  done on this build machine, so it seems the problem might well be
  caused by some subarchitecture difference.
 
 Does it fail every time when run on a machine where it fails sometimes?

So far there's a consistent host-to-fail-or-not mapping, but most
mips/mipsel build hosts have seen only one attempt so far that actually
got far enough to run the shm_mq test.

 It might not be related to the subarchitecture difference in any
 particularly interesting way; it could just be a race condition that
 is triggered, or not, depending on the precise timing of things, which
 might vary based on subarchitecture, compiler, running kernel version,
 etc.

Compiler and kernel should mostly be the same on all hosts (gcc from
Debian unstable inside the build chroots and kernel from stable on the
host system). The common bit on the failing hosts is that both mips
and mipsel are registered as Broadcom BCM91250A aka SWARM which is
hinting at a subarch problem. But of course that could just be a
coincidence.

At the moment I don't have access to the boxes where it was failing (the builds
succeed on the mips(el) porter hosts available to Debian developers).
I'll see if I can arrange access there and run a test.

Christoph
-- 
c...@df7cb.de | http://www.df7cb.de/




Re: [HACKERS] postgresql.auto.conf comments

2014-11-24 Thread Robert Haas
On Mon, Nov 24, 2014 at 3:26 PM, Thom Brown t...@linux.com wrote:
 I haven't seen this mentioned anywhere (although it may have as I haven't
 read through the entire history of it), but would others find it useful to
 have ALTER SYSTEM support comments?

Oh, please no.

The main thing that caused us to have no way of modifying
postgresql.conf via SQL for so many years is that it's not clear how
you can sensibly rewrite a file with comments in it.  For example, the
default postgresql.conf file has stuff like this in it:

#variable = someval

If variable gets set to a non-default value, you might want to
uncomment that line, but now you have to parse the comments, which
will be tedious and error-prone and sometimes make stupid decisions:

#Back in days of yore when dinosaurs ruled the earth, we had
#autovacuum_naptime=1h, but that turned out to be a bad idea.
#
#autovacuum_naptime=1min

It's hard for a non-human to know that the second one is the one that
you should uncomment.  There are lots of other problems that arise,
too; this is just an example.

It would perhaps be OK to have comments in postgresql.conf.auto if
they were designated in some way that told us which specific comment
was associated with which specific setting.  But we need to be very
careful not to design something that requires us to write a parser
that can ferret out human intent from context clues.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] postgresql.auto.conf comments

2014-11-24 Thread Thom Brown
On 24 November 2014 at 21:04, Robert Haas robertmh...@gmail.com wrote:

 On Mon, Nov 24, 2014 at 3:26 PM, Thom Brown t...@linux.com wrote:
  I haven't seen this mentioned anywhere (although it may have as I haven't
  read through the entire history of it), but would others find it useful
 to
  have ALTER SYSTEM support comments?

 Oh, please no.

 The main thing that caused us to have no way of modifying
 postgresql.conf via SQL for so many years is that it's not clear how
 you can sensibly rewrite a file with comments in it.  For example, the
 default postgresql.conf file has stuff like this in it:

 #variable = someval

 If variable gets set to a non-default value, you might want to
 uncomment that line, but now you have to parse the comments, which
 will be tedious and error-prone and sometimes make stupid decisions:

 #Back in days of yore when dinosaurs ruled the earth, we had
 #autovacuum_naptime=1h, but that turned out to be a bad idea.
 #
 #autovacuum_naptime=1min


I'm not sure this is an argument against supporting comments in
postgresql.auto.conf since it's specifically intended not to be edited by
humans, and commenting out a parameter will remove it from the file upon
further ALTER SYSTEM invocations anyway.

It would perhaps be OK to have comments in postgresql.conf.auto if
 they were designated in some way that told us which specific comment
 was associated with which specific setting.  But we need to be very
 careful not to design something that requires us to write a parser
 that can ferret out human intent from context clues.


Perhaps the parser could automatically remove any comment blocks which are
followed by a blank/empty line.

So if we had:

-
work_mem = '4MB'
# Set this lower as we're using a really fast disk interface
#seq_page_cost = '0.5'

# Set random_page_cost lower as we're using an SSD
random_page_cost = '1.0'
# This line has been manually added by a human without a newline
maintenance_work_mem = '1GB'

# This is an orphaned comment
-

I would expect the next modification to the file to reduce it to:

-
work_mem = '4MB'

# Set random_page_cost lower as we're using an SSD
random_page_cost = 1.0

# This line has been manually added
maintenance_work_mem = '1GB'
-
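Thom's pruning rule is mechanical enough to sketch (a hypothetical implementation, not proposed patch code): a run of comment lines survives only if it is immediately followed by a setting line; comment runs followed by a blank line or end of file are dropped.

```python
def prune_comments(text):
    """Keep a run of '#' comment lines only when it is immediately
    followed by a setting line; orphaned comment blocks are dropped."""
    out, pending = [], []
    for line in text.splitlines():
        if line.lstrip().startswith("#"):
            pending.append(line)
        elif line.strip() == "":
            pending = []            # orphaned comment block: discard
            if out and out[-1] != "":
                out.append("")
        else:
            out.extend(pending)     # comment block attached to a setting
            pending = []
            out.append(line)
    return "\n".join(out)           # trailing orphans in 'pending' dropped

sample = """work_mem = '4MB'
# Set this lower as we're using a really fast disk interface
#seq_page_cost = '0.5'

# Set random_page_cost lower as we're using an SSD
random_page_cost = '1.0'
"""
print(prune_comments(sample))
```

Running this on the example above drops the seq_page_cost block (all comments, then a blank line) and keeps the random_page_cost comment, matching the expected reduction.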

Thom


[HACKERS] CATUPDATE confusion?

2014-11-24 Thread Adam Brightwell
All,

While reviewing the supporting documentation and functions for role
attributes I noticed that there seems to be some confusion (at least for
me) with CATUPDATE.

This was prompted by the following comment from Peter about
'has_rolcatupdate' not having a superuser check.

http://www.postgresql.org/message-id/54590bbf.1080...@gmx.net

So, where I find this confusing is that the documentation supports this
functionality and the check keeps superuser separate from CATUPDATE...
however... comments and implementation in user.c state/do the opposite and
couple them together.

Documentation:
http://www.postgresql.org/docs/9.4/static/catalog-pg-authid.html - Role
can update system catalogs directly. (Even a superuser cannot do this
unless this column is true)

src/backend/commands/user.c

/* superuser gets catupdate right by default */
new_record[Anum_pg_authid_rolcatupdate - 1] = BoolGetDatum(issuper);

and...

/*
 * issuper/createrole/catupdate/etc
 *
 * XXX It's rather unclear how to handle catupdate.  It's probably best to
 * keep it equal to the superuser status, otherwise you could end up with
 * a situation where no existing superuser can alter the catalogs,
 * including pg_authid!
 */
if (issuper >= 0)
{
  new_record[Anum_pg_authid_rolsuper - 1] = BoolGetDatum(issuper > 0);
  new_record_repl[Anum_pg_authid_rolsuper - 1] = true;

  new_record[Anum_pg_authid_rolcatupdate - 1] = BoolGetDatum(issuper > 0);
  new_record_repl[Anum_pg_authid_rolcatupdate - 1] = true;
}

Perhaps this is not an issue but it seemed odd to me, especially after
giving Peter's comment more thought.  So, I suppose the question is whether
or not a superuser check should be added to 'has_rolcatupdate' or not?  I
believe I understand the reasoning for coupling the two at role
creation/alter time, however, is there really a case where a superuser
wouldn't be allowed to bypass this check?  Based on the comments, there
seems like there is potentially enough concern to allow it.  And if it is
allowed, couldn't CATUPDATE then be treated like every other attribute and
the coupling with superuser removed?  Thoughts?

Thanks,
Adam

--
Adam Brightwell - adam.brightw...@crunchydatasolutions.com
Database Engineer - www.crunchydatasolutions.com


Re: [HACKERS] postgresql.auto.conf comments

2014-11-24 Thread Robert Haas
On Mon, Nov 24, 2014 at 4:22 PM, Thom Brown t...@linux.com wrote:
 Perhaps the parser could automatically remove any comment blocks which are
 followed by a blank/empty line.

Well, if we can agree on something, it doesn't bother me any.  I'm
just saying we spent years arguing about it, because we couldn't agree
on anything.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] postgresql.auto.conf comments

2014-11-24 Thread Stephen Frost
* Thom Brown (t...@linux.com) wrote:
 On 24 November 2014 at 20:40, Stephen Frost sfr...@snowman.net wrote:
  I will point out that this use of COMMENT is novel though, no?  Comments
  are normally handled as COMMENT ON blah IS 'whatever';  ALTER SYSTEM
  is certainly special but I'm not sure I like the idea of having some
  commands which support in-command COMMENT while others don't.
 
 I typed that out in my original email, thought about it, then removed it
 because I decided that perhaps it isn't the same class of comment as
 COMMENT ON uses.  That affects objects, whereas this would apply to
 individual config parameters within a file.  Also bear in mind that if
 someone runs:
 
 SHOW maintenance_work_mem;
 
 And sees 4GB, they may decide to add a comment based on that, even though
 the source of that setting isn't postgresql.auto.conf.

I'd be fine with supporting two distinct object types, or perhaps one
object type and two sub-types, to allow those to be different (no clue
what the actual syntax would be..).

I'd actually prefer that these comments be represented in both
pg_description as well as being in the actual config file- otherwise how
are you going to be able to view what's there from the database before
you go and change it?  The fact that the information is also persisted
into the actual .auto.conf file is a convenience for admins who happen
to be looking there but I'd consider what's in pg_description to be the
canonical source.

Thanks,

Stephen




Re: [HACKERS] postgresql.auto.conf comments

2014-11-24 Thread Stephen Frost
* Robert Haas (robertmh...@gmail.com) wrote:
 On Mon, Nov 24, 2014 at 4:22 PM, Thom Brown t...@linux.com wrote:
  Perhaps the parser could automatically remove any comment blocks which are
  followed by a blank/empty line.
 
 Well, if we can agree on something, it doesn't bother me any.  I'm
 just saying we spent years arguing about it, because we couldn't agree
 on anything.

I'm mystified by this whole discussion.

For my 2c (and, yes, it's far too late to change most likely, but too
bad) is that the entirety of postgresql.auto.conf should be generated,
and generated with the catalog as the canonical source.  No one should
ever be modifying it directly and if they do then we act just as badly
as if they go hand-modifying heap files and screw it up.

We shouldn't have ever allowed postgresql.auto.conf to be the canonical
source of anything.  We didn't do that with pg_shadow back when we
required that and I don't see any good reason to that now.

Thanks,

Stephen




Re: [HACKERS] Turning recovery.conf into GUCs

2014-11-24 Thread Alex Shulgin

Jaime Casanova ja...@2ndquadrant.com writes:

 Either way, from the code it is clear that we only stay in recovery if
 standby_mode is directly turned on.  This makes the whole check for a
 specially named file unnecessary, IMO: we should just check the value of
 standby_mode (which is off by default).

 no. Currently we enter recovery mode when postgres sees a
 recovery.conf and stay in recovery mode when standby_mode is on or an
 appropriate restore_command is provided.

 which means recovery.conf has two uses:
 1) start in recovery mode (not continuous)
 2) provide parameters for recovery mode and for streaming

 we still need a recovery trigger file that forces postgres to start
 in recovery mode and act according to the recovery GUCs

Yes, these were my latest findings also, but if instead of removing the
trigger file upon successful recovery the server would set
standby_mode=off, that would also work.

--
Alex




Re: [HACKERS] Turning recovery.conf into GUCs

2014-11-24 Thread Alex Shulgin

Josh Berkus j...@agliodbs.com writes:
 
 only that you need to start in recovery mode to start replication

 Right, but my point is that having a trigger file *is not necessary for
 replication, only for PITR* -- and maybe not necessary even for PITR.
 That is, in a streaming replication configuration, having a
 standby_mode = on|off parameter is 100% sufficient to control
 replication with the small detail that pg_ctl promote needs to set it
 in pg.auto.conf or conf.d.

 And, now, having given it some thought, I'm going to argue that it's not
 required for PITR either, provided that we can use the auto.conf method.

 Before I go into my ideas, though, what does the current patch do
 regarding non-replication PITR?

It removes the $PGDATA/standby.enable trigger file that it relies on to
start the PITR in the first place.

--
Alex




Re: [HACKERS] Turning recovery.conf into GUCs

2014-11-24 Thread Andres Freund
On 2014-11-24 12:02:52 -0800, Josh Berkus wrote:
 On 11/24/2014 09:24 AM, Jaime Casanova wrote:
  ... I don't honestly think we need a 4th method for promotion.
 
  
  this is not for promotion, this is to force postgres to start in
  recovery mode and read recovery configuration parameters.
  
  The main reason to want a "we're in recovery" file is for PITR rather
  than for replication, where it has a number of advantages as a method,
  the main one being that recovery.conf is unlikely to be overwritten by
  the contents of the backup.
 
  
  only that you need to start in recovery mode to start replication
 
 Right, but my point is that having a trigger file *is not necessary for
 replication, only for PITR* -- and maybe not necessary even for PITR.
 That is, in a streaming replication configuration, having a
 standby_mode = on|off parameter is 100% sufficient to control
 replication with the small detail that pg_ctl promote needs to set it
 in pg.auto.conf or conf.d.
 
 And, now, having given it some thought, I'm going to argue that it's not
 required for PITR either, provided that we can use the auto.conf method.
 
 Before I go into my ideas, though, what does the current patch do
 regarding non-replication PITR?

Guys. We aren't rereading the GUCs in the relevant places.  It's also
decidedly nontrivial to make standby_mode PGC_SIGHUP. Don't make this
patch more complex than it has to be. That's what stalled it the last
times round.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Turning recovery.conf into GUCs

2014-11-24 Thread Josh Berkus
On 11/24/2014 02:00 PM, Alex Shulgin wrote:
 
 Josh Berkus j...@agliodbs.com writes:

 only that you need to start in recovery mode to start replication

 Right, but my point is that having a trigger file *is not necessary for
 replication, only for PITR* -- and maybe not necessary even for PITR.
 That is, in a streaming replication configuration, having a
 standby_mode = on|off parameter is 100% sufficient to control
 replication with the small detail that pg_ctl promote needs to set it
 in pg.auto.conf or conf.d.

 And, now, having given it some thought, I'm going to argue that it's not
 required for PITR either, provided that we can use the auto.conf method.

 Before I go into my ideas, though, what does the current patch do
 regarding non-replication PITR?
 
 It removes that $PGDATA/standby.enable trigger file it relies on to
 start the PITR in the first place.

OK, and that's required for replication too?  I'm OK with that if it
gets the patch in.

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Turning recovery.conf into GUCs

2014-11-24 Thread Alex Shulgin

Josh Berkus j...@agliodbs.com writes:

 Before I go into my ideas, though, what does the current patch do
 regarding non-replication PITR?
 
 It removes that $PGDATA/standby.enable trigger file it relies on to
 start the PITR in the first place.

 OK, and that's required for replication too?  I'm OK with that if it
 gets the patch in.

In the current form of the patch, yes.  Though I don't think I like it.

--
Alex




Re: [HACKERS] pg_background (and more parallelism infrastructure patches)

2014-11-24 Thread Robert Haas
On Thu, Nov 20, 2014 at 11:00 AM, Robert Haas robertmh...@gmail.com wrote:
 On Thu, Nov 20, 2014 at 7:30 AM, Amit Kapila amit.kapil...@gmail.com wrote:
 Few compilation errors in the patch:
 1contrib\postgres_fdw\postgres_fdw.c(2107): error C2198:
 'set_config_option' : too few arguments for call
 1contrib\postgres_fdw\postgres_fdw.c(2111): error C2198:
 'set_config_option' : too few arguments for call
 1contrib\postgres_fdw\postgres_fdw.c(2115): error C2198:
 'set_config_option' : too few arguments for call
 2contrib\dblink\dblink.c(2983): error C2198: 'set_config_option' : too few
 arguments for call

 Oops.  Good catch.  Fixed in the attached version.

Committed.

Unfortunately, I forgot to credit you and Andres as reviewers; sorry about that.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Turning recovery.conf into GUCs

2014-11-24 Thread Josh Berkus
On 11/24/2014 02:18 PM, Alex Shulgin wrote:
 
 Josh Berkus j...@agliodbs.com writes:

 Before I go into my ideas, though, what does the current patch do
 regarding non-replication PITR?

 It removes that $PGDATA/standby.enable trigger file it relies on to
 start the PITR in the first place.

 OK, and that's required for replication too?  I'm OK with that if it
 gets the patch in.
 
 In the current form of the patch, yes.  Though I don't think I like it.

One step at a time.


-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com




Re: [HACKERS] Comment header for src/test/regress/regress.c

2014-11-24 Thread Ian Barwick
On 14/11/21 22:10, Heikki Linnakangas wrote:
 On 11/21/2014 06:23 AM, Ian Barwick wrote:
 I thought it might be useful to add a few words at the top
 of 'src/test/regress/regress.c' to explain what it does and
 to help differentiate it from 'pg_regress.c' and
 'pg_regress_main.c'.
 
 Makes sense, committed. I remember being a bit confused on that myself,
 when first reading the pg_regress code.

Thanks!

Ian Barwick

-- 
 Ian Barwick   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] tracking commit timestamps

2014-11-24 Thread Alvaro Herrera
 And here is v10 which fixes conflicts with Heikki's WAL API changes (no
 changes otherwise).

After some slight additional changes, here's v11, which I intend to
commit early tomorrow.  The main change is moving the test module from
contrib to src/test/modules.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
*** a/contrib/pg_upgrade/pg_upgrade.c
--- b/contrib/pg_upgrade/pg_upgrade.c
***
*** 423,430  copy_clog_xlog_xid(void)
  	/* set the next transaction id and epoch of the new cluster */
  	prep_status("Setting next transaction ID and epoch for new cluster");
  	exec_prog(UTILITY_LOG_FILE, NULL, true,
! 			  "\"%s/pg_resetxlog\" -f -x %u \"%s\"",
! 			  new_cluster.bindir, old_cluster.controldata.chkpnt_nxtxid,
  			  new_cluster.pgdata);
  	exec_prog(UTILITY_LOG_FILE, NULL, true,
  			  "\"%s/pg_resetxlog\" -f -e %u \"%s\"",
--- 423,432 ----
  	/* set the next transaction id and epoch of the new cluster */
  	prep_status("Setting next transaction ID and epoch for new cluster");
  	exec_prog(UTILITY_LOG_FILE, NULL, true,
! 			  "\"%s/pg_resetxlog\" -f -x %u -c %u \"%s\"",
! 			  new_cluster.bindir,
! 			  old_cluster.controldata.chkpnt_nxtxid,
! 			  old_cluster.controldata.chkpnt_nxtxid,
  			  new_cluster.pgdata);
  	exec_prog(UTILITY_LOG_FILE, NULL, true,
  			  "\"%s/pg_resetxlog\" -f -e %u \"%s\"",
*** a/contrib/pg_xlogdump/rmgrdesc.c
--- b/contrib/pg_xlogdump/rmgrdesc.c
***
*** 10,15 
--- 10,16 
  
  #include access/brin_xlog.h
  #include access/clog.h
+ #include access/commit_ts.h
  #include access/gin.h
  #include access/gist_private.h
  #include access/hash.h
*** a/doc/src/sgml/config.sgml
--- b/doc/src/sgml/config.sgml
***
*** 2673,2678  include_dir 'conf.d'
--- 2673,2692 
/listitem
   /varlistentry
  
+  varlistentry id=guc-track-commit-timestamp xreflabel=track_commit_timestamp
+   termvarnametrack_commit_timestamp/varname (typebool/type)/term
+   indexterm
+primaryvarnametrack_commit_timestamp/ configuration parameter/primary
+   /indexterm
+   listitem
+para
+ Record commit time of transactions. This parameter
+ can only be set in filenamepostgresql.conf/ file or on the server
+ command line. The default value is literaloff/literal.
+/para
+   /listitem
+  /varlistentry
+ 
   /variablelist
  /sect2
  
*** a/doc/src/sgml/func.sgml
--- b/doc/src/sgml/func.sgml
***
*** 15923,15928  SELECT collation for ('foo' COLLATE de_DE);
--- 15923,15960 
  For example literal10:20:10,14,15/literal means
  literalxmin=10, xmax=20, xip_list=10, 14, 15/literal.
 /para
+ 
+para
+ The functions shown in xref linkend=functions-committs
+ provide information about transactions that have been already committed.
+ These functions mainly provide information about when the transactions
+ were committed. They only provide useful data when
+ xref linkend=guc-track-commit-timestamp configuration option is enabled
+ and only for transactions that were committed after it was enabled.
+/para
+ 
+table id=functions-committs
+ titleCommitted transaction information/title
+ tgroup cols=3
+  thead
+   rowentryName/entry entryReturn Type/entry entryDescription/entry/row
+  /thead
+ 
+  tbody
+   row
+entryliteralfunctionpg_xact_commit_timestamp(parameterxid/parameter)/function/literal/entry
+entrytypetimestamp with time zone/type/entry
+entryget commit timestamp of a transaction/entry
+   /row
+   row
+entryliteralfunctionpg_last_committed_xact()/function/literal/entry
+entryparameterxid/ typexid/, parametertimestamp/ typetimestamp with time zone//entry
+entryget transaction Id and commit timestamp of latest transaction commit/entry
+   /row
+  /tbody
+ /tgroup
+/table
+ 
/sect1
  
sect1 id=functions-admin
*** a/doc/src/sgml/ref/pg_resetxlog.sgml
--- b/doc/src/sgml/ref/pg_resetxlog.sgml
***
*** 22,27  PostgreSQL documentation
--- 22,28 
   refsynopsisdiv
cmdsynopsis
 commandpg_resetxlog/command
+arg choice=optoption-c/option replaceable class=parameterxid/replaceable/arg
 arg choice=optoption-f/option/arg
 arg choice=optoption-n/option/arg
 arg choice=optoption-o/option replaceable class=parameteroid/replaceable/arg
***
*** 77,88  PostgreSQL documentation
/para
  
para
!The option-o/, option-x/, option-e/,
!option-m/, option-O/,
!and option-l/
 options allow the next OID, next transaction ID, next transaction ID's
!epoch, next and oldest multitransaction ID, next multitransaction offset, and WAL
!starting address values to be set manually.  These are only needed when
 commandpg_resetxlog/command is unable to determine appropriate values
 by reading 

Re: [HACKERS] Role Attribute Bitmask Catalog Representation

2014-11-24 Thread Andres Freund
On 2014-11-24 15:39:22 -0500, Adam Brightwell wrote:
 * int64 (C) to int8 (SQL) mapping for genbki.

That definitely should be a separate patch. Which can be committed much
earlier than the rest - even if we don't actually end up needing it for
this feature, it's still good to have it.

 * replace all role attributes columns in pg_authid with single int64 column
 named rolattr.
 * update CreateRole and AlterRole to use rolattr.
 * update all has_*_privilege functions to check rolattr.
 * builtin SQL function 'has_role_attribute' that takes a role oid and text
 name of the attribute as input and returns a boolean.

I think if we're going to do this - and I'm not yet convinced that
that's the best route - we should add a function that returns all the
permissions a user has. Right now that's quite easily queryable, but it
won't be after moving everything into one column. You'd need to manually
use all the has_* functions... Yes, you've added them already to
pg_roles, but there are sometimes good reasons to go to pg_authid
instead.

Greetings,

Andres Freund




Re: [HACKERS] Role Attribute Bitmask Catalog Representation

2014-11-24 Thread David G Johnston
Adam Brightwell wrote
 A few related threads/discussions/posts:
 
 * http://www.postgresql.org/message-id/

 20141016115914.GQ28859@.snowman

 *
 http://www.postgresql.org/message-id/CA+TgmobkYXNOWKEKzX2qGPSr_nvacFGueV=

 orxND-xmZvOVYvg@.gmail

 * http://www.postgresql.org/message-id/

 20141016115914.GQ28859@.snowman


FYI: the first and third links are the same...was there another one you
meant to provide instead?

David J.


--
View this message in context: 
http://postgresql.nabble.com/Role-Attribute-Bitmask-Catalog-Representation-tp5828078p5828106.html
Sent from the PostgreSQL - hackers mailing list archive at Nabble.com.




[HACKERS] TODO item: Accept aliases for values in ROW(...) constructor

2014-11-24 Thread Craig Ringer
Hi all

Especially with the introduction of json support, but also in the past
with hstore and other things, I've sometimes found myself wishing I
could provide aliases in an anonymous row constructor, e.g.

ROW(x AS something, y AS somethingelse)

The same thing can be done using a scalar subquery wrapping a
subquery-in-FROM returning a single row, but it's pretty ugly:

(SELECT r FROM (SELECT x AS something, y AS somethingelse) r)

That's what I've done to produce json a lot, though, and will need to
continue to do so until 9.4's json_build_object etc are in the wild.

While that'll solve the need for json, I'm sure others will come up. So
in case someone feels like exploring the parser a little, does it seem
reasonable to add ROW(...) with aliases to the TODO?

Or, alternately, and perhaps more generally useful, allow rowtype
specifications for anonymous records outside function-call context, like:

  ROW(x1,y1) AS r(x integer, y integer)




Related:

http://stackoverflow.com/q/13227142/398670

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] TODO item: Accept aliases for values in ROW(...) constructor

2014-11-24 Thread Craig Ringer

 ROW(x AS something, y AS somethingelse)

Apologies, it looks like Pavel already brought this up:

http://www.postgresql.org/message-id/cafj8prb1t1w6g0sppn-jetyzjpluuz_fxtnbme5okd3xxvf...@mail.gmail.com

and I missed it.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] postgres_fdw behaves oddly

2014-11-24 Thread Etsuro Fujita
(2014/11/23 6:16), Tom Lane wrote:
 I committed this with some cosmetic adjustments, and one not-so-cosmetic
 one:

Thanks for improving and committing the patch!

Best regards,
Etsuro Fujita




[HACKERS] New wal format distorts pg_xlogdump --stats

2014-11-24 Thread Andres Freund
Hi,

The new WAL format calculates FPI vs plain record data like:
rec_len = XLogRecGetDataLen(record) + SizeOfXLogRecord;
fpi_len = record->decoded_record->xl_tot_len - rec_len;

Due to the amount of data now handled outside the main data portion,
that doesn't seem correct to me. As an example, a couple thousand
inserts with full_page_writes=off now yields:

Type                   N  (%)       Record size  (%)       FPI size  (%)       Combined size  (%)
----                   -  ---       -----------  ---       --------  ---       -------------  ---
Heap/INSERT        30167  ( 99.53)       814509  ( 99.50)    965856  ( 99.54)       1780365  ( 99.52)

I think fpi_len now needs to count only the sum of the actual
lengths of the block images, and all the rest needs to be rec_len.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] postgresql.auto.conf comments

2014-11-24 Thread Amit Kapila
On Tue, Nov 25, 2014 at 1:56 AM, Thom Brown t...@linux.com wrote:

 Hi,

 I haven't seen this mentioned anywhere (although it may have as I haven't
read through the entire history of it), but would others find it useful to
have ALTER SYSTEM support comments?

 e.g.

 ALTER SYSTEM SET autovacuum_vacuum_scale_factor = 0.01
 WITH [ INLINE | HEADLINE ] COMMENT $As most of the tables on the system
are so big, we're setting this parameter lower than the default to keep
bloat under more control.$;

 I just made up inline and headline to suggest that we could allow either:

 autovacuum_vacuum_scale_factor = 0.01 # As most of the tables...

 and

 # As most of the tables on the system are so big, we're setting this
parameter
 # lower than the default to keep bloat under more control.
 autovacuum_vacuum_scale_factor = 0.01


I think the biggest hurdle to achieving this is to enhance/implement
the parser for it; right now the function that parses the config file
(ParseConfigFp()) gives us a list of name-value pair items.  I think
if we could improve it, or write a different function that returns
name-value-comment items, then it won't be too difficult for
Alter System to support it.


 The rationale being that it's often the case one wants to document the
reason for a parameter being configured so, but there's no way of doing
this for settings in postgresql.auto.conf as they'll be wiped out if added
manually.


I think this is a completely genuine request, and I have seen that other
systems which support Alter System do support comments along
with each entry.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] The problems of PQhost()

2014-11-24 Thread Noah Misch
On Wed, Jan 22, 2014 at 11:48:26PM +0900, Fujii Masao wrote:
 (3) PQhost() cannot return the hostaddr.

 We can fix the problem (3) by changing PQhost() so that it also
 returns the hostaddr. But this change might break the existing
 application using PQhost(). So, I added new libpq function PQhostaddr()
 which returns the hostaddr, and changed \conninfo so that it reports
 correct connection information by using both PQhost() and PQhostaddr().

 + varlistentry id=libpq-pqhostaddr
 +  term
 +   functionPQhostaddr/function
 +   indexterm
 +primaryPQhostaddr/primary
 +   /indexterm
 +  /term
 + 
 +  listitem
 +   para
 +Returns the server numeric IP address or host name of the connection.
 + synopsis
 + char *PQhostaddr(const PGconn *conn);
 + /synopsis
 +   /para
 +  /listitem
 + /varlistentry

From reading this documentation, I assumed this function would return a
non-empty value for every TCP connection.  After all, every TCP connection has
a peer IP address.  In fact, PQhostaddr() returns the raw value of the
hostaddr connection parameter, whether from a libpq function argument or
from the PGHOSTADDR environment variable.  (If the parameter and environment
variable are absent, it returns NULL.  Adding hostaddr= to the connection
string makes it return the empty string.)  A caller wanting the specific raw
value of a parameter could already use PQconninfo().  I suspect this new
function will confuse more than help.  What do you think of reverting it and
having \conninfo use PQconninfo() to discover any hostaddr value?




Re: [HACKERS] no test programs in contrib

2014-11-24 Thread Noah Misch
On Mon, Nov 24, 2014 at 10:49:45AM -0300, Alvaro Herrera wrote:
 What's the general opinion on having test programs somewhere other than
 contrib/ ?

General opinion: slightly favorable.

 We currently have a number of subdirectories for test-only programs:
 
 test_parser (a toy text search parser, added in 2007)
 dummy_seclabel (for SECURITY LABEL regression testing, added Sept 2010)
 worker_spi (for bgworkers, added Dec 2012)
 test_shm_mq (test program for shared memory queues, added Jan 2014)
 test_decoding (test program for logical decoding, added March 2014)

 What would you say if we were to move them to src/test/?  I could also
 see putting them in a brand new top-level directory, say testing/ or
 testprg/.

It's revealing that two of the first three responses each doubted the fit of
one of those moves.  I think that shows the lines aren't so bright after all,
and this specific proposal is not strong enough.  The line between a test
module and a sample-code module is blurry.

 Now, I know there is some resistance to the idea of moving source code
 around.  If this proposal is objected to, would people object the idea
 of putting the commit timestamp test module in src/test/commit_ts
 instead of the patch author's proposal, contrib/test_committs?

I'd rather defend moving source code or defend continuing to dump in contrib
than defend a src/test/modules defined as "test-oriented modules added after
November 2014".

Incidentally, +1 on test_commit_ts in preference to test_committs.




[HACKERS] A possbile typo in src/bin/pg_dump.c

2014-11-24 Thread Amit Langote

Hi,

I found in pg_dump.c what I believe to be a typo.

-* relationships are set up by doing ALTER INHERIT rather than
using
+* relationships are set up by doing ALTER TABLE INHERIT
rather than using

Attached fixes this if appropriate to do so.

Thanks,
Amit


20141125-bin-pg_dump-typo.patch
Description: Binary data



Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

2014-11-24 Thread Michael Paquier
On Thu, Nov 13, 2014 at 12:15 AM, Andres Freund and...@2ndquadrant.com wrote:

 On 2014-11-12 10:13:18 -0500, Robert Haas wrote:
  On Tue, Nov 11, 2014 at 4:27 AM, Andres Freund and...@2ndquadrant.com 
  wrote:
   The more important thing here is that I see little chance of this
   getting in before Heikki's larger rework of the wal format gets
   in. Since that'll change everything around anyway I'm unsure how much
   point there is to iterate till that's done. I know that sucks, but I
   don't see much of an alternative.
 
  Why not do this first?  Heikki's patch seems quite far from being
  ready to commit at this point - it significantly increases WAL volume
  and reduces performance.  Heikki may well be able to fix that, but I
  don't know that it's a good idea to make everyone else wait while he
  does.

 Because it imo builds the infrastructure to do the compression more
 sanely. I.e. provide proper space to store information about the
 compressedness of the blocks and such.

Now that the new WAL format has been committed, here are some comments
about this patch and what we can do. First, in xlogrecord.h there is a
short description of how a record looks like. The portion of a block
data looks like that for a given block ID:
1) block image if BKPBLOCK_HAS_IMAGE, whose size is BLCKSZ - hole
2) data related to the block if BKPBLOCK_HAS_DATA, with a size
determined by what the caller inserts with XLogRegisterBufData for a
given block.
The data associated with a block has a length that cannot be
determined before XLogRegisterBufData is used. We could add a 3rd
parameter to XLogEnsureRecordSpace to allocate enough space for a
buffer wide enough to allocate data for a single buffer before
compression (BLCKSZ * number of blocks + total size of block data) but
this seems really error-prone for new features as well as existing
features. So for those reasons I think that it would be wise to not
include the block data in what is compressed.

This brings me to the second point: we would need to reorder the
entries in the record chain if we are going to do the compression of
all the blocks inside a single buffer, it has the following
advantages:
- More compression, as proved with measurements on this thread
And the following disadvantages:
- Need to change the entries in record chain once again for this
release to something like that for the block data (note that current
record chain format is quite elegant btw):
compressed block images
block data of ID = M
block data of ID = N
etc.
- Slightly longer replay time, because we would need to loop twice
through the block data to fill in DecodedBkpBlock: once to decompress
all the blocks, and once for the data of each block. It is not much
because there are not that many blocks replayed per record, but still.

So, all those things gathered, with a couple of hours hacking this
code, make me think that it would be more elegant to do the
compression per block and not per group of blocks in a single record.
I actually found a couple of extra things:
- pg_lzcompress and pg_lzdecompress should be in src/port to make
pg_xlogdump work. Note that pg_lzdecompress has one call to elog,
hence it would be better to have it return a boolean status and let the
caller raise an error if decompression failed.
- In the previous patch versions, a WAL record was doing unnecessary
processing: first it built uncompressed image block entries, then
compressed them, and replaced in the record chain the existing
uncompressed records by the compressed ones.
- CompressBackupBlocks enforced compression to BLCKSZ, which was
incorrect for groups of blocks; it should have been BLCKSZ *
num_blocks.
- it looks to be better to add a simple uint16 in
XLogRecordBlockImageHeader to store the compressed length of a block;
if 0, the block is not compressed. This helps the new decoder facility
track the length of data received. If a block has a hole, it is
compressed without it.

Now here are two patches:
- Move pg_lzcompress.c to src/port to make pg_xlogdump work with the
2nd patch. I imagine that this would be useful as well for client
utilities, similarly to what has been done for pg_crc some time ago.
- The patch itself doing the FPW compression, note that it passes
regression tests but at replay there is still one bug, triggered
roughly before numeric.sql when replaying changes on a standby. I am
still looking at it, but it does not prevent basic testing as well as
a continuation of the discussion.
For now here are the patches either way, so feel free to comment.
Regards,
-- 
Michael
From eda0730d991f8b4dfbacc4d7a953ec5bff8b2ffe Mon Sep 17 00:00:00 2001
From: Michael Paquier mich...@otacoo.com
Date: Fri, 21 Nov 2014 13:40:11 +0900
Subject: [PATCH 1/2] Fix flag marking GIN index as being built for new entries

This was somewhat missing in the current implementation, and led
to problems for code that needed special handling with fresh indexes
being built. Note that this does not impact current 

Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes

2014-11-24 Thread Michael Paquier
On Tue, Nov 25, 2014 at 3:33 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
 For now here are the patches either way, so feel free to comment.
And of course the patches are incorrect...
-- 
Michael
From f0f4f8789c774d6b6fe69e66df0efffb63a9de52 Mon Sep 17 00:00:00 2001
From: Michael Paquier mich...@otacoo.com
Date: Tue, 25 Nov 2014 14:05:59 +0900
Subject: [PATCH 1/2] Move pg_lzcompress.c to src/port

Exposing compression and decompression APIs of pglz makes possible its
use by extensions and contrib modules. pglz_decompress contained a call
to elog to emit an error message in case of corrupted data. This function
is changed to return a boolean status to let its callers return an error
instead.
---
 src/backend/access/heap/tuptoaster.c  |   9 +-
 src/backend/utils/adt/Makefile|   4 +-
 src/backend/utils/adt/pg_lzcompress.c | 779 -
 src/include/utils/pg_lzcompress.h |   2 +-
 src/port/Makefile |   4 +-
 src/port/pg_lzcompress.c  | 781 ++
 src/tools/msvc/Mkvcbuild.pm   |   4 +-
 7 files changed, 794 insertions(+), 789 deletions(-)
 delete mode 100644 src/backend/utils/adt/pg_lzcompress.c
 create mode 100644 src/port/pg_lzcompress.c

diff --git a/src/backend/access/heap/tuptoaster.c b/src/backend/access/heap/tuptoaster.c
index ce44bbd..48b5d38 100644
--- a/src/backend/access/heap/tuptoaster.c
+++ b/src/backend/access/heap/tuptoaster.c
@@ -142,7 +142,8 @@ heap_tuple_untoast_attr(struct varlena * attr)
 
 			attr = (struct varlena *) palloc(PGLZ_RAW_SIZE(tmp) + VARHDRSZ);
 			SET_VARSIZE(attr, PGLZ_RAW_SIZE(tmp) + VARHDRSZ);
-			pglz_decompress(tmp, VARDATA(attr));
+			if (!pglz_decompress(tmp, VARDATA(attr)))
+				elog(ERROR, "compressed data is corrupted");
 			pfree(tmp);
 		}
 	}
@@ -167,7 +168,8 @@ heap_tuple_untoast_attr(struct varlena * attr)
 
 		attr = (struct varlena *) palloc(PGLZ_RAW_SIZE(tmp) + VARHDRSZ);
 		SET_VARSIZE(attr, PGLZ_RAW_SIZE(tmp) + VARHDRSZ);
-		pglz_decompress(tmp, VARDATA(attr));
+		if (!pglz_decompress(tmp, VARDATA(attr)))
+			elog(ERROR, "compressed data is corrupted");
 	}
 	else if (VARATT_IS_SHORT(attr))
 	{
@@ -239,7 +241,8 @@ heap_tuple_untoast_attr_slice(struct varlena * attr,
 
 		preslice = (struct varlena *) palloc(size);
 		SET_VARSIZE(preslice, size);
-		pglz_decompress(tmp, VARDATA(preslice));
+		if (!pglz_decompress(tmp, VARDATA(preslice)))
+			elog(ERROR, "compressed data is corrupted");
 
 		if (tmp != (PGLZ_Header *) attr)
 			pfree(tmp);
diff --git a/src/backend/utils/adt/Makefile b/src/backend/utils/adt/Makefile
index 3ea9bf4..20e5ff1 100644
--- a/src/backend/utils/adt/Makefile
+++ b/src/backend/utils/adt/Makefile
@@ -25,8 +25,8 @@ OBJS = acl.o arrayfuncs.o array_selfuncs.o array_typanalyze.o \
 	jsonfuncs.o like.o lockfuncs.o mac.o misc.o nabstime.o name.o \
 	network.o network_gist.o network_selfuncs.o \
 	numeric.o numutils.o oid.o oracle_compat.o \
-	orderedsetaggs.o pg_lzcompress.o pg_locale.o pg_lsn.o \
-	pgstatfuncs.o pseudotypes.o quote.o rangetypes.o rangetypes_gist.o \
+	orderedsetaggs.o pg_locale.o pg_lsn.o pgstatfuncs.o \
+	pseudotypes.o quote.o rangetypes.o rangetypes_gist.o \
 	rangetypes_selfuncs.o rangetypes_spgist.o rangetypes_typanalyze.o \
 	regexp.o regproc.o ri_triggers.o rowtypes.o ruleutils.o \
 	selfuncs.o tid.o timestamp.o trigfuncs.o \
diff --git a/src/backend/utils/adt/pg_lzcompress.c b/src/backend/utils/adt/pg_lzcompress.c
deleted file mode 100644
index fe08890..000
--- a/src/backend/utils/adt/pg_lzcompress.c
+++ /dev/null
@@ -1,779 +0,0 @@
-/* --
- * pg_lzcompress.c -
- *
- *		This is an implementation of LZ compression for PostgreSQL.
- *		It uses a simple history table and generates 2-3 byte tags
- *		capable of backward copy information for 3-273 bytes with
- *		a max offset of 4095.
- *
- *		Entry routines:
- *
- *			bool
- *			pglz_compress(const char *source, int32 slen, PGLZ_Header *dest,
- *		  const PGLZ_Strategy *strategy);
- *
- *source is the input data to be compressed.
- *
- *slen is the length of the input data.
- *
- *dest is the output area for the compressed result.
- *	It must be at least as big as PGLZ_MAX_OUTPUT(slen).
- *
- *strategy is a pointer to some information controlling
- *	the compression algorithm. If NULL, the compiled
- *	in default strategy is used.
- *
- *The return value is TRUE if compression succeeded,
- *FALSE if not; in the latter case the contents of dest
- *are undefined.
- *
- *			void
- *			pglz_decompress(const PGLZ_Header *source, char *dest)
- *
- *source is the compressed input.
- *
- *dest is the area where the uncompressed data will be
- *	written to. It is the callers responsibility to
- *	provide enough space. The required amount can be
- *	obtained with the macro PGLZ_RAW_SIZE(source).
- *
- *	The data is written to buff exactly as it was handed
- *	to pglz_compress(). No terminating 

Re: [HACKERS] Replication connection URI?

2014-11-24 Thread Alex Shulgin

Alex Shulgin a...@commandprompt.com writes:

 Heikki Linnakangas hlinnakan...@vmware.com writes:

 It appears that replication connection doesn't support URI but only the
 traditional conninfo string.

 src/backend/replication/libpqwalreceiver/libpqwalreceiver.c:99: in 
 libpqrcv_connect():

 snprintf(conninfo_repl, sizeof(conninfo_repl),
          "%s dbname=replication replication=true fallback_application_name=walreceiver",
          conninfo);

 A patch to fix this welcome?

 Yeah, seems like an oversight. Hopefully you can fix that without
 teaching libpqwalreceiver what connection URIs look like..

 Please see attached.  We're lucky that PQconnectdbParams has an option
 to parse and expand the first dbname parameter if it looks like a
 connection string (or a URI).

 The first patch is not on topic, I just spotted this missing check.

 The second one is a self-contained fix, but the third one, which is the
 actual patch, depends on the second one, because it specifies the dbname
 keyword two times: first to parse the conninfo/URI, then to override any
 dbname provided by the user with the "replication" pseudo-database name.

These patches are really simple, I hope a committer will pick them up?
Or should I add them to the commitfest?

Also, I'd rather get this committed first, then rebase that
recovery.conf-GUC patch onto it and submit an updated version.

Thanks.
--
Alex

