Re: [HACKERS] Typo in function header for recently added function errhidecontext

2015-01-04 Thread Fujii Masao
On Mon, Jan 5, 2015 at 3:19 PM, Amit Kapila amit.kapil...@gmail.com wrote:
 /*
  * errhidestmt --- optionally suppress CONTEXT: field of log entry
  *
  * This should only be used for verbose debugging messages where the
 repeated
  * inclusion of CONTEXT: bloats the log volume too much.
  */
 int
 errhidecontext(bool hide_ctx)


 Here in function header, function name should be
 errhidecontext.

Fixed. Thanks!

Regards,

-- 
Fujii Masao


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] tracking commit timestamps

2015-01-04 Thread Craig Ringer
On 12/19/2014 02:53 PM, Noah Misch wrote:
 The test assumed that no two transactions of a given backend will get the same
 timestamp value from now().  That holds so long as ticks of the system time
 are small enough.  Not so on at least some Windows configurations.

Most Windows systems with nothing else running will have 15 ms timer
granularity, so all timestamps captured within the same 15 ms tick
will have the same value.

If you're running other programs that use the multimedia timer APIs
(including Google Chrome, MS SQL Server, and all sorts of other apps you
might not expect) you'll probably have 1ms timer granularity instead.

Since PostgreSQL 9.4 and below capture time on Windows using
GetSystemTime, the sub-millisecond part is lost anyway. On 9.5 it's
retained but will usually be some fixed value because the timer tick is
still 1ms.

If you're on Windows 8 or Windows 2012 and running PostgreSQL 9.5
(master), but not earlier versions, you'll get sub-microsecond
resolution like on sensible platforms.

Some details here: https://github.com/2ndQuadrant/pg_sysdatetime

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] addRangeTableEntry() relies on pstate, contrary to its documentation

2015-01-04 Thread Andres Freund
Hi,

Since at least 61e532820824504aa92ad93c427722d3fa9c1632 from 2009,
addRangeTableEntry() relies on pstate being != NULL via its call to
isLockedRefname(), even though its documentation says:
 * If pstate is NULL, we just build an RTE and return it without adding it
 * to an rtable list.

I think we should just remove the above sentence and the code supporting it
from addRangeTableEntry* and add asserts ensuring pstate is passed in.

Off list, Tom commented on that suggestion with:
 NAK.  I'm absolutely certain that there is, or at least once was, code
 that relied on that feature.  Maybe not for addRangeTableEntry itself,
 but for at least one of its siblings.

Yea, there had to be, for the code to be written that way. I'm not
exactly an expert in that area of the code, and lots of it predates my
involvement in the project...

 Before removing the feature I'd
 want to see a trace-down of where that usage went away and an analysis
 of why the need for it won't come back.

Ok. I've only cursorily checked the callers. The number of call chains to
all of them makes it hard to verify conclusively :(

 An easy alternative fix, of course, is to not call isLockedRefname if
 we don't have a pstate (or else put the pstate==NULL test inside it).

I'm not a big fan of that - won't that essentially cause the wrong
locklevel to be used and thus open the door for lock upgrade deadlocks?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] event trigger test exception message

2015-01-04 Thread Andrew Dunstan


I don't wish to seem humorless, but I think this should probably be changed:

root/HEAD/pgsql/src/test/regress/sql/event_trigger.sql:248:  RAISE 
EXCEPTION 'I''m sorry Sir, No Rewrite Allowed.';


Quite apart from any other reason, the Sir does seem a bit sexist - we 
have no idea of the gender of the reader. Probably just 'sorry, no 
rewrite allowed' would suffice.


(Noticed while looking at buildfarm failures.)

cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] addRangeTableEntry() relies on pstate, contrary to its documentation

2015-01-04 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 Off list Tom commented that suggestion with:
 An easy alternative fix, of course, is to not call isLockedRefname if
 we don't have a pstate (or else put the pstate==NULL test inside it).

 I'm not a big fan of that - won't that essentially cause the wrong
 locklevel to be used and thus open the door for lock upgrade deadlocks?

Well, it would amount to assuming that the table was not mentioned in
FOR UPDATE.  Depending on context, that might be perfectly appropriate.

A quick grep finds these places that are visibly passing NULL to one or
another addRangeTableEntry* function:

convert_ANY_sublink_to_join(): pulls up an ANY subquery with

rte = addRangeTableEntryForSubquery(NULL, ...

UpdateRangeTableOfViewParse(): inserts NEW/OLD RTEs using

rt_entry1 = addRangeTableEntryForRelation(NULL, viewRel,
  makeAlias("old", NIL),
  false, false);
rt_entry2 = addRangeTableEntryForRelation(NULL, viewRel,
  makeAlias("new", NIL),
  false, false);

So you would certainly break these callers.  I'm not sure whether any of
the callers that are passing down their own pstate arguments can ever be
passed a NULL; I'm inclined to doubt it though.

An alternative of course is to not have this API spec for all
addRangeTableEntry* functions, but just the two used this way.
I don't much care for that though.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: addRangeTableEntry() relies on pstate, contrary to its documentation

2015-01-04 Thread Andres Freund
On 2015-01-04 12:33:34 -0500, Tom Lane wrote:
 Andres Freund and...@2ndquadrant.com writes:
  Off list Tom commented that suggestion with:
  An easy alternative fix, of course, is to not call isLockedRefname if
  we don't have a pstate (or else put the pstate==NULL test inside it).
 
  I'm not a big fan of that - won't that essentially cause the wrong
  locklevel to be used and thus open the door for lock upgrade deadlocks?
 
 Well, it would amount to assuming that the table was not mentioned in
 FOR UPDATE.  Depending on context, that might be perfectly appropriate.

Yea. Given that there apparently (judging by the absence of crash reports
in the last couple of years) aren't even indirect callers, it's a bit hard
to say ;). That said, it seems to be the easiest way to handle this, even
though it's not a nice fix.

 A quick grep finds these places that are visibly passing NULL to one or
 another addRangeTableEntry* function:
 
 convert_ANY_sublink_to_join(): pulls up an ANY subquery with
 
 rte = addRangeTableEntryForSubquery(NULL, ...
 
 UpdateRangeTableOfViewParse(): inserts NEW/OLD RTEs using
 
 rt_entry1 = addRangeTableEntryForRelation(NULL, viewRel,
   makeAlias("old", NIL),
   false, false);
 rt_entry2 = addRangeTableEntryForRelation(NULL, viewRel,
   makeAlias("new", NIL),
   false, false);

Yea, found those as well by now... There used to be some more in the
past, but never many, AFAICS.

 An alternative of course is to not have this API spec for all
 addRangeTableEntry* functions, but just the two used this way.
 I don't much care for that though.

Yea :(. And creating a faux pstate for the above callers isn't
particularly nice either.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Re: Problems with approach #2 to value locking (INSERT ... ON CONFLICT UPDATE/IGNORE patch)

2015-01-04 Thread Peter Geoghegan
On Sat, Jan 3, 2015 at 10:16 PM, Peter Geoghegan p...@heroku.com wrote:
 I looked at the code in more detail, and realized that there were old
 bugs in the exclusion constraint related modifications. I attach a
 delta patch that fixes them. This is a combined patch that is all that
 is needed to apply on top of v1.8.vallock2.tar.gz [1] to have all
 available bugfixes.

I've updated Jeff Janes' test suite to support testing of exclusion
constraints that are equivalent to unique indexes:

https://github.com/petergeoghegan/jjanes_upsert/commit/a941f423e9500b847b1a9d1805ba52cb11db0ae9

(This requires a quick hack to the Postgres source code to accept
exclusion constraints as ON CONFLICT UPDATE arbiters).

So far, everything seems okay with exclusion constraints, as far as I
can determine using the stress tests that we have. This is an
encouraging sign.

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Something is broken in logical decoding with CLOBBER_CACHE_ALWAYS

2015-01-04 Thread Andrew Dunstan


On 12/15/2014 12:04 PM, Andres Freund wrote:


I think the safest fix would be to defer catchup interrupt processing
while you're in this mode.  You don't really want to be processing any
remote sinval messages at all, I'd think.

Well, we need to do relmap, smgr and similar things. So I think that'd
be more complicated than we want.





Where are we on this? Traffic seems to have gone quiet but we still have
a bunch of buildfarm animals red.


cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Something is broken in logical decoding with CLOBBER_CACHE_ALWAYS

2015-01-04 Thread Andres Freund
On January 4, 2015 9:51:43 PM CET, Andrew Dunstan and...@dunslane.net wrote:

On 12/15/2014 12:04 PM, Andres Freund wrote:

 I think the safest fix would be to defer catchup interrupt
processing
 while you're in this mode.  You don't really want to be processing
any
 remote sinval messages at all, I'd think.
 Well, we need to do relmap, smgr and similar things. So I think
that'd
 be more complicated than we want.




Where are we on this? Traffic seems to have gone quiet but we still have
a bunch of buildfarm animals red.

I've a simple fix (similar to what I originally outlined) which I plan to post
soonish. I've tried a bunch of things roughly in the vein of Tom's suggestions,
but they are all more invasive and still incomplete.

Andres

-- 
Please excuse brevity and formatting - I am writing this on my mobile phone.

Andres Freund  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Redesigning checkpoint_segments

2015-01-04 Thread Josh Berkus
On 01/03/2015 12:56 AM, Heikki Linnakangas wrote:
 On 01/03/2015 12:28 AM, Josh Berkus wrote:
 On 01/02/2015 01:57 AM, Heikki Linnakangas wrote:
 wal_keep_segments does not affect the calculation of CheckPointSegments.
 If you set wal_keep_segments high enough, checkpoint_wal_size will be
 exceeded. The other alternative would be to force a checkpoint earlier,
 i.e. lower CheckPointSegments, so that checkpoint_wal_size would be
 honored. However, if you set wal_keep_segments high enough, higher than
 checkpoint_wal_size, it's impossible to honor checkpoint_wal_size no
 matter how frequently you checkpoint.

 So you're saying that wal_keep_segments is part of the max_wal_size
 total, NOT in addition to it?
 
 Not sure what you mean. wal_keep_segments is an extra control that can
 prevent WAL segments from being recycled. It has the same effect as
 archive_command failing for N most recent segments, if that helps.

I mean, if I have these settings:

max_wal_size* = 256MB
wal_keep_segments = 8

... then my max wal size is *still* 256MB, NOT 384MB?

If that's the case (and I think it's a good plan), then as a follow-on,
we should prevent users from setting wal_keep_segments to more than 50%
of max_wal_size, no?

(* max_wal_size == checkpoint_wal_size, per prior email)

-- 
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] event trigger test exception message

2015-01-04 Thread Fabrízio de Royes Mello
On Sunday, January 4, 2015, Andrew Dunstan and...@dunslane.net
wrote:


 I don't wish to seem humorless, but I think this should probably be
 changed:

 root/HEAD/pgsql/src/test/regress/sql/event_trigger.sql:248:  RAISE
 EXCEPTION 'I''m sorry Sir, No Rewrite Allowed.';

 Quite apart from any other reason, the Sir does seem a bit sexist - we
 have no idea of the gender of the reader. Probably just 'sorry, no rewrite
 allowed' would suffice.


+1




-- 
Fabrízio de Royes Mello
Consultoria/Coaching PostgreSQL
 Timbira: http://www.timbira.com.br
 Blog: http://fabriziomello.github.io
 Linkedin: http://br.linkedin.com/in/fabriziomello
 Twitter: http://twitter.com/fabriziomello
 Github: http://github.com/fabriziomello


Re: [HACKERS] event trigger test exception message

2015-01-04 Thread Alvaro Herrera
Andrew Dunstan wrote:
 
 I don't wish to seem humorless, but I think this should probably be changed:
 
 root/HEAD/pgsql/src/test/regress/sql/event_trigger.sql:248:  RAISE EXCEPTION
 'I''m sorry Sir, No Rewrite Allowed.';
 
 Quite apart from any other reason, the Sir does seem a bit sexist - we
 have no idea of the gender of the reader. Probably just 'sorry, no rewrite
 allowed' would suffice.

This seems like pointless tinkering to me.  Should I start introducing female
pronouns in test error messages, to measure how much this would annoy my
male counterparts?  This is not a user-visible message in any case.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] orangutan seizes up during isolation-check

2015-01-04 Thread Noah Misch
On Mon, Jan 05, 2015 at 02:25:09PM +0900, Michael Paquier wrote:
 On Fri, Jan 2, 2015 at 1:04 PM, Noah Misch n...@leadboat.com wrote:
  The first attached patch, for all branches, adds LOG-level messages and an
  assertion.  So cassert builds will fail hard, while others won't.  The 
  second
  patch, for master only, changes the startup-time message to FATAL.  If we
  decide to use FATAL in all branches, I would just squash them into one.
 
 +   errdetail("Please report this to pgsql-b...@postgresql.org.")));
 Er, is mentioning a mailing list in an error message really necessary?

Necessary?  No.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pg_basebackup -x/X doesn't play well with archive_mode wal_keep_segments

2015-01-04 Thread Fujii Masao
On Sun, Jan 4, 2015 at 5:47 AM, Andres Freund and...@2ndquadrant.com wrote:
 On 2015-01-03 16:03:36 +0100, Andres Freund wrote:
 On 2014-12-31 16:32:19 +0100, Andres Freund wrote:
  On 2014-12-05 16:18:02 +0900, Fujii Masao wrote:
   On Fri, Dec 5, 2014 at 9:28 AM, Andres Freund and...@2ndquadrant.com 
   wrote:
So I think we just need to make pg_basebackup create the .ready
files.
  
   s/.ready/.done? If yes, +1.
 
  That unfortunately requires changes to both backend and pg_basebackup to
  support fetch and stream modes respectively.
 
  I've attached a preliminary patch for this. I'd appreciate feedback. I
  plan to commit it in a couple of days, after some more
  testing/rereading.

 Attached are two updated patches that I am starting to backport
 now. I've fixed a couple minor oversights. And tested the patches.

 Pushed this after some major pain with backporting.

Thanks!

 pg_basebackup really
 changed heavily since its introduction, and desperately needs some
 restructuring.

The patch seems to break pg_receivexlog. I got the following error message
while running pg_receivexlog.

pg_receivexlog: could not create archive status file
mmm/archive_status/00010003.done: No such file or
directory

Regards,

-- 
Fujii Masao


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] The return value of allocate_recordbuf()

2015-01-04 Thread Michael Paquier
On Mon, Dec 29, 2014 at 8:14 PM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
 On 12/26/2014 09:31 AM, Fujii Masao wrote:

 Hi,

 While reviewing FPW compression patch, I found that allocate_recordbuf()
 always returns TRUE though its source code comment says that FALSE is
 returned if out of memory. Its return value is checked in two places,
 which is clearly useless.

 allocate_recordbuf() was introduced by 7fcbf6a, and then changed by
 2c03216 so that palloc() is used instead of malloc and FALSE is never
 returned
 even if out of memory. So this seems an oversight of 2c03216. Maybe
 we should change it so that it checks whether we can enlarge the memory
 with the requested size before actually allocating the memory?


 Hmm. There is no way to check beforehand if a palloc() will fail because of
 OOM. We could check for MaxAllocSize, though.

 Actually, before 2c03216, when we used malloc() here, the maximum record
 size was 4GB. Now it's only 1GB, because of MaxAllocSize. Are we OK with
 that, or should we use palloc_huge?

IMO, we should use repalloc_huge, and remove the status checks from
allocate_recordbuf and XLogReaderAllocate, relying on the fact that we
*will* report a failure on OOM instead of returning a NULL pointer.
That's, for example, something logical.c relies on: ctx->reader cannot
be NULL (adding Andres in CC about that btw):
ctx->reader = XLogReaderAllocate(read_page, ctx);
ctx->reader->private_data = ctx;
Note that the other code paths return an OOM error message if the
reader allocated is NULL.

Speaking of which, attached are two patches.

The first one is for master, implementing the idea above: all the
previous OOM messages are handled by palloc & friends instead of
each code path reporting the OOM individually with a specific
message, and repalloc_huge is used to cover the fact that we cannot
allocate more than 1GB with palloc.

Note that for 9.4, I think that we should complain about an OOM in
logical.c, where malloc is used, as the process would currently simply
crash if NULL is returned by XLogReaderAllocate. That's the object of
the second patch.

Thoughts?
-- 
Michael
From 5f22e4d1b202a5234e28fde97fd0a13a6fcf9171 Mon Sep 17 00:00:00 2001
From: Michael Paquier mich...@otacoo.com
Date: Mon, 5 Jan 2015 14:15:08 +0900
Subject: [PATCH] Complain about OOM of XLOG reader allocation in logical
 decoding code

This will prevent a crash if allocation cannot be done properly by
XLogReaderAllocate as it uses a malloc.
---
 src/backend/replication/logical/logical.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/src/backend/replication/logical/logical.c b/src/backend/replication/logical/logical.c
index 80b6102..ea363c0 100644
--- a/src/backend/replication/logical/logical.c
+++ b/src/backend/replication/logical/logical.c
@@ -162,6 +162,12 @@ StartupDecodingContext(List *output_plugin_options,
 	ctx->slot = slot;
 
 	ctx->reader = XLogReaderAllocate(read_page, ctx);
+	if (!ctx->reader)
+		ereport(ERROR,
+				(errcode(ERRCODE_OUT_OF_MEMORY),
+				 errmsg("out of memory"),
+				 errdetail("Failed while allocating an XLog reading processor.")));
+
 	ctx->reader->private_data = ctx;
 
 	ctx->reorder = ReorderBufferAllocate();
-- 
2.2.1

From 0a208287a9cf1ffa5585ad835c15b5eae4645e8a Mon Sep 17 00:00:00 2001
From: Michael Paquier mich...@otacoo.com
Date: Mon, 5 Jan 2015 14:02:00 +0900
Subject: [PATCH] Fix XLOG reader allocation assuming that palloc can be NULL

2c03216 updated allocate_recordbuf to use palloc instead of malloc
when allocating a record buffer, invalidating the assumption made by
some code paths that an OOM would surface as a NULL pointer to be
reported with a dedicated error message; that cannot happen with
palloc because it always raises an error on OOM. Hence remove those
checks, and at the same time use repalloc_huge, now needed as well on
the frontend side by at least pg_xlogdump, to cover the fact that
palloc cannot allocate more than 1GB.
---
 contrib/pg_xlogdump/pg_xlogdump.c   |  2 --
 src/backend/access/transam/xlog.c   |  8 +---
 src/backend/access/transam/xlogreader.c | 33 ++---
 src/common/fe_memutils.c|  6 ++
 src/include/common/fe_memutils.h|  1 +
 5 files changed, 18 insertions(+), 32 deletions(-)

diff --git a/contrib/pg_xlogdump/pg_xlogdump.c b/contrib/pg_xlogdump/pg_xlogdump.c
index 9f05e25..762269e 100644
--- a/contrib/pg_xlogdump/pg_xlogdump.c
+++ b/contrib/pg_xlogdump/pg_xlogdump.c
@@ -916,8 +916,6 @@ main(int argc, char **argv)
 
 	/* we have everything we need, start reading */
 	xlogreader_state = XLogReaderAllocate(XLogDumpReadPage, private);
-	if (!xlogreader_state)
-		fatal_error("out of memory");
 
 	/* first find a valid recptr to start from */
 	first_record = XLogFindNextRecord(xlogreader_state, private.startptr);
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index e54..7d2ca49 100644
--- 

Re: [HACKERS] orangutan seizes up during isolation-check

2015-01-04 Thread Michael Paquier
On Fri, Jan 2, 2015 at 1:04 PM, Noah Misch n...@leadboat.com wrote:
 The first attached patch, for all branches, adds LOG-level messages and an
 assertion.  So cassert builds will fail hard, while others won't.  The second
 patch, for master only, changes the startup-time message to FATAL.  If we
 decide to use FATAL in all branches, I would just squash them into one.

+   errdetail("Please report this to pgsql-b...@postgresql.org.")));
Er, is mentioning a mailing list in an error message really necessary?
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] tracking commit timestamps

2015-01-04 Thread Fujii Masao
On Thu, Dec 4, 2014 at 12:08 PM, Fujii Masao masao.fu...@gmail.com wrote:
 On Wed, Dec 3, 2014 at 11:54 PM, Alvaro Herrera
 alvhe...@2ndquadrant.com wrote:
 Pushed with some extra cosmetic tweaks.

 I got the following assertion failure when I executed 
 pg_xact_commit_timestamp()
 in the standby server.

 =# select pg_xact_commit_timestamp('1000'::xid);
 TRAP: FailedAssertion(!(((oldestCommitTs) != ((TransactionId) 0)) ==
 ((newestCommitTs) != ((TransactionId) 0))), File: commit_ts.c,
 Line: 315)
 server closed the connection unexpectedly
 This probably means the server terminated abnormally
 before or while processing the request.
 The connection to the server was lost. Attempting reset: 2014-12-04
 12:01:08 JST sby1 LOG:  server process (PID 15545) was terminated by
 signal 6: Aborted
 2014-12-04 12:01:08 JST sby1 DETAIL:  Failed process was running:
 select pg_xact_commit_timestamp('1000'::xid);

 The way to reproduce this problem is

 #1. set up and start the master and standby servers with
 track_commit_timestamp disabled
 #2. enable track_commit_timestamp in the master and restart the master
 #3. run some write transactions
 #4. enable track_commit_timestamp in the standby and restart the standby
 #5. execute select pg_xact_commit_timestamp('1000'::xid) in the standby

 BTW, at the step #4, I got the following log messages. This might be a hint 
 for
 this problem.

 LOG:  file pg_commit_ts/ doesn't exist, reading as zeroes
 CONTEXT:  xlog redo Transaction/COMMIT: 2014-12-04 12:00:16.428702+09;
 inval msgs: catcache 59 catcache 58 catcache 59 catcache 58 catcache
 45 catcache 44 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7
 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7
 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 snapshot 2608
 relcache 16384

This problem still happens in the master.

Regards,

-- 
Fujii Masao


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] event trigger test exception message

2015-01-04 Thread David Fetter
On Sun, Jan 04, 2015 at 12:20:09PM -0500, Andrew Dunstan wrote:
 
 I don't wish to seem humorless, but I think this should probably be changed:
 
 root/HEAD/pgsql/src/test/regress/sql/event_trigger.sql:248:  RAISE EXCEPTION
 'I''m sorry Sir, No Rewrite Allowed.';
 
 Quite apart from any other reason, the Sir does seem a bit sexist - we
 have no idea of the gender of the reader. Probably just 'sorry, no rewrite
 allowed' would suffice.

We should change it to, "I'm sorry.  I can't do that."
http://en.wikipedia.org/wiki/2001:_A_Space_Odyssey_%28film%29

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] The return value of allocate_recordbuf()

2015-01-04 Thread Michael Paquier
On Thu, Jan 1, 2015 at 1:10 AM, Robert Haas robertmh...@gmail.com wrote:
 On Mon, Dec 29, 2014 at 6:14 AM, Heikki Linnakangas
 hlinnakan...@vmware.com wrote:
 Hmm. There is no way to check beforehand if a palloc() will fail because of
 OOM. We could check for MaxAllocSize, though.

 I think we need a version of palloc that returns NULL instead of
 throwing an error.  The error-throwing behavior is for the best in
 almost every case, but I think the no-error version would find enough
 users to be worthwhile.
Compression is one of those areas, be it compression of WAL or another
type. The new API would allow falling back to the non-compression code
path if the buffer allocation for compression cannot be done because of
an OOM.

FWIW, I actually looked at how to do that a couple of weeks back, and
you just need a wrapper function, whose content is the existing
AllocSetAlloc, taking an additional boolean flag to either trigger an
ERROR or return NULL if an OOM occurs. On top of that we will need a
new method in MemoryContextMethods, let's call it alloc_safe, and its
equivalent, the new palloc_safe.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Typo in function header for recently added function errhidecontext

2015-01-04 Thread Amit Kapila
/*
 * errhidestmt --- optionally suppress CONTEXT: field of log entry
 *
 * This should only be used for verbose debugging messages where the
repeated
 * inclusion of CONTEXT: bloats the log volume too much.
 */
int
errhidecontext(bool hide_ctx)


Here in function header, function name should be
errhidecontext.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com