Re: Fetching timeline during recovery

2020-02-11 Thread Jehan-Guillaume de Rorthais
On Fri, 31 Jan 2020 15:12:30 +0900
Michael Paquier  wrote:

> On Thu, Jan 23, 2020 at 05:54:08PM +0100, Jehan-Guillaume de Rorthais wrote:
> > Please find the new version of the patch in attachment.
> 
> To be honest, I find the concept of this patch confusing.
> pg_stat_wal_receiver is just a one-one mapping with the shared memory
> state of the WAL receiver itself and show data *if and only if* a WAL
> receiver is running and iff it is ready to display any data, so I'd
> rather not change its nature

If you are talking about the pg_stat_wal_receiver view, I don't have a strong
opinion on this anyway, as I voted 0 when we discussed it. My current patch
doesn't alter its nature.

> and it has nothing to do with the state of WAL being applied by the startup
> process.

Indeed, I felt that adding these columns was a bad design, as stated in my
last mail. So I withdraw this.

> So this gets a -1 from me.

OK.

[...]
> Isn't what you are looking for here a different system view which maps
> directly to XLogCtl so as you can retrieve the status of the applied
> WAL at recovery anytime

My main objective is the received LSN/TLI. This is kept by WalRcv for streaming.
That's why pg_stat_wal_receiver looked like the right place for my need. But
again, you are right, I shouldn't have added the replayed bits to it.

> say pg_stat_recovery?

I finally dug into this path. I was hoping we could find something
simpler and lighter, but the other solutions we studied so far (thanks all for
your time) were all discarded [1].

A new pg_stat_get_recovery() view might be useful for various monitoring
purposes. After poking around in the code, it seems the patch would be bigger
than previous solutions, so I prefer discussing the specs first.

At first glance, I would imagine the following columns as a minimal patch:

* source: stream, archive or pg_wal
* write/flush/replayed LSN
* write/flush/replayed TLI
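
For illustration only, a query against such a hypothetical view could look
like this (neither pg_stat_recovery nor these column names exist yet, they
simply mirror the minimal list above):

  -- hypothetical view and column names, nothing is implemented yet
  SELECT source,                        -- stream, archive or pg_wal
         write_lsn, write_tli,
         flush_lsn, flush_tli,
         replayed_lsn, replayed_tli
    FROM pg_stat_recovery;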

This already has some heavy impact on the code. The source might be taken from
xlog.c:currentSource, so it should probably be moved into XLogCtl to be
accessible from any backend.

As the replayed LSN/TLI come from XLogCtl too, we would probably need a new
dedicated function to gather these fields plus currentSource under the same
info_lck.

Next, the write LSN/TLI is not accessible from WalRcv, only the flush one. So
either we do not include it, or we would probably need to replace
WalRcv->receivedUpto with the existing LogstreamResult.

Next, there are no stats about WAL-shipping recovery. Restoring a WAL segment
from the archive does not update any write/flush LSN/TLI. I wonder if the
wal_receiver stats and the WAL-shipping stats might be merged together in the
same refactored structure in shmem, as they might share a fair number of
fields? This would be pretty invasive in the code, but I feel it's heavier to
add another new struct in shmem just to track WAL-shipping stats whereas WalRcv
already exists there.

Now, I think the following additional fields might be useful for monitoring.
But as this is out of my original scope, I prefer discussing how useful they
might be:

* start_time: start time of the current source
* restored_count: total number of WAL segments restored. We might want to split
  this counter to track each method individually.
* last_received_time: last time we received something from the current source
* last_fail_time: last failure time, whatever the source
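
As a purely hypothetical illustration of how a monitoring tool might consume
these extra fields (same caveat as above, none of these columns exist):

  -- e.g. raise an alert when nothing has been received for too long
  SELECT source,
         now() - last_received_time AS time_since_last_received,
         restored_count,
         last_fail_time
    FROM pg_stat_recovery;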

Thanks for reading up to here!

Regards,


[1] even if I still hope the pg_stat_get_wal_receiver approach might gather
some more positive votes :)




Re: Fetching timeline during recovery

2020-01-30 Thread Michael Paquier
On Thu, Jan 23, 2020 at 05:54:08PM +0100, Jehan-Guillaume de Rorthais wrote:
> Please find the new version of the patch in attachment.

To be honest, I find the concept of this patch confusing.
pg_stat_wal_receiver is just a one-to-one mapping with the shared memory
state of the WAL receiver itself and shows data *if and only if* a WAL
receiver is running and ready to display any data, so I'd rather not
change its nature, and it has nothing to do with the state of the WAL
being applied by the startup process.  So this gets a -1 from me.

-   /*
-* No WAL receiver (or not ready yet), just return a tuple with NULL
-* values
-*/
-   if (pid == 0 || !ready_to_display)
-   PG_RETURN_NULL();
Note that this took a couple of attempts to get right, so I'd rather
not change this part of the logic on security grounds.

Isn't what you are looking for here a different system view which maps
directly to XLogCtl, so that you can retrieve the status of the applied
WAL at recovery anytime, say pg_stat_recovery?

It is the end of the CF, I am marking this patch as returned with
feedback for now.
--
Michael




Re: Fetching timeline during recovery

2020-01-23 Thread Jehan-Guillaume de Rorthais
On Tue, 07 Jan 2020 15:57:29 +0900 (JST)
Kyotaro Horiguchi  wrote:

> At Mon, 23 Dec 2019 15:38:16 +0100, Jehan-Guillaume de Rorthais
>  wrote in 
> >  1. we could decide to remove this filter to expose the data even when no
> > wal receiver is active. It's the same behavior than pg_stat_subscription
> > view. It could introduce regression from tools point of view, but adds some
> > useful information. I would vote 0 for it.  
> 
> A subscription exists since it is defined and regardless whether it is
> active or not. It is strange that we see a line in the view if
> replication is not configured. But it is reasonable to show if it is
> configured.  We could do that by checking PrimaryConnInfo. (I would
> vote +0.5 for it).

Thanks. I put this on hold for now, I'm waiting for some more opinions as
there's no strong position yet.

> >  2. we could extend it with new replayed lsn/tli fields. I would vote +1 for
> > it.  
> 
> +1. As of now a walsender lives for just one timeline, because it ends
> for disconnection from walsender when the master moves to a new
> timeline.  That being said, we already have the columns for TLI for
> both starting and received-up-to LSN so we would need it also for
> replayed LSN for a consistent looking.

I added applied_lsn and applied_tli to the pg_stat_get_wal_receiver function
output columns.

However, note that applying xlog is the responsibility of the startup process,
not the wal receiver. Is it OK that pg_stat_get_wal_receiver
returns stats not directly related to the wal receiver?

> The function is going to show "streaming" but conninfo is not shown
> until connection establishes. That state is currently hidden by the
> PID filtering of the view. We might need to keep the WALRCV_STARTING
> state until connection establishes.

Indeed, fixed.

> sender_host and sender_port have bogus values until connection is
> actually established when conninfo is changed. They as well as
> conninfo should be hidden until connection is established, too, I
> think.

Fixed as well.

Please find the new version of the patch in attachment.

Thank you for your review!
>From d1e5d6c33e193626f05911462e66a6c96366bfa6 Mon Sep 17 00:00:00 2001
From: Jehan-Guillaume de Rorthais 
Date: Tue, 31 Dec 2019 18:29:13 +0100
Subject: [PATCH] Always expose available stats from wal receiver

Makes admin function pg_stat_get_wal_receiver() return available data
from WalRcv in shared memory, whatever the state of the wal receiver
process.

This allows supervision or HA tools to gather various physical
replication stats even when the wal receiver is stopped. For example,
the latest timeline the wal receiver was receiving before shutting
down.

The behavior of the pg_stat_wal_receiver view has been kept to avoid
regressions: it returns no row when the wal receiver is shut down.
---
 src/backend/replication/walreceiver.c  | 61 --
 src/include/catalog/pg_proc.dat|  6 +--
 src/test/recovery/t/004_timeline_switch.pl | 12 -
 3 files changed, 48 insertions(+), 31 deletions(-)

diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c
index a5e85d32f3..e4273b7f55 100644
--- a/src/backend/replication/walreceiver.c
+++ b/src/backend/replication/walreceiver.c
@@ -226,7 +226,6 @@ WalReceiverMain(void)
 	}
 	/* Advertise our PID so that the startup process can kill us */
 	walrcv->pid = MyProcPid;
-	walrcv->walRcvState = WALRCV_STREAMING;
 
 	/* Fetch information required to start streaming */
 	walrcv->ready_to_display = false;
@@ -295,6 +294,7 @@ WalReceiverMain(void)
 		strlcpy((char *) walrcv->sender_host, sender_host, NI_MAXHOST);
 
 	walrcv->sender_port = sender_port;
+	walrcv->walRcvState = WALRCV_STREAMING;
 	walrcv->ready_to_display = true;
 	SpinLockRelease(&walrcv->mutex);
 
@@ -1368,6 +1368,8 @@ pg_stat_get_wal_receiver(PG_FUNCTION_ARGS)
 	TimeLineID	receive_start_tli;
 	XLogRecPtr	received_lsn;
 	TimeLineID	received_tli;
+	XLogRecPtr	applied_lsn;
+	TimeLineID	applied_tli;
 	TimestampTz last_send_time;
 	TimestampTz last_receipt_time;
 	XLogRecPtr	latest_end_lsn;
@@ -1379,6 +1381,7 @@ pg_stat_get_wal_receiver(PG_FUNCTION_ARGS)
 
 	/* Take a lock to ensure value consistency */
 	SpinLockAcquire(>mutex);
+	applied_lsn = GetXLogReplayRecPtr(&applied_tli);
 	pid = (int) WalRcv->pid;
 	ready_to_display = WalRcv->ready_to_display;
 	state = WalRcv->walRcvState;
@@ -1396,13 +1399,6 @@ pg_stat_get_wal_receiver(PG_FUNCTION_ARGS)
 	strlcpy(conninfo, (char *) WalRcv->conninfo, sizeof(conninfo));
 	SpinLockRelease(&WalRcv->mutex);
 
-	/*
-	 * No WAL receiver (or not ready yet), just return a tuple with NULL
-	 * values
-	 */
-	if (pid == 0 || !ready_to_display)
-		PG_RETURN_NULL();
-
 	/* determine result type */
 	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
 		elog(ERROR, "return type must be a row type");
@@ -1411,7 +1407,10 @@ pg_stat_get_wal_receiver(PG_FUNCTION_ARGS)
 	nulls = palloc0(sizeof(bool) * tupdesc->natts);
 
 	/* Fetch values 

Re: Fetching timeline during recovery

2020-01-06 Thread Kyotaro Horiguchi
At Fri, 3 Jan 2020 16:11:38 +0100, Jehan-Guillaume de Rorthais 
 wrote in 
> Hi,
> 
> On Mon, 23 Dec 2019 15:38:16 +0100
> Jehan-Guillaume de Rorthais  wrote:
> [...]
> > My idea would be to return a row from pg_stat_get_wal_receiver() as soon as
> > a wal receiver has been replicating during the uptime of the standby, no
> > matter if there's one currently working or not. If no wal receiver is 
> > active,
> > the "pid" field would be NULL and the "status" would reports eg. "inactive".
> > All other fields would report their last known value as they are kept in
> > shared memory WalRcv struct.
> 
> Please, find in attachment a patch implementing the above proposal.

At Mon, 23 Dec 2019 15:38:16 +0100, Jehan-Guillaume de Rorthais 
 wrote in 
>  1. we could decide to remove this filter to expose the data even when no wal
> receiver is active. It's the same behavior than pg_stat_subscription view.
> It could introduce regression from tools point of view, but adds some
> useful information. I would vote 0 for it.

A subscription exists once it is defined, regardless of whether it is
active or not. It would be strange to see a line in the view if
replication is not configured, but it is reasonable to show one if it is
configured.  We could do that by checking PrimaryConnInfo. (I would
vote +0.5 for it).

>  2. we could extend it with new replayed lsn/tli fields. I would vote +1 for
> it.

+1. As of now a walsender lives for just one timeline, because streaming ends
with a disconnection from the walsender when the master moves to a new
timeline.  That being said, we already have the columns for the TLI of both
the starting and received-up-to LSNs, so we would also need one for the
replayed LSN, for a consistent look.

The function is going to show "streaming" but conninfo is not shown
until the connection is established. That state is currently hidden by the
PID filtering of the view. We might need to keep the WALRCV_STARTING
state until the connection is established.

sender_host and sender_port have bogus values until the connection is
actually established, when conninfo is updated. They, as well as
conninfo, should be hidden until the connection is established, too, I
think.

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: Fetching timeline during recovery

2020-01-03 Thread Jehan-Guillaume de Rorthais
Hi,

On Mon, 23 Dec 2019 15:38:16 +0100
Jehan-Guillaume de Rorthais  wrote:
[...]
> My idea would be to return a row from pg_stat_get_wal_receiver() as soon as
> a wal receiver has been replicating during the uptime of the standby, no
> matter if there's one currently working or not. If no wal receiver is active,
> the "pid" field would be NULL and the "status" would reports eg. "inactive".
> All other fields would report their last known value as they are kept in
> shared memory WalRcv struct.

Please, find in attachment a patch implementing the above proposal.

Regards,
>From 5641d8c5d46968873d8b8e1d3c1c0de10551741e Mon Sep 17 00:00:00 2001
From: Jehan-Guillaume de Rorthais 
Date: Tue, 31 Dec 2019 18:29:13 +0100
Subject: [PATCH] Always expose available stats from wal receiver

Makes admin function pg_stat_get_wal_receiver() return available data
from WalRcv in shared memory, whatever the state of the wal receiver
process.

This allows supervision or HA tools to gather various physical
replication stats even when the wal receiver is stopped. For example,
the latest timeline the wal receiver was receiving before shutting
down.

The behavior of the pg_stat_wal_receiver view has been kept to avoid
regressions: it returns no row when the wal receiver is shut down.
---
 src/backend/replication/walreceiver.c  | 14 +-
 src/test/recovery/t/004_timeline_switch.pl | 12 +++-
 2 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c
index a4de8a9cd8..1207f145b8 100644
--- a/src/backend/replication/walreceiver.c
+++ b/src/backend/replication/walreceiver.c
@@ -1354,13 +1354,6 @@ pg_stat_get_wal_receiver(PG_FUNCTION_ARGS)
 	strlcpy(conninfo, (char *) WalRcv->conninfo, sizeof(conninfo));
 	SpinLockRelease(&WalRcv->mutex);
 
-	/*
-	 * No WAL receiver (or not ready yet), just return a tuple with NULL
-	 * values
-	 */
-	if (pid == 0 || !ready_to_display)
-		PG_RETURN_NULL();
-
 	/* determine result type */
 	if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
 		elog(ERROR, "return type must be a row type");
@@ -1369,7 +1362,10 @@ pg_stat_get_wal_receiver(PG_FUNCTION_ARGS)
 	nulls = palloc0(sizeof(bool) * tupdesc->natts);
 
 	/* Fetch values */
-	values[0] = Int32GetDatum(pid);
+	if (pid == 0)
+		nulls[0] = true;
+	else
+		values[0] = Int32GetDatum(pid);
 
 	if (!is_member_of_role(GetUserId(), DEFAULT_ROLE_READ_ALL_STATS))
 	{
@@ -1422,7 +1418,7 @@ pg_stat_get_wal_receiver(PG_FUNCTION_ARGS)
 			nulls[12] = true;
 		else
 			values[12] = Int32GetDatum(sender_port);
-		if (*conninfo == '\0')
+		if (*conninfo == '\0'  || !ready_to_display)
 			nulls[13] = true;
 		else
 			values[13] = CStringGetTextDatum(conninfo);
diff --git a/src/test/recovery/t/004_timeline_switch.pl b/src/test/recovery/t/004_timeline_switch.pl
index 7e952d3667..cdcdd2d981 100644
--- a/src/test/recovery/t/004_timeline_switch.pl
+++ b/src/test/recovery/t/004_timeline_switch.pl
@@ -6,7 +6,7 @@ use warnings;
 use File::Path qw(rmtree);
 use PostgresNode;
 use TestLib;
-use Test::More tests => 2;
+use Test::More tests => 4;
 
 $ENV{PGDATABASE} = 'postgres';
 
@@ -37,6 +37,11 @@ $node_master->safe_psql('postgres',
 $node_master->wait_for_catchup($node_standby_1, 'replay',
 	$node_master->lsn('write'));
 
+# Check received timeline from pg_stat_get_wal_receiver() on standby 1
+my $node_standby_1_lsn = $node_standby_1->safe_psql('postgres',
+	'SELECT received_tli FROM pg_stat_get_wal_receiver()');
+is($node_standby_1_lsn, 1, 'check received timeline on standby 1');
+
 # Stop and remove master
 $node_master->teardown_node;
 
@@ -66,3 +71,8 @@ $node_standby_1->wait_for_catchup($node_standby_2, 'replay',
 my $result =
   $node_standby_2->safe_psql('postgres', "SELECT count(*) FROM tab_int");
 is($result, qq(2000), 'check content of standby 2');
+
+# Check received timeline from pg_stat_get_wal_receiver() on standby 2
+my $node_standby_2_lsn = $node_standby_2->safe_psql('postgres',
+	'SELECT received_tli FROM pg_stat_get_wal_receiver()');
+is($node_standby_2_lsn, 2, 'check received timeline on standby 2');
-- 
2.20.1



Re: Fetching timeline during recovery

2019-12-23 Thread Jehan-Guillaume de Rorthais
On Mon, 23 Dec 2019 12:36:56 +0900
Michael Paquier  wrote:

> On Fri, Dec 20, 2019 at 11:14:28AM +0100, Jehan-Guillaume de Rorthais wrote:
> > Yes, that would be great but sadly, it would introduce a regression on
> > various tools relying on them. At least, the one doing "select *" or most
> > probably "select func()".
> > 
> > But anyway, adding 5 funcs is not a big deal neither. Too bad they are so
> > close to existing ones though.  
> 
> Consistency of the data matters a lot if we want to build reliable
> tools on top of them in case someone would like to compare the various
> modes, and using different functions for those fields creates locking
> issues (somewhat the point of Fujii-san upthread?).

To sum up: the original patch was about fetching the current timeline of a
standby from memory, without relying on the asynchronously updated controlfile
or on pg_stat_get_wal_receiver(), which only shows data when the wal receiver
is running.

Fujii-san was pointing out we must fetch both the received LSN and its timeline
with the same lock so they are consistent.

Michael is now discussing fetching multiple LSNs and their timelines,
while keeping them consistent, e.g. received+tli and applied+tli. Thank you for
pointing this out.

I thought about various ways to deal with this concern and would like to
discuss/defend a new option based on the existing pg_stat_get_wal_receiver()
function. The only problem I'm facing with this function is that it returns
a full NULL record if no wal receiver is active.

My idea would be to return a row from pg_stat_get_wal_receiver() as soon as
a wal receiver has been replicating during the uptime of the standby, no
matter if there's one currently working or not. If no wal receiver is active,
the "pid" field would be NULL and the "status" would report e.g. "inactive".
All other fields would report their last known value, as they are kept in the
shared memory WalRcv struct.

From the monitoring and HA point of view, we are now able to know that a wal
receiver existed, the LSN at which it stopped, and on what timeline, all
consistent under the same lock. That answers my original goal. We could extend
this with two more fields about the replayed LSN and timeline to address
Michael's last concern if we decide it's really needed (and I think it's a
valid concern for e.g. monitoring tools).

There's some more potential discussion about the pg_stat_wal_receiver view,
which relies on pg_stat_get_wal_receiver(). My proposal does not introduce a
regression with it, as the view already filters out NULL data using "WHERE
s.pid IS NOT NULL". But:

 1. we could decide to remove this filter to expose the data even when no wal
    receiver is active. It's the same behavior as the pg_stat_subscription
    view. It could introduce a regression from the tools' point of view, but
    adds some useful information. I would vote 0 for it (a rough sketch of the
    resulting view definition follows this list).
 2. we could extend it with new replayed lsn/tli fields. I would vote +1 for
    it.
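
To make option 1 concrete, the view definition in system_views.sql would
roughly become the following (a sketch only; the column list matches the
example outputs shown below, simply without the trailing filter):

  CREATE VIEW pg_stat_wal_receiver AS
      SELECT s.pid, s.status,
             s.receive_start_lsn, s.receive_start_tli,
             s.received_lsn, s.received_tli,
             s.last_msg_send_time, s.last_msg_receipt_time,
             s.latest_end_lsn, s.latest_end_time,
             s.slot_name, s.sender_host, s.sender_port, s.conninfo
        FROM pg_stat_get_wal_receiver() s;
        -- i.e. the current definition minus "WHERE s.pid IS NOT NULL"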

On the "dark" side of this proposal, we do not deal with the primary side. We
still have no way to fetch various lsn+tli from the WAL Writer. However, I
included pg_current_wal_lsn_tl() in my original patch only for homogeneity
reason and the discussion slipped on this side while paying attention to the
user facing function logic and homogeneity. If this discussion decide this is a
useful feature, I think it could be addressed in another patch (and I volunteer
to deal with it).

Below is a summary of this 6th proposition, with examples. When the wal
receiver has never started (same as today):

  -[ RECORD 1 ]-+--
  pid   | Ø
  status| Ø
  receive_start_lsn | Ø
  receive_start_tli | Ø
  received_lsn  | Ø
  received_tli  | Ø
  last_msg_send_time| Ø
  last_msg_receipt_time | Ø
  latest_end_lsn| Ø
  latest_end_time   | Ø
  slot_name | Ø
  sender_host   | Ø
  sender_port   | Ø
  conninfo  | Ø

When wal receiver is active:

  $ select * from  pg_stat_get_wal_receiver();
  -[ RECORD 1 ]-+-
  pid   | 8576
  status| streaming
  receive_start_lsn | 0/400
  receive_start_tli | 1
  received_lsn  | 0/4000148
  received_tli  | 1
  last_msg_send_time| 2019-12-23 12:28:52.588738+01
  last_msg_receipt_time | 2019-12-23 12:28:52.588839+01
  latest_end_lsn| 0/4000148
  latest_end_time   | 2019-12-23 11:15:43.431657+01
  slot_name | Ø
  sender_host   | /tmp
  sender_port   | 15441
  conninfo  | port=15441 application_name=s

When wal receiver is not running and shared memory WalRcv is reporting past
activity:

  $ select * from  pg_stat_get_wal_receiver();
  -[ RECORD 1 ]-+-
  pid   | Ø
  status| inactive
  receive_start_lsn | 0/400
  receive_start_tli | 1
  received_lsn  | 0/4000148
  received_tli  

Re: Fetching timeline during recovery

2019-12-22 Thread Michael Paquier
On Fri, Dec 20, 2019 at 11:14:28AM +0100, Jehan-Guillaume de Rorthais wrote:
> Yes, that would be great but sadly, it would introduce a regression on various
> tools relying on them. At least, the one doing "select *" or most
> probably "select func()".
> 
> But anyway, adding 5 funcs is not a big deal neither. Too bad they are so 
> close
> to existing ones though.

Consistency of the data matters a lot if we want to build reliable
tools on top of them, in case someone would like to compare the various
modes, and using different functions for those fields creates locking
issues (somewhat the point of Fujii-san upthread?).  If nobody likes
the approach of one function, returning one row, taking as input the
mode wanted, then I would not really object to Stephen's idea on the
matter about having a multi-column function returning one row.

>> Right. It is a restriction of polymorphic functions. It is in the same
>> relation with pg_stop_backup() and pg_stop_backup(true).

(pg_current_wal_lsn & co talk about LSNs, not TLIs).
--
Michael




Re: Fetching timeline during recovery

2019-12-20 Thread Jehan-Guillaume de Rorthais
On Fri, 20 Dec 2019 13:41:25 +0900 (JST)
Kyotaro Horiguchi  wrote:

> At Fri, 20 Dec 2019 00:35:19 +0100, Jehan-Guillaume de Rorthais
>  wrote in 
> > On Fri, 13 Dec 2019 16:12:55 +0900
> > Michael Paquier  wrote:  
> 
> The first one;
> 
> > > I mentioned a SRF function which takes an input argument, but that
> > > makes no sense.  What I would prefer having is just having one
> > > function, returning one row (LSN, TLI), using in input one argument to
> > > extract the WAL information the caller wants with five possible cases
> > > (write, insert, flush, receive, replay).  
> > 
> > It looks odd when we look at other five existing functions of the same
> > family but without the tli. And this user interaction with admin function
> > is quite different of what we are used to with other admin funcs. But
> > mostly, when I think of such function, I keep thinking this parameter
> > should be a WHERE clause after a SRF function.
> > 
> > -1  
> 
> It is realted to the third one, it may be annoying that the case names
> cannot have an aid of psql-completion..

indeed.

> The second one;
> 
> > > Then, what you are referring to is one function which returns all
> > > (LSN,TLI) for the five cases (write, insert, etc.), so it would return
> > > one row with 10 columns, with NULL mapping to the values which have no
> > > meaning (like replay on a primary).  
> > 
> > This would looks like some other pg_stat_* functions, eg.
> > pg_stat_get_archiver. I'm OK with this. This could even be turned as a
> > catalog view.
> > 
> > However, what's the point of gathering all the values eg from a production
> > cluster? Is it really useful to compare current/insert/flush LSN from wal
> > writer?  
> 
> There is a period where pg_controldata shows the previous TLI after
> promotion. It's useful if we can read the up-to-date TLI from live
> standby. I thought that this project is for that case..

I was not asking about the usefulness of LSN+TLI itself.
I was wondering about the use case for gathering all 6 cols, current+tli,
insert+tli and flush+tli, from a production/primary cluster.

[...]
> > As a fourth possibility, as I badly explained my last implementation
> > details, I still hope we can keep it in the loop here. Just overload
> > existing functions with ones that takes a boolean as parameter and add the
> > TLI as a second field, eg.:
> > 
> > Name   | Result type  | Argument data types
> > ---+--+---
> > pg_current_wal_lsn | pg_lsn   |
> > pg_current_wal_lsn | SETOF record | with_tli bool, OUT lsn pg_lsn, OUT tli
> > int  
> 
> I prefer this one, in the sense of similarity with existing functions.

thanks

> > And the fifth one, implementing brand new functions:
> > 
> >  pg_current_wal_lsn_tli
> >  pg_current_wal_insert_lsn_tli
> >  pg_current_wal_flush_lsn_tli
> >  pg_last_wal_receive_lsn_tli
> >  pg_last_wal_replay_lsn_tli  
> 
> M We should remove exiting ones instead? (Of couse we don't,
> though.)

Yes, that would be great but sadly, it would introduce a regression on various
tools relying on them. At least, the ones doing "select *" or most
probably "select func()".

But anyway, adding 5 funcs is not a big deal either. Too bad they are so close
to existing ones though.

> > > I actually prefer the first one, and you mentioned the second.  But
> > > there could be a point in doing the third one.  An advantage of the
> > > second and third ones is that you may be able to get a consistent view
> > > of all the data, but it means holding locks to look at the values a
> > > bit longer.  Let's see what others think.  
> > 
> > I like the fourth one, but I was not able to return only one field if given
> > parameter is false or NULL. Giving false as argument to these funcs has no
> > meaning compared to the original one without arg. I end up with this
> > solution because I was worried about adding five more funcs really close to
> > some existing one.  
> 
> Right. It is a restriction of polymorphic functions. It is in the same
> relation with pg_stop_backup() and pg_stop_backup(true).

indeed.





Re: Fetching timeline during recovery

2019-12-19 Thread Kyotaro Horiguchi
At Fri, 20 Dec 2019 00:35:19 +0100, Jehan-Guillaume de Rorthais 
 wrote in 
> On Fri, 13 Dec 2019 16:12:55 +0900
> Michael Paquier  wrote:

The first one;

> > I mentioned a SRF function which takes an input argument, but that
> > makes no sense.  What I would prefer having is just having one
> > function, returning one row (LSN, TLI), using in input one argument to
> > extract the WAL information the caller wants with five possible cases
> > (write, insert, flush, receive, replay).
> 
> It looks odd when we look at other five existing functions of the same family
> but without the tli. And this user interaction with admin function is quite
> different of what we are used to with other admin funcs. But mostly, when I
> think of such function, I keep thinking this parameter should be a WHERE
> clause after a SRF function.
> 
> -1

It is related to the third one; it may be annoying that the case names
cannot get any help from psql tab-completion.


The second one;

> > Then, what you are referring to is one function which returns all
> > (LSN,TLI) for the five cases (write, insert, etc.), so it would return
> > one row with 10 columns, with NULL mapping to the values which have no
> > meaning (like replay on a primary).
> 
> This would looks like some other pg_stat_* functions, eg. 
> pg_stat_get_archiver.
> I'm OK with this. This could even be turned as a catalog view.
> 
> However, what's the point of gathering all the values eg from a production
> cluster? Is it really useful to compare current/insert/flush LSN from wal
> writer?

There is a period where pg_controldata shows the previous TLI after
promotion. It's useful if we can read the up-to-date TLI from a live
standby. I thought that this project was for that case.

> It's easier to answer from a standby point of view as the lag between received
> and replayed might be interesting to report in various situations.


The third one;

> > And on top of that we have a third possibility: one SRF function
> > returning 5 rows with three attributes (mode, LSN, TLI), where mode
> > corresponds to one value in the set {write, insert, etc.}.
> 
> I prefer the second one. Just select the field(s) you need, no need WHERE
> clause, similar to some other stats function.
> 
> -1

It might be clean in a sense, but I can't come up with a case where
the format is useful.

Anyway, the same as with the first one: the case names (write, insert,
flush, receive, replay) come from two different machineries and
showing them in a row could be confusing.


> As a fourth possibility, as I badly explained my last implementation details, 
> I
> still hope we can keep it in the loop here. Just overload existing functions
> with ones that takes a boolean as parameter and add the TLI as a second field,
> eg.:
> 
> Name   | Result type  | Argument data types
> ---+--+---
> pg_current_wal_lsn | pg_lsn   |
> pg_current_wal_lsn | SETOF record | with_tli bool, OUT lsn pg_lsn, OUT tli int

I prefer this one, in the sense of similarity with existing functions.

> And the fifth one, implementing brand new functions:
> 
>  pg_current_wal_lsn_tli
>  pg_current_wal_insert_lsn_tli
>  pg_current_wal_flush_lsn_tli
>  pg_last_wal_receive_lsn_tli
>  pg_last_wal_replay_lsn_tli

Mmm... We should remove the existing ones instead? (Of course we don't,
though.)

> > I actually prefer the first one, and you mentioned the second.  But
> > there could be a point in doing the third one.  An advantage of the
> > second and third ones is that you may be able to get a consistent view
> > of all the data, but it means holding locks to look at the values a
> > bit longer.  Let's see what others think.
> 
> I like the fourth one, but I was not able to return only one field if given
> parameter is false or NULL. Giving false as argument to these funcs has no
> meaning compared to the original one without arg. I end up with this solution
> because I was worried about adding five more funcs really close to some
> existing one.

Right. It is a restriction of polymorphic functions. It is the same
situation as with pg_stop_backup() and pg_stop_backup(true).

> Fifth one is more consistent with what we already have.

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: Fetching timeline during recovery

2019-12-19 Thread Jehan-Guillaume de Rorthais
On Fri, 13 Dec 2019 16:12:55 +0900
Michael Paquier  wrote:

> On Wed, Dec 11, 2019 at 10:45:25AM -0500, Stephen Frost wrote:
> > I'm confused- wouldn't the above approach be a function that's returning
> > only one row, if you had a bunch of columns and then had NULL values for
> > those cases that didn't apply..?  Or, if you were thinking about the SRF
> > approach that you suggested, you could use a WHERE clause to make it
> > only one row...  Though I can see how it's nicer to just have one row in
> > some cases which is why I was suggesting the "bunch of columns"
> > approach.  
> 
> Oh, sorry.  I see the confusion now and that's my fault.  In
> https://www.postgresql.org/message-id/20191211052002.gk72...@paquier.xyz
> I mentioned a SRF function which takes an input argument, but that
> makes no sense.  What I would prefer having is just having one
> function, returning one row (LSN, TLI), using in input one argument to
> extract the WAL information the caller wants with five possible cases
> (write, insert, flush, receive, replay).

It looks odd when we look at the other five existing functions of the same
family, which lack the TLI. And this user interaction with an admin function
is quite different from what we are used to with other admin funcs. But
mostly, when I think of such a function, I keep thinking this parameter should
be a WHERE clause applied to a SRF function.

-1

> Then, what you are referring to is one function which returns all
> (LSN,TLI) for the five cases (write, insert, etc.), so it would return
> one row with 10 columns, with NULL mapping to the values which have no
> meaning (like replay on a primary).

This would look like some other pg_stat_* functions, e.g. pg_stat_get_archiver.
I'm OK with this. It could even be turned into a catalog view.

However, what's the point of gathering all the values, e.g. from a production
cluster? Is it really useful to compare the current/insert/flush LSNs from the
wal writer?

It's easier to answer from a standby's point of view, as the lag between
received and replayed might be interesting to report in various situations.

> And on top of that we have a third possibility: one SRF function
> returning 5 rows with three attributes (mode, LSN, TLI), where mode
> corresponds to one value in the set {write, insert, etc.}.

I prefer the second one. Just select the field(s) you need, no need for a
WHERE clause, similar to some other stats functions.

-1


As a fourth possibility (I badly explained my last implementation details, so I
still hope we can keep it in the loop here): just overload the existing
functions with ones that take a boolean as parameter and add the TLI as a
second field, e.g.:

Name   | Result type  | Argument data types
---+--+---
pg_current_wal_lsn | pg_lsn   |
pg_current_wal_lsn | SETOF record | with_tli bool, OUT lsn pg_lsn, OUT tli int
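
As a usage sketch of this overloaded form (hypothetical, following the
signature above; the column names lsn and tli are only the proposed OUT
parameters):

  -- existing behavior, unchanged: a single pg_lsn value
  SELECT pg_current_wal_lsn();

  -- proposed overload: one row carrying both the LSN and its timeline
  SELECT lsn, tli FROM pg_current_wal_lsn(true);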


And the fifth one, implementing brand new functions:

 pg_current_wal_lsn_tli
 pg_current_wal_insert_lsn_tli
 pg_current_wal_flush_lsn_tli
 pg_last_wal_receive_lsn_tli
 pg_last_wal_replay_lsn_tli

> I actually prefer the first one, and you mentioned the second.  But
> there could be a point in doing the third one.  An advantage of the
> second and third ones is that you may be able to get a consistent view
> of all the data, but it means holding locks to look at the values a
> bit longer.  Let's see what others think.

I like the fourth one, but I was not able to return only one field if the
given parameter is false or NULL. Giving false as an argument to these funcs
has no meaning compared to the original one without an argument. I ended up
with this solution because I was worried about adding five more funcs really
close to some existing ones.

The fifth one is more consistent with what we already have.

Thanks again.

Regards,




Re: Fetching timeline during recovery

2019-12-19 Thread Jehan-Guillaume de Rorthais
On Wed, 11 Dec 2019 14:20:02 +0900
Michael Paquier  wrote:

> On Thu, Sep 26, 2019 at 07:20:46PM +0200, Jehan-Guillaume de Rorthais wrote:
> > If this solution is accepted, some other function of the same family might
> > be good candidates as well, for the sake of homogeneity:
> > 
> > * pg_current_wal_insert_lsn
> > * pg_current_wal_flush_lsn
> > * pg_last_wal_replay_lsn
> > 
> > However, I'm not sure how useful this would be.
> > 
> > Thanks again for your time, suggestions and review!  
> 
> +{ oid => '3435', descr => 'current wal flush location',
> +  proname => 'pg_last_wal_receive_lsn', provolatile => 'v',
> proisstrict => 'f',
> This description is incorrect.

Indeed. And the one for pg_current_wal_lsn(bool) as well.

> And please use OIDs in the range of 8000~ for patches in
> development.  You could just use src/include/catalog/unused_oids which
> would point out a random range.

Thank you for this information, I wasn't aware.

> +   if (recptr == 0) {
> +   nulls[0] = 1;
> +   nulls[1] = 1;
> +   }
> The indendation of the code is incorrect, these should use actual
> booleans and recptr should be InvalidXLogRecPtr (note also the
> existence of the macro XLogRecPtrIsInvalid).  Just for the style.

Fixed on my side. Thanks.

> As said in the last emails exchanged on this thread, I don't see how
> you cannot use multiple functions which have different meaning
> depending on if the cluster is a primary or a standby knowing that we
> have two different concepts of WAL when at recovery: the received
> LSN and the replayed LSN, and three concepts for primaries (insert,
> current, flush).  

As I wrote in my previous email, existing functions could be overloaded
as well for the sake of homogeneity. So the five of them would have similar
behavior/API.

> I agree as well with the point of Fujii-san about
> not returning the TLI and the LSN across different functions as this
> opens the door for a risk of inconsistency for the data received by
> the client.

My last patch fixed that, indeed.

> + * When the first parameter (variable 'with_tli') is true, returns the
> current
> + * timeline as second field. If false, second field is null.
> I don't see much the point of having this input parameter which
> determines the NULL-ness of one of the result columns, and I think
> that you had better use a completely different function name for each
> one of them instead of enforcing the functions.  Let's remember that a
> lot of tools use the existing functions directly in the SELECT clause
> for LSN calculations, which is just a 64-bit integer *without* a
> timeline assigned to it.  However your patch mixes both concepts by
> using pg_current_wal_lsn.

Sorry, I realize I was not clear enough about implementation details.
My latest patch does **not** introduce a regression for existing tools. If you
do not pass any parameter, the behavior is the same, only one column:

  # primary
  $ cat <
> So we could do more with the introduction of five new functions which 
> allow to grab the LSN and the TLI in use for replay, received, insert,
> write and flush positions:
> - pg_current_wal_flush_info
> - pg_current_wal_insert_info
> - pg_current_wal_info
> - pg_last_wal_receive_info
> - pg_last_wal_replay_info

I could go this way if you prefer, maybe using _tli as suffix instead of _info,
as this is the only new info added. I think it feels redundant with the
original funcs, but it might be the simplest solution.

> I would be actually tempted to do the following: one single SRF
> function, say pg_wal_info which takes a text argument in input with
> the following values: flush, write, insert, receive, replay.  Thinking
> more about it that would be rather neat, and more extensible than the
> rest discussed until now.  See for example PostgresNode::lsn.

I'll answer in your other mail that summarizes the other possibilities.

Thanks!




Re: Fetching timeline during recovery

2019-12-12 Thread Michael Paquier
On Wed, Dec 11, 2019 at 10:45:25AM -0500, Stephen Frost wrote:
> I'm confused- wouldn't the above approach be a function that's returning
> only one row, if you had a bunch of columns and then had NULL values for
> those cases that didn't apply..?  Or, if you were thinking about the SRF
> approach that you suggested, you could use a WHERE clause to make it
> only one row...  Though I can see how it's nicer to just have one row in
> some cases which is why I was suggesting the "bunch of columns"
> approach.

Oh, sorry.  I see the confusion now and that's my fault.  In
https://www.postgresql.org/message-id/20191211052002.gk72...@paquier.xyz
I mentioned a SRF function which takes an input argument, but that
makes no sense.  What I would prefer is just having one
function, returning one row (LSN, TLI), taking one input argument to
select the WAL information the caller wants, with five possible cases
(write, insert, flush, receive, replay).

Then, what you are referring to is one function which returns all
(LSN,TLI) for the five cases (write, insert, etc.), so it would return
one row with 10 columns, with NULL mapping to the values which have no
meaning (like replay on a primary).

And on top of that we have a third possibility: one SRF function
returning 5 rows with three attributes (mode, LSN, TLI), where mode
corresponds to one value in the set {write, insert, etc.}.

I actually prefer the first one, and you mentioned the second.  But
there could be a point in doing the third one.  An advantage of the
second and third ones is that you may be able to get a consistent view
of all the data, but it means holding locks to look at the values a
bit longer.  Let's see what others think.
--
Michael




Re: Fetching timeline during recovery

2019-12-11 Thread Stephen Frost
Greetings,

* Michael Paquier (mich...@paquier.xyz) wrote:
> On Wed, Dec 11, 2019 at 10:16:29AM -0500, Stephen Frost wrote:
> > I've not followed this discussion very closely but I agree entirely that
> > it's really nice to have the timeline be able to be queried in a more
> > timely manner than asking through pg_control_checkpoint() gives you.
> > 
> > I'm not sure about adding a text argument to such a function though, I
> > would think you'd either have multiple rows if it's an SRF that gives
> > you the information on each row and allows a user to filter with a WHERE
> > clause, or do something like what pg_stat_replication has and just have
> > a bunch of columns.
> 
> With a NULL added for the values which cannot be defined then, like
> trying to use the function on a primary for the fields which can only
> show up at recovery?  

Sure, the function would only return those values that make sense for
the state that the system is in.

> That would be possible, still my heart tells me
> that a function returning one row is a more natural approach for
> this stuff.  I may be under too much used to what we have in the TAP
> tests though.

I'm confused- wouldn't the above approach be a function that's returning
only one row, if you had a bunch of columns and then had NULL values for
those cases that didn't apply..?  Or, if you were thinking about the SRF
approach that you suggested, you could use a WHERE clause to make it
only one row...  Though I can see how it's nicer to just have one row in
some cases which is why I was suggesting the "bunch of columns"
approach.

Thanks,

Stephen




Re: Fetching timeline during recovery

2019-12-11 Thread Michael Paquier
On Wed, Dec 11, 2019 at 10:16:29AM -0500, Stephen Frost wrote:
> I've not followed this discussion very closely but I agree entirely that
> it's really nice to have the timeline be able to be queried in a more
> timely manner than asking through pg_control_checkpoint() gives you.
> 
> I'm not sure about adding a text argument to such a function though, I
> would think you'd either have multiple rows if it's an SRF that gives
> you the information on each row and allows a user to filter with a WHERE
> clause, or do something like what pg_stat_replication has and just have
> a bunch of columns.

With a NULL added for the values which cannot be defined then, like
trying to use the function on a primary for the fields which can only
show up at recovery?  That would be possible, but still my heart tells me
that a function returning one row is a more natural approach for
this stuff.  I may just be too used to what we have in the TAP
tests though.
--
Michael




Re: Fetching timeline during recovery

2019-12-11 Thread Stephen Frost
Greetings,

* Michael Paquier (mich...@paquier.xyz) wrote:
> I would be actually tempted to do the following: one single SRF
> function, say pg_wal_info which takes a text argument in input with
> the following values: flush, write, insert, receive, replay.  Thinking
> more about it that would be rather neat, and more extensible than the
> rest discussed until now.  See for example PostgresNode::lsn.

I've not followed this discussion very closely but I agree entirely that
it's really nice to have the timeline be able to be queried in a more
timely manner than asking through pg_control_checkpoint() gives you.

I'm not sure about adding a text argument to such a function though, I
would think you'd either have multiple rows if it's an SRF that gives
you the information on each row and allows a user to filter with a WHERE
clause, or do something like what pg_stat_replication has and just have
a bunch of columns.

Given that we've already gone with the "bunch of columns" approach
elsewhere, it seems like that approach would be more consistent.

Thanks,

Stephen




Re: Fetching timeline during recovery

2019-12-10 Thread Michael Paquier
On Thu, Sep 26, 2019 at 07:20:46PM +0200, Jehan-Guillaume de Rorthais wrote:
> If this solution is accepted, some other function of the same family might be
> good candidates as well, for the sake of homogeneity:
> 
> * pg_current_wal_insert_lsn
> * pg_current_wal_flush_lsn
> * pg_last_wal_replay_lsn
> 
> However, I'm not sure how useful this would be.
> 
> Thanks again for your time, suggestions and review!

+{ oid => '3435', descr => 'current wal flush location',
+  proname => 'pg_last_wal_receive_lsn', provolatile => 'v',
proisstrict => 'f',
This description is incorrect.

And please use OIDs in the range of 8000~ for patches in
development.  You could just use src/include/catalog/unused_oids which
would point out a random range.

+   if (recptr == 0) {
+   nulls[0] = 1;
+   nulls[1] = 1;
+   }
The indentation of the code is incorrect; these should use actual
booleans, and recptr should be InvalidXLogRecPtr (note also the
existence of the macro XLogRecPtrIsInvalid).  Just for the style.

As said in the last emails exchanged on this thread, I don't see how
you cannot use multiple functions which have different meanings
depending on whether the cluster is a primary or a standby, knowing that we
have two different concepts of WAL when at recovery: the received
LSN and the replayed LSN, and three concepts for primaries (insert,
current, flush).  I agree as well with Fujii-san's point about
not returning the TLI and the LSN across different functions, as this
opens the door to a risk of inconsistency in the data received by
the client.

+ * When the first parameter (variable 'with_tli') is true, returns the current
+ * timeline as second field. If false, second field is null.
I don't see much point in having this input parameter which
determines the NULL-ness of one of the result columns, and I think
that you had better use a completely different function name for each
one of them instead of overloading the functions.  Let's remember that a
lot of tools use the existing functions directly in the SELECT clause
for LSN calculations, and an LSN is just a 64-bit integer *without* a
timeline assigned to it.  However your patch mixes both concepts by
using pg_current_wal_lsn.

So we could do more with the introduction of five new functions which
allow grabbing the LSN and the TLI in use for the replay, received, insert,
write and flush positions:
- pg_current_wal_flush_info
- pg_current_wal_insert_info
- pg_current_wal_info
- pg_last_wal_receive_info
- pg_last_wal_replay_info

I would actually be tempted to do the following: one single SRF
function, say pg_wal_info, which takes a text argument as input with
the following values: flush, write, insert, receive, replay.  Thinking
more about it, that would be rather neat, and more extensible than the
rest discussed until now.  See for example PostgresNode::lsn.
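
For illustration, such a function could presumably be used like this
(pg_wal_info does not exist; the call form and the lsn/tli output column
names only sketch the proposal above):

  -- one row (lsn, tli) per requested mode
  SELECT lsn, tli FROM pg_wal_info('replay');
  SELECT lsn, tli FROM pg_wal_info('receive');
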
--
Michael




Re: Fetching timeline during recovery

2019-09-26 Thread Jehan-Guillaume de Rorthais
On Mon, 9 Sep 2019 19:44:10 +0900
Fujii Masao  wrote:

> On Sat, Sep 7, 2019 at 12:06 AM Jehan-Guillaume de Rorthais
>  wrote:
> >
> > On Wed, 4 Sep 2019 00:32:03 +0900
> > Fujii Masao  wrote:
> >  
[...]
> Thanks for updating the patch!

Thank you for your review!

Please find in attachment a new version of the patch.

  0001-v5-Add-facilities-to-fetch-real-timeline-from-SQL.patch

> Should we add regression tests for these functions? For example,
> what about using these functions to check the timeline switch case,
> in src/test/recovery/t/004_timeline_switch.pl?

Indeed, I added 6 tests to this file.

> [...] 

Thank you for all other suggestions. They all make sense for v4 of the patch.
However, I removed pg_current_wal_tl() and pg_last_wal_received_tl() to explore
a patch paying attention to your next comment.

> I'm just imaging that some users want to use pg_last_wal_receive_lsn() and
> pg_last_wal_receive_tli() together to, e.g., get the name of WAL file received
> last. But there can be a corner case where the return values of
> pg_last_wal_receive_lsn() and of pg_last_wal_receive_tli() are inconsistent.
> This can happen because those values are NOT gotten within single lock.
> That is, each function takes each lock to get each value.
> 
> So, to avoid that corner case and get consistent WAL file name,
> we might want to have the function that gets both LSN and
> timeline ID of the last received WAL record within single lock
> (i.e., just uses GetWalRcvWriteRecPtr()) and returns them.
> Thought?

You are right.

So either I add some new functions or I overload the existing ones.

I was not convinced about adding two new functions very close to
pg_current_wal_lsn and pg_last_wal_receive_lsn but with a slightly different
name (e.g. suffixed with _tli?).

I chose to overload pg_current_wal_lsn and pg_last_wal_receive_lsn with
pg_current_wal_lsn(with_tli bool) and pg_last_wal_receive_lsn(with_tli bool).

Both functions return the record (lsn pg_lsn, timeline int4). If with_tli is
NULL or false, the timeline field is NULL.

Documentation is updated to reflect this.

Thoughts?

If this solution is accepted, some other functions of the same family might be
good candidates as well, for the sake of homogeneity:

* pg_current_wal_insert_lsn
* pg_current_wal_flush_lsn
* pg_last_wal_replay_lsn

However, I'm not sure how useful this would be.

Thanks again for your time, suggestions and review!

Regards,
>From bce1c4353ebea12d7f5f19bb18b1ea00acb37085 Mon Sep 17 00:00:00 2001
From: Jehan-Guillaume de Rorthais 
Date: Thu, 25 Jul 2019 19:36:40 +0200
Subject: [PATCH] Add facilities to fetch real timeline from SQL

The only way to fetch the timeline from SQL was to query
pg_control_checkpoint() which read the controldata file from disk.

This is fine on a primary cluster, because its controldata file is
synched to disk during important operations, e.g. promotion. However, on
a standby, the controldata file is only synched during restartpoints. This
means the timeline read from there can be wrong for several minutes
after a timeline change with recovery_target_timeline set to latest.

This patch overloads pg_current_wal_lsn and pg_last_wal_receive_lsn with
new functions taking a boolean as parameter. If true, they will report both
the requested LSN and its timeline.
---
 doc/src/sgml/func.sgml |  41 +-
 src/backend/access/transam/xlogfuncs.c | 145 +
 src/include/catalog/pg_proc.dat|  13 ++
 src/test/recovery/t/004_timeline_switch.pl |  37 +-
 4 files changed, 234 insertions(+), 2 deletions(-)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index cc3041f637..853e344b19 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -20338,6 +20338,13 @@ SELECT set_config('log_statement_stats', 'off', false);
pg_lsn
Get current write-ahead log write location
   
+  
+   
+pg_current_wal_lsn(with_tli boolean)
+
+   setof record
+   Get current write-ahead log write location and timeline f
+  
   

 pg_start_backup(label text , fast boolean , exclusive boolean )
@@ -20486,7 +20493,15 @@ postgres=# select pg_start_backup('label_goes_here');
 

 pg_current_wal_lsn displays the current write-ahead log write
-location in the same format used by the above functions.  Similarly,
+location in the same format used by the above functions.
+There is an optional parameter of type boolean.  If true,
+the result includes the current timeline as second parameter. Do not set this
+parameter, even to false, if you only need the current write-ahead
+log write location as the result would include a useless NULL
+second field.
+   
+
+   Similarly,
 pg_current_wal_insert_lsn displays the current write-ahead log
 insertion location and pg_current_wal_flush_lsn displays the
 current write-ahead log flush location. The insertion location is the 

Re: Fetching timeline during recovery

2019-09-09 Thread Fujii Masao
On Sat, Sep 7, 2019 at 12:06 AM Jehan-Guillaume de Rorthais
 wrote:
>
> On Wed, 4 Sep 2019 00:32:03 +0900
> Fujii Masao  wrote:
>
> > On Mon, Jul 29, 2019 at 7:26 PM Jehan-Guillaume de Rorthais
> >  wrote:
> > >
> > > On Fri, 26 Jul 2019 18:22:25 +0200
> > > Jehan-Guillaume de Rorthais  wrote:
> > >
> > > > On Fri, 26 Jul 2019 10:02:58 +0200
> > > > Jehan-Guillaume de Rorthais  wrote:
> [...]
> > > > Please, find in attachment a new version of the patch. It now creates 
> > > > two
> > > > new fonctions:
> > > >
> > > >   pg_current_wal_tl()
> > > >   pg_last_wal_received_tl()
> > >
> > > I just found I forgot to use PG_RETURN_INT32 in pg_last_wal_received_tl().
> > > Please find the corrected patch in attachment:
> > > 0001-v3-Add-functions-to-get-timeline.patch
> >
> > Thanks for the patch! Here are some comments from me.
>
> Thank you for your review!
>
> Please, find in attachment the v4 of the patch:
> 0001-v4-Add-functions-to-get-timeline.patch

Thanks for updating the patch!

Should we add regression tests for these functions? For example,
what about using these functions to check the timeline switch case,
in src/test/recovery/t/004_timeline_switch.pl?

>
> Answers bellow.
>
> > You need to write the documentation explaining the functions
> > that you're thinking to add.
>
> Done.

Thanks!

+   Get current write-ahead log timeline

I'm not sure if "current write-ahead log timeline" is proper word.
"timeline ID of current write-ahead log" is more appropriate?

+   int
+   Get last write-ahead log timeline received and sync to disk by
+streaming replication.

Same as above. I think that "timeline ID of last write-ahead log received
and sync to disk ..." is better here.

Like pg_last_wal_receive_lsn(), something like "If recovery has
completed this will remain static at the value of the last WAL
record received and synced to disk during recovery.
If streaming replication is disabled, or if it has not yet started,
the function returns NULL." should be in this description?

>
> > +/*
> > + * Returns the current timeline on a production cluster
> > + */
> > +Datum
> > +pg_current_wal_tl(PG_FUNCTION_ARGS)

I think that "tl" in the function name should be "tli". "tli" is used
used for other functions and views related to timeline, e.g.,
pg_stat_wal_receiver.received_tli. Thought?

> >
> > The timeline ID that this function returns seems almost
> > the same as pg_control_checkpoint().timeline_id,
> > when the server is in production. So I'm not sure
> > if it's worth adding that new function.
>
> pg_control_checkpoint().timeline_id is read from the controldata file on disk
> which is asynchronously updated with the real status of the local cluster.
> Right after a promotion, fetching the TL from pg_control_checkpoint() is wrong
> and can cause race conditions on client side.

Understood.

> > The timeline ID that this function returns is the same as
> > pg_stat_wal_receiver.received_tli while walreceiver is running.
> > But when walreceiver is not running, pg_stat_wal_receiver returns
> > no record, and pg_last_wal_received_tl() would be useful to
> > get the timeline only in this case. Is this my understanding right?
>
> Exactly.

I'm just imaging that some users want to use pg_last_wal_receive_lsn() and
pg_last_wal_receive_tli() together to, e.g., get the name of WAL file received
last. But there can be a corner case where the return values of
pg_last_wal_receive_lsn() and of pg_last_wal_receive_tli() are inconsistent.
This can happen because those values are NOT gotten within single lock.
That is, each function takes each lock to get each value.

So, to avoid that corner case and get a consistent WAL file name,
we might want to have a function that gets both the LSN and the
timeline ID of the last received WAL record within a single lock
(i.e., just using GetWalRcvWriteRecPtr()) and returns them.
Thought?
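
To illustrate the corner case in SQL terms (the _tl function is the one added
by the patch under review, and the combined function at the end is purely
hypothetical):

  -- two separate calls take two separate locks, so the LSN and the TLI
  -- may not belong to the same state if a timeline switch happens in between
  SELECT pg_last_wal_receive_lsn() AS lsn, pg_last_wal_received_tl() AS tli;

  -- a single function returning both values under one lock avoids that
  SELECT lsn, tli FROM pg_last_wal_receive_lsn_tli();  -- hypothetical name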

Regards,

-- 
Fujii Masao




Re: Fetching timeline during recovery

2019-09-06 Thread Jehan-Guillaume de Rorthais
On Wed, 4 Sep 2019 00:32:03 +0900
Fujii Masao  wrote:

> On Mon, Jul 29, 2019 at 7:26 PM Jehan-Guillaume de Rorthais
>  wrote:
> >
> > On Fri, 26 Jul 2019 18:22:25 +0200
> > Jehan-Guillaume de Rorthais  wrote:
> >  
> > > On Fri, 26 Jul 2019 10:02:58 +0200
> > > Jehan-Guillaume de Rorthais  wrote:
[...]
> > > Please, find in attachment a new version of the patch. It now creates two
> > > new fonctions:
> > >
> > >   pg_current_wal_tl()
> > >   pg_last_wal_received_tl()  
> >
> > I just found I forgot to use PG_RETURN_INT32 in pg_last_wal_received_tl().
> > Please find the corrected patch in attachment:
> > 0001-v3-Add-functions-to-get-timeline.patch  
> 
> Thanks for the patch! Here are some comments from me.

Thank you for your review!

Please, find in attachment the v4 of the patch:
0001-v4-Add-functions-to-get-timeline.patch

Answers below.

> You need to write the documentation explaining the functions
> that you're thinking to add.

Done.

> +/*
> + * Returns the current timeline on a production cluster
> + */
> +Datum
> +pg_current_wal_tl(PG_FUNCTION_ARGS)
> 
> The timeline ID that this function returns seems almost
> the same as pg_control_checkpoint().timeline_id,
> when the server is in production. So I'm not sure
> if it's worth adding that new function.

pg_control_checkpoint().timeline_id is read from the controldata file on disk
which is asynchronously updated with the real status of the local cluster.
Right after a promotion, fetching the TL from pg_control_checkpoint() is wrong
and can cause race conditions on client side.

This is the main reason I am working on this patch.

> + currentTL = GetCurrentTimeLine();
> +
> + PG_RETURN_INT32(currentTL);
> 
> Is GetCurrentTimeLine() really necessary? Seems ThisTimeLineID can be
> returned directly since it indicates the current timeline ID in production.

Indeed. I might have over-focused on memory state. ThisTimeLineID seems to be
updated soon enough during the promotion, in fact, even before
XLogCtl->ThisTimeLineID:

if (ArchiveRecoveryRequested)
{
[...]
ThisTimeLineID = findNewestTimeLine(recoveryTargetTLI) + 1;
[...]
}

/* Save the selected TimeLineID in shared memory, too */
XLogCtl->ThisTimeLineID = ThisTimeLineID;

> +pg_last_wal_received_tl(PG_FUNCTION_ARGS)
> +{
> + TimeLineID lastReceivedTL;
> + WalRcvData *walrcv = WalRcv;
> +
> + SpinLockAcquire(&walrcv->mutex);
> + lastReceivedTL = walrcv->receivedTLI;
> + SpinLockRelease(&walrcv->mutex);
> 
> I think that it's smarter to use GetWalRcvWriteRecPtr() to
> get the last received TLI, like pg_last_wal_receive_lsn() does.

I hesitated between the current implementation and using
GetWalRcvWriteRecPtr(). I chose the current implementation to avoid unnecessary
work while holding the spinlock and to make it as fast as possible.

However, the gain is probably negligible compared to calling
GetWalRcvWriteRecPtr() and avoiding the minor code duplication.

Since I was hesitant anyway, v4 of the patch uses GetWalRcvWriteRecPtr() as
suggested.
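
For reference, a minimal sketch of that approach (simplified, and not
necessarily the exact v4 code) would be something along these lines:

/*
 * Sketch only; assumed to live in src/backend/replication/walreceiver.c,
 * where the required headers are already included.
 */
Datum
pg_last_wal_received_tl(PG_FUNCTION_ARGS)
{
	TimeLineID	lastReceivedTLI;

	/* Single locked read of WalRcv, as pg_last_wal_receive_lsn() does. */
	(void) GetWalRcvWriteRecPtr(NULL, &lastReceivedTLI);

	if (lastReceivedTLI == 0)
		PG_RETURN_NULL();

	PG_RETURN_INT32(lastReceivedTLI);
}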

> The timeline ID that this function returns is the same as
> pg_stat_wal_receiver.received_tli while walreceiver is running.
> But when walreceiver is not running, pg_stat_wal_receiver returns
> no record, and pg_last_wal_received_tl() would be useful to
> get the timeline only in this case. Is this my understanding right?

Exactly.
 
> > Also, TimeLineID is declared as a uint32. So why do we use
> > PG_RETURN_INT32/Int32GetDatum to return a timeline and not PG_RETURN_UINT32?
> > See eg. in pg_stat_get_wal_receiver().  
> 
> pg_stat_wal_receiver.received_tli is declared as integer.

Oh, right. Thank you.

Thanks,
>From 317df99449a9cabff2af53407abfe11be7895d82 Mon Sep 17 00:00:00 2001
From: Jehan-Guillaume de Rorthais 
Date: Thu, 25 Jul 2019 19:36:40 +0200
Subject: [PATCH] Add functions to get timeline

pg_current_wal_tl() returns the current timeline of a cluster in production.

pg_last_wal_received_tl() returns the timeline of the last xlog record
flushed to disk.
---
 doc/src/sgml/func.sgml | 22 ++
 src/backend/access/transam/xlogfuncs.c | 16 
 src/backend/replication/walreceiver.c  | 16 
 src/include/catalog/pg_proc.dat| 12 
 4 files changed, 66 insertions(+)

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index c878a0ba4d..b0adc21883 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -19920,6 +19920,9 @@ SELECT set_config('log_statement_stats', 'off', false);

 pg_current_wal_lsn

+   
+pg_current_wal_tl
+   

 pg_start_backup

@@ -19992,6 +19995,13 @@ SELECT set_config('log_statement_stats', 'off', false);
pg_lsn
Get current write-ahead log write location
   
+  
+   
+pg_current_wal_tl()
+
+   int
+   Get current write-ahead log timeline
+  
   

 pg_start_backup(label 

Re: Fetching timeline during recovery

2019-09-03 Thread Fujii Masao
On Mon, Jul 29, 2019 at 7:26 PM Jehan-Guillaume de Rorthais
 wrote:
>
> On Fri, 26 Jul 2019 18:22:25 +0200
> Jehan-Guillaume de Rorthais  wrote:
>
> > On Fri, 26 Jul 2019 10:02:58 +0200
> > Jehan-Guillaume de Rorthais  wrote:
> >
> > > On Fri, 26 Jul 2019 16:49:53 +0900 (Tokyo Standard Time)
> > > Kyotaro Horiguchi  wrote:
> > [...]
> > > > We have an LSN reporting function each for several objectives.
> > > >
> > > >  pg_current_wal_lsn
> > > >  pg_current_wal_insert_lsn
> > > >  pg_current_wal_flush_lsn
> > > >  pg_last_wal_receive_lsn
> > > >  pg_last_wal_replay_lsn
> > >
> > > Yes. In fact, my current implementation might be split as:
> > >
> > >   pg_current_wal_tl: returns TL on a production cluster
> > >   pg_last_wal_received_tl: returns last received TL on a standby
> > >
> > > If useful, I could add pg_last_wal_replayed_tl. I don't think *insert_tl 
> > > and
> > > *flush_tl would be useful as a cluster in production is not supposed to
> > > change its timeline during its lifetime.
> > >
> > > > But, I'm not sure just adding further pg_last_*_timeline() to
> > > > this list is a good thing..
> > >
> > > I think this is a much better idea than mixing different cases (production
> > > and standby) in the same function as I did. Moreover, it's much more
> > > coherent with other existing functions.
> >
> > Please, find in attachment a new version of the patch. It now creates two 
> > new
> > functions:
> >
> >   pg_current_wal_tl()
> >   pg_last_wal_received_tl()
>
> I just found I forgot to use PG_RETURN_INT32 in pg_last_wal_received_tl().
> Please find the corrected patch in attachment:
> 0001-v3-Add-functions-to-get-timeline.patch

Thanks for the patch! Here are some comments from me.

You need to write the documentation explaining the functions
that you're thinking to add.

+/*
+ * Returns the current timeline on a production cluster
+ */
+Datum
+pg_current_wal_tl(PG_FUNCTION_ARGS)

The timeline ID that this function returns seems almost
the same as pg_control_checkpoint().timeline_id,
when the server is in production. So I'm not sure
if it's worth adding that new function.

+ currentTL = GetCurrentTimeLine();
+
+ PG_RETURN_INT32(currentTL);

Is GetCurrentTimeLine() really necessary? Seems ThisTimeLineID can be
returned directly since it indicates the current timeline ID in production.

+pg_last_wal_received_tl(PG_FUNCTION_ARGS)
+{
+ TimeLineID lastReceivedTL;
+ WalRcvData *walrcv = WalRcv;
+
+ SpinLockAcquire(&walrcv->mutex);
+ lastReceivedTL = walrcv->receivedTLI;
+ SpinLockRelease(&walrcv->mutex);

I think that it's smarter to use GetWalRcvWriteRecPtr() to
get the last received TLI, like pg_last_wal_receive_lsn() does.

The timeline ID that this function returns is the same as
pg_stat_wal_receiver.received_tli while walreceiver is running.
But when walreceiver is not running, pg_stat_wal_receiver returns
no record, and pg_last_wal_received_tl() would be useful to
get the timeline only in this case. Is this my understanding right?

> Also, TimeLineID is declared as a uint32. So why do we use
> PG_RETURN_INT32/Int32GetDatum to return a timeline and not PG_RETURN_UINT32?
> See eg. in pg_stat_get_wal_receiver().

pg_stat_wal_receiver.received_tli is declared as integer.

Regards,

-- 
Fujii Masao




Re: Fetching timeline during recovery

2019-07-29 Thread Jehan-Guillaume de Rorthais
On Fri, 26 Jul 2019 18:22:25 +0200
Jehan-Guillaume de Rorthais  wrote:

> On Fri, 26 Jul 2019 10:02:58 +0200
> Jehan-Guillaume de Rorthais  wrote:
> 
> > On Fri, 26 Jul 2019 16:49:53 +0900 (Tokyo Standard Time)
> > Kyotaro Horiguchi  wrote:
> [...]
> > > We have an LSN reporting function each for several objectives.
> > > 
> > >  pg_current_wal_lsn
> > >  pg_current_wal_insert_lsn
> > >  pg_current_wal_flush_lsn
> > >  pg_last_wal_receive_lsn
> > >  pg_last_wal_replay_lsn  
> > 
> > Yes. In fact, my current implementation might be split as:
> > 
> >   pg_current_wal_tl: returns TL on a production cluster
> >   pg_last_wal_received_tl: returns last received TL on a standby
> > 
> > If useful, I could add pg_last_wal_replayed_tl. I don't think *insert_tl and
> > *flush_tl would be useful as a cluster in production is not supposed to
> > change its timeline during its lifetime.
> > 
> > > But, I'm not sure just adding further pg_last_*_timeline() to
> > > this list is a good thing..  
> > 
> > I think this is a much better idea than mixing different cases (production
> > and standby) in the same function as I did. Moreover, it's much more
> > coherent with other existing functions.
> 
> Please, find in attachment a new version of the patch. It now creates two new
> functions:
> 
>   pg_current_wal_tl()
>   pg_last_wal_received_tl()

I just found I forgot to use PG_RETURN_INT32 in pg_last_wal_received_tl().
Please find the corrected patch in attachment:
0001-v3-Add-functions-to-get-timeline.patch

Also, TimeLineID is declared as a uint32. So why do we use
PG_RETURN_INT32/Int32GetDatum to return a timeline and not PG_RETURN_UINT32?
See eg. in pg_stat_get_wal_receiver().

Regards,
>From 031d60de3e4239c83554c89c0c382c6390545434 Mon Sep 17 00:00:00 2001
From: Jehan-Guillaume de Rorthais 
Date: Thu, 25 Jul 2019 19:36:40 +0200
Subject: [PATCH] Add functions to get timeline

pg_current_wal_tl() returns the current timeline of a cluster in production.

pg_last_wal_received_tl() returns the timeline of the last xlog record
flushed to disk.
---
 src/backend/access/transam/xlog.c  | 17 +
 src/backend/access/transam/xlogfuncs.c | 20 
 src/backend/replication/walreceiver.c  | 19 +++
 src/include/access/xlog.h  |  1 +
 src/include/catalog/pg_proc.dat| 12 
 5 files changed, 69 insertions(+)

diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index da3d250986..fd30c88534 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -12243,3 +12243,20 @@ XLogRequestWalReceiverReply(void)
 {
 	doRequestWalReceiverReply = true;
 }
+
+/*
+ * Returns current active timeline.
+ */
+TimeLineID
+GetCurrentTimeLine(void)
+{
+	TimeLineID	localTimeLineID;
+
+	SpinLockAcquire(&XLogCtl->info_lck);
+
+	localTimeLineID = XLogCtl->ThisTimeLineID;
+
+	SpinLockRelease(&XLogCtl->info_lck);
+
+	return localTimeLineID;
+}
diff --git a/src/backend/access/transam/xlogfuncs.c b/src/backend/access/transam/xlogfuncs.c
index b35043bf71..ae877be351 100644
--- a/src/backend/access/transam/xlogfuncs.c
+++ b/src/backend/access/transam/xlogfuncs.c
@@ -776,3 +776,23 @@ pg_promote(PG_FUNCTION_ARGS)
 			(errmsg("server did not promote within %d seconds", wait_seconds)));
 	PG_RETURN_BOOL(false);
 }
+
+/*
+ * Returns the current timeline on a production cluster
+ */
+Datum
+pg_current_wal_tl(PG_FUNCTION_ARGS)
+{
+	TimeLineID currentTL;
+
+	if (RecoveryInProgress())
+		ereport(ERROR,
+(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("recovery is in progress"),
+ errhint("%s cannot be executed during recovery.",
+		 "pg_current_wal_tl()")));
+
+	currentTL = GetCurrentTimeLine();
+
+	PG_RETURN_INT32(currentTL);
+}
diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c
index 6abc780778..9bffd822ff 100644
--- a/src/backend/replication/walreceiver.c
+++ b/src/backend/replication/walreceiver.c
@@ -1454,3 +1454,22 @@ pg_stat_get_wal_receiver(PG_FUNCTION_ARGS)
 	/* Returns the record as Datum */
 	PG_RETURN_DATUM(HeapTupleGetDatum(heap_form_tuple(tupdesc, values, nulls)));
 }
+
+/*
+ * Returns the timeline of the last xlog record flushed to WAL
+ */
+Datum
+pg_last_wal_received_tl(PG_FUNCTION_ARGS)
+{
+	TimeLineID	lastReceivedTL;
+	WalRcvData *walrcv = WalRcv;
+
+	SpinLockAcquire(&walrcv->mutex);
+	lastReceivedTL = walrcv->receivedTLI;
+	SpinLockRelease(&walrcv->mutex);
+
+	if (!lastReceivedTL)
+		PG_RETURN_NULL();
+
+	PG_RETURN_INT32(lastReceivedTL);
+}
diff --git a/src/include/access/xlog.h b/src/include/access/xlog.h
index d519252aad..f0502c0b41 100644
--- a/src/include/access/xlog.h
+++ b/src/include/access/xlog.h
@@ -313,6 +313,7 @@ extern XLogRecPtr GetInsertRecPtr(void);
 extern XLogRecPtr GetFlushRecPtr(void);
 extern XLogRecPtr GetLastImportantRecPtr(void);
 extern void RemovePromoteSignalFiles(void);
+extern TimeLineID GetCurrentTimeLine(void);
 
 extern bool CheckPromoteSignal(void);

Re: Fetching timeline during recovery

2019-07-26 Thread Jehan-Guillaume de Rorthais
On Fri, 26 Jul 2019 10:02:58 +0200
Jehan-Guillaume de Rorthais  wrote:

> On Fri, 26 Jul 2019 16:49:53 +0900 (Tokyo Standard Time)
> Kyotaro Horiguchi  wrote:
[...]
> > We have an LSN reporting function each for several objectives.
> > 
> >  pg_current_wal_lsn
> >  pg_current_wal_insert_lsn
> >  pg_current_wal_flush_lsn
> >  pg_last_wal_receive_lsn
> >  pg_last_wal_replay_lsn  
> 
> Yes. In fact, my current implementation might be split as:
> 
>   pg_current_wal_tl: returns TL on a production cluster
>   pg_last_wal_received_tl: returns last received TL on a standby
> 
> If useful, I could add pg_last_wal_replayed_tl. I don't think *insert_tl and
> *flush_tl would be useful as a cluster in production is not supposed to
> change its timeline during its lifetime.
> 
> > But, I'm not sure just adding further pg_last_*_timeline() to
> > this list is a good thing..  
> 
I think this is a much better idea than mixing different cases (production and
> standby) in the same function as I did. Moreover, it's much more coherent with
> other existing functions.

Please, find in attachment a new version of the patch. It now creates two new
functions:

  pg_current_wal_tl()
  pg_last_wal_received_tl()

Regards,
>From 1e21fb7203e66ed514129d41c9bbf947c5284d7b Mon Sep 17 00:00:00 2001
From: Jehan-Guillaume de Rorthais 
Date: Thu, 25 Jul 2019 19:36:40 +0200
Subject: [PATCH] Add functions to get timeline

pg_current_wal_tl() returns the current timeline of a cluster in production.

pg_last_wal_received_tl() returns the timeline of the last xlog record
flushed to disk.
---
 src/backend/access/transam/xlog.c  | 17 +
 src/backend/access/transam/xlogfuncs.c | 20 
 src/backend/replication/walreceiver.c  | 19 +++
 src/include/access/xlog.h  |  1 +
 src/include/catalog/pg_proc.dat| 12 
 5 files changed, 69 insertions(+)

diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index da3d250986..fd30c88534 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -12243,3 +12243,20 @@ XLogRequestWalReceiverReply(void)
 {
 	doRequestWalReceiverReply = true;
 }
+
+/*
+ * Returns current active timeline.
+ */
+TimeLineID
+GetCurrentTimeLine(void)
+{
+	TimeLineID	localTimeLineID;
+
+	SpinLockAcquire(&XLogCtl->info_lck);
+
+	localTimeLineID = XLogCtl->ThisTimeLineID;
+
+	SpinLockRelease(&XLogCtl->info_lck);
+
+	return localTimeLineID;
+}
diff --git a/src/backend/access/transam/xlogfuncs.c b/src/backend/access/transam/xlogfuncs.c
index b35043bf71..ae877be351 100644
--- a/src/backend/access/transam/xlogfuncs.c
+++ b/src/backend/access/transam/xlogfuncs.c
@@ -776,3 +776,23 @@ pg_promote(PG_FUNCTION_ARGS)
 			(errmsg("server did not promote within %d seconds", wait_seconds)));
 	PG_RETURN_BOOL(false);
 }
+
+/*
+ * Returns the current timeline on a production cluster
+ */
+Datum
+pg_current_wal_tl(PG_FUNCTION_ARGS)
+{
+	TimeLineID currentTL;
+
+	if (RecoveryInProgress())
+		ereport(ERROR,
+(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
+ errmsg("recovery is in progress"),
+ errhint("%s cannot be executed during recovery.",
+		 "pg_current_wal_tl()")));
+
+	currentTL = GetCurrentTimeLine();
+
+	PG_RETURN_INT32(currentTL);
+}
diff --git a/src/backend/replication/walreceiver.c b/src/backend/replication/walreceiver.c
index 6abc780778..97d1c900c7 100644
--- a/src/backend/replication/walreceiver.c
+++ b/src/backend/replication/walreceiver.c
@@ -1454,3 +1454,22 @@ pg_stat_get_wal_receiver(PG_FUNCTION_ARGS)
 	/* Returns the record as Datum */
 	PG_RETURN_DATUM(HeapTupleGetDatum(heap_form_tuple(tupdesc, values, nulls)));
 }
+
+/*
+ * Returns the timeline of the last xlog record flushed to WAL
+ */
+Datum
+pg_last_wal_received_tl(PG_FUNCTION_ARGS)
+{
+	TimeLineID	localTimeLineID;
+	WalRcvData *walrcv = WalRcv;
+
+	SpinLockAcquire(&walrcv->mutex);
+	localTimeLineID = walrcv->receivedTLI;
+	SpinLockRelease(&walrcv->mutex);
+
+	if (!localTimeLineID)
+		PG_RETURN_NULL();
+
+	return localTimeLineID;
+}
diff --git a/src/include/access/xlog.h b/src/include/access/xlog.h
index d519252aad..f0502c0b41 100644
--- a/src/include/access/xlog.h
+++ b/src/include/access/xlog.h
@@ -313,6 +313,7 @@ extern XLogRecPtr GetInsertRecPtr(void);
 extern XLogRecPtr GetFlushRecPtr(void);
 extern XLogRecPtr GetLastImportantRecPtr(void);
 extern void RemovePromoteSignalFiles(void);
+extern TimeLineID GetCurrentTimeLine(void);
 
 extern bool CheckPromoteSignal(void);
 extern void WakeupRecovery(void);
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 0902dce5f1..d7ec6ea100 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -6006,6 +6006,18 @@
 { oid => '2851', descr => 'wal filename, given a wal location',
   proname => 'pg_walfile_name', prorettype => 'text', proargtypes => 'pg_lsn',
   prosrc => 'pg_walfile_name' },
+{ oid => '3434',
+  descr => 'current timeline',
+  proname 

Re: Fetching timeline during recovery

2019-07-26 Thread Jehan-Guillaume de Rorthais
On Fri, 26 Jul 2019 16:49:53 +0900 (Tokyo Standard Time)
Kyotaro Horiguchi  wrote:

> Hi.
> 
> At Thu, 25 Jul 2019 19:38:08 +0200, Jehan-Guillaume de Rorthais
>  wrote in <20190725193808.1648ddc8@firost>
> > On Wed, 24 Jul 2019 14:33:27 +0200
> > Jehan-Guillaume de Rorthais  wrote:
> >   
> > > On Wed, 24 Jul 2019 09:49:05 +0900
> > > Michael Paquier  wrote:
> > >   
> > > > On Tue, Jul 23, 2019 at 06:05:18PM +0200, Jehan-Guillaume de Rorthais
> > > > wrote:
> > [...]  
> > > > I think that there are arguments for being more flexible with it, and
> > > > perhaps have a system-level view to be able to look at some of its
> > > > fields.
> > > 
> > > Great idea. I'll give it a try to keep the discussion on.  
> > 
> > After some thinking, I did not find enough data to expose to justify the
> > creation of a system-level view. As I just need the current timeline, I
> > wrote "pg_current_timeline()". Please, find the patch in attachment.
> > 
> > The current behavior is quite simple: 
> > * if the cluster is in production, return ThisTimeLineID
> > * else return walrcv->receivedTLI (using GetWalRcvWriteRecPtr)
> > 
> > This is a really naive implementation. We should probably add some code around
> > the startup process to gather and share general recovery stats. This would
> > allow fetching e.g. the current recovery method, the latest xlog file name
> > restored from archives or streaming, its timeline, etc.
> > 
> > Any thoughts?  
> 
> If replay is delayed behind timeline switch point, replay-LSN and
> receive/write/flush LSNs are on different timelines.  When
> the replica has not reached the new timeline to which an already
> received file belongs, the function returns a wrong file name,
> specifically a name consisting of the latest segment number and
> the older timeline, which the segment doesn't actually belong to.

Indeed.

> We have an LSN reporting function each for several objectives.
> 
>  pg_current_wal_lsn
>  pg_current_wal_insert_lsn
>  pg_current_wal_flush_lsn
>  pg_last_wal_receive_lsn
>  pg_last_wal_replay_lsn

Yes. In fact, my current implementation might be split as:

  pg_current_wal_tl: returns TL on a production cluster
  pg_last_wal_received_tl: returns last received TL on a standby

If useful, I could add pg_last_wal_replayed_tl. I don't think *insert_tl and
*flush_tl would be useful as a cluster in production is not supposed to
change its timeline during its lifetime.

> But, I'm not sure just adding further pg_last_*_timeline() to
> this list is a good thing..

I think this is a much better idea than mixing different cases (production and
standby) in the same function as I did. Moreover, it's much more coherent with
other existing functions.

> The function returns NULL for NULL input (STRICT behavior) but
> returns (NULL, NULL) for undefined timeline. I don't think the
> difference is meaningful.

Unless I'm missing something, nothing
returns "(NULL, NULL)" in 0001-v1-Add-function-pg_current_timeline.patch.

Thank you for your feedback!




Re: Fetching timeline during recovery

2019-07-26 Thread Kyotaro Horiguchi
Hi.

At Thu, 25 Jul 2019 19:38:08 +0200, Jehan-Guillaume de Rorthais 
 wrote in <20190725193808.1648ddc8@firost>
> On Wed, 24 Jul 2019 14:33:27 +0200
> Jehan-Guillaume de Rorthais  wrote:
> 
> > On Wed, 24 Jul 2019 09:49:05 +0900
> > Michael Paquier  wrote:
> > 
> > > On Tue, Jul 23, 2019 at 06:05:18PM +0200, Jehan-Guillaume de Rorthais
> > > wrote:  
> [...]
> > > I think that there are arguments for being more flexible with it, and
> > > perhaps have a system-level view to be able to look at some of its 
> > > fields.  
> > 
> > Great idea. I'll give it a try to keep the discussion on.
> 
> After some thinking, I did not find enough data to expose to justify the
> creation of a system-level view. As I just need the current timeline, I
> wrote "pg_current_timeline()". Please, find the patch in attachment.
> 
> The current behavior is quite simple: 
> * if the cluster is in production, return ThisTimeLineID
> * else return walrcv->receivedTLI (using GetWalRcvWriteRecPtr)
> 
> This is a really naive implementation. We should probably add some code around
> the startup process to gather and share general recovery stats. This would
> allow fetching e.g. the current recovery method, the latest xlog file name
> restored from archives or streaming, its timeline, etc.
> 
> Any thoughts?

If replay is delayed behind timeline switch point, replay-LSN and
receive/write/flush LSNs are on different timelines.  When
the replica has not reached the new timeline to which an already
received file belongs, the function returns a wrong file name,
specifically a name consisting of the latest segment number and
the older timeline, which the segment doesn't actually belong to.

We have an LSN reporting function each for several objectives.

 pg_current_wal_lsn
 pg_current_wal_insert_lsn
 pg_current_wal_flush_lsn
 pg_last_wal_receive_lsn
 pg_last_wal_replay_lsn

But, I'm not sure just adding further pg_last_*_timeline() to
this list is a good thing..


The function returns NULL for NULL input (STRICT behavior) but
returns (NULL, NULL) for undefined timeline. I don't think the
difference is meaningful.

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: Fetching timeline during recovery

2019-07-25 Thread Jehan-Guillaume de Rorthais
Hello,

On Wed, 24 Jul 2019 14:33:27 +0200
Jehan-Guillaume de Rorthais  wrote:

> On Wed, 24 Jul 2019 09:49:05 +0900
> Michael Paquier  wrote:
> 
> > On Tue, Jul 23, 2019 at 06:05:18PM +0200, Jehan-Guillaume de Rorthais
> > wrote:  
[...]
> > I think that there are arguments for being more flexible with it, and
> > perhaps have a system-level view to be able to look at some of its fields.  
> 
> Great idea. I'll give it a try to keep the discussion on.

After some thinking, I did not find enough data to expose to justify the
creation of a system-level view. As I just need the current timeline, I
wrote "pg_current_timeline()". Please, find the patch in attachment.

The current behavior is quite simple: 
* if the cluster is in production, return ThisTimeLineID
* else return walrcv->receivedTLI (using GetWalRcvWriteRecPtr)

This is a really naive implementation. We should probably add some code around
the startup process to gather and share general recovery stats. This would
allow fetching e.g. the current recovery method, the latest xlog file name
restored from archives or streaming, its timeline, etc.
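
To make this a bit more concrete, a very rough sketch of such a shared
structure could be (all names here are hypothetical; this is not part of the
attached patch):

#include "postgres.h"

#include "access/xlog_internal.h"	/* MAXFNAMELEN */
#include "access/xlogdefs.h"		/* XLogRecPtr, TimeLineID */
#include "datatype/timestamp.h"		/* TimestampTz */
#include "storage/spin.h"			/* slock_t */

/* Hypothetical: where the last WAL data came from. */
typedef enum RecoverySource
{
	RECOVERY_SOURCE_ARCHIVE,
	RECOVERY_SOURCE_PG_WAL,
	RECOVERY_SOURCE_STREAM
} RecoverySource;

/*
 * Hypothetical shared-memory struct, updated by the startup process (and the
 * WAL receiver), so any backend could report recovery progress.
 */
typedef struct RecoveryStatsData
{
	slock_t		mutex;			/* protects all fields below */
	RecoverySource source;		/* current recovery method */
	TimeLineID	lastTLI;		/* timeline of the last WAL obtained */
	XLogRecPtr	lastRecPtr;		/* LSN up to which WAL was obtained */
	char		lastWalFile[MAXFNAMELEN];	/* last WAL file name obtained */
	TimestampTz lastTime;		/* when it was obtained */
} RecoveryStatsData;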

Any thoughts?

Regards,
>From 5b06d83e000132eca5a3173e96651ddf4531cff6 Mon Sep 17 00:00:00 2001
From: Jehan-Guillaume de Rorthais 
Date: Thu, 25 Jul 2019 19:36:40 +0200
Subject: [PATCH] Add function pg_current_timeline

---
 src/backend/access/transam/xlog.c  | 26 ++
 src/backend/access/transam/xlogfuncs.c | 17 +
 src/include/access/xlog.h  |  1 +
 src/include/catalog/pg_proc.dat|  6 ++
 4 files changed, 50 insertions(+)

diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index da3d250986..9da876c0ac 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -12243,3 +12243,29 @@ XLogRequestWalReceiverReply(void)
 {
 	doRequestWalReceiverReply = true;
 }
+
+/*
+ * Returns current active timeline.
+ * During production, returns ThisTimeLineID.
+ * During standby, returns the timeline of the latest record flushed to XLOG.
+ */
+TimeLineID
+GetCurrentTimeLine(void)
+{
+	TimeLineID	localTimeLineID;
+	bool		localRecoveryInProgress;
+
+	SpinLockAcquire(&XLogCtl->info_lck);
+
+	localTimeLineID = XLogCtl->ThisTimeLineID;
+	localRecoveryInProgress = XLogCtl->SharedRecoveryInProgress;
+
+	SpinLockRelease(&XLogCtl->info_lck);
+
+	if (localRecoveryInProgress) {
+		 GetWalRcvWriteRecPtr(NULL, &localTimeLineID);
+		 return localTimeLineID;
+	}
+
+	return localTimeLineID;
+}
diff --git a/src/backend/access/transam/xlogfuncs.c b/src/backend/access/transam/xlogfuncs.c
index b35043bf71..c1cb9e8819 100644
--- a/src/backend/access/transam/xlogfuncs.c
+++ b/src/backend/access/transam/xlogfuncs.c
@@ -776,3 +776,20 @@ pg_promote(PG_FUNCTION_ARGS)
 			(errmsg("server did not promote within %d seconds", wait_seconds)));
 	PG_RETURN_BOOL(false);
 }
+
+/*
+ * Returns the current timeline
+ */
+Datum
+pg_current_timeline(PG_FUNCTION_ARGS)
+{
+	TimeLineID currentTL = GetCurrentTimeLine();
+
+	/*
+	 * we have no information about the timeline if the walreceiver
+	 * is disabled or hasn't streamed anything yet,
+	 */
+	if (!currentTL) PG_RETURN_NULL();
+
+	PG_RETURN_INT32(currentTL);
+}
diff --git a/src/include/access/xlog.h b/src/include/access/xlog.h
index d519252aad..f0502c0b41 100644
--- a/src/include/access/xlog.h
+++ b/src/include/access/xlog.h
@@ -313,6 +313,7 @@ extern XLogRecPtr GetInsertRecPtr(void);
 extern XLogRecPtr GetFlushRecPtr(void);
 extern XLogRecPtr GetLastImportantRecPtr(void);
 extern void RemovePromoteSignalFiles(void);
+extern TimeLineID GetCurrentTimeLine(void);
 
 extern bool CheckPromoteSignal(void);
 extern void WakeupRecovery(void);
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 0902dce5f1..42cd7c3486 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -6006,6 +6006,12 @@
 { oid => '2851', descr => 'wal filename, given a wal location',
   proname => 'pg_walfile_name', prorettype => 'text', proargtypes => 'pg_lsn',
   prosrc => 'pg_walfile_name' },
+{ oid => '3434',
+  descr => 'return the current timeline',
+  proname => 'pg_current_timeline', prorettype => 'int4',
+  proargtypes => '', proallargtypes => '{int4}',
+  proargmodes => '{o}', proargnames => '{timeline}',
+  prosrc => 'pg_current_timeline' },
 
 { oid => '3165', descr => 'difference in bytes, given two wal locations',
   proname => 'pg_wal_lsn_diff', prorettype => 'numeric',
-- 
2.20.1



Re: Fetching timeline during recovery

2019-07-24 Thread Jehan-Guillaume de Rorthais
Hello Michael,

On Wed, 24 Jul 2019 09:49:05 +0900
Michael Paquier  wrote:

> On Tue, Jul 23, 2019 at 06:05:18PM +0200, Jehan-Guillaume de Rorthais wrote:
> > Please, find in attachment a first trivial patch to support
> > pg_walfile_name() and pg_walfile_name_offset() on a standby.
> > Previous restriction on these functions seems related to ThisTimeLineID not
> > being safe on standby. This patch is fetching the timeline from
> > WalRcv->receivedTLI using GetWalRcvWriteRecPtr(). As far as I understand,
> > this is updated each time some data are flushed to the WAL.  
[...]
> Your patch does not account for the case of archive recovery, where
> there is no WAL receiver, and as the shared memory state of the WAL
> receiver is not set, 0 would be used.

Indeed. I tested this topic with the following query and was fine with the
NULL result:

  select pg_walfile_name(pg_last_wal_receive_lsn());

I was fine with this result because my use case requires replication anyway. A
NULL result would mean that the node never streamed from the old primary since
its last startup, so a failover should ignore that node.

However, NULL just comes from pg_last_wal_receive_lsn() here. The following
query result is wrong:

  > select pg_walfile_name('0/1')
  

I fixed that. See patch 0001-v2-* in attachment.


> The replay timeline is something we could use here instead via
> GetXLogReplayRecPtr(). CreateRestartPoint actually takes the latest WAL
> receiver or replayed point for its end LSN position, whichever is newer.

I did consider GetXLogReplayRecPtr() or even XLogCtl->replayEndTLI (which is
updated right before the replay). However, both depend on replay activity on
the standby. That's why I picked WalRcv->receivedTLI, which is updated
regardless of the replay activity on the standby.

> > Last, I plan to produce an extension to support this on older releases. Is
> > it something that could be integrated into the official source tree during a
> > minor release, or should I publish it on e.g. pgxn?
> 
> Unfortunately no.  This is a behavior change so it cannot find its way
> into back branches.

Yes, my patch is a behavior change. But here, I was talking about an
extension, not the core itself, to support this feature in older releases.

> The WAL receiver state is in shared memory and published, so that's easy
> enough to get.  We don't do that for XLogCtl unfortunately.

Both are in shared memory, but WalRcv has a public function to get its
receivedTLI member.

XLogCtl has nothing in public to expose its ThisTimeLineID member. However, from
a module, I'm able to fetch it using:

  XLogCtl = ShmemInitStruct("XLOG Ctl", XLOGShmemSize(), &found);
  SpinLockAcquire(&XLogCtl->info_lck);
  tl = XLogCtl->ThisTimeLineID;
  SpinLockRelease(&XLogCtl->info_lck);

As the "XLOG Ctl" index entry already exists in shmem, ShmemInitStruct returns
the correct structure from there. I'm not sure this was supposed to be used this
way though... Adding a public function might be cleaner, but it will not help
for older releases.

> I think that there are arguments for being more flexible with it, and perhaps
> have a system-level view to be able to look at some of its fields.

Great idea. I'll give it a try to keep the discussion on.

> There is also a downside with get_controlfile(), which is that it
> fetches directly the data from the on-disk pg_control, and
> post-recovery this only gets updated at the first checkpoint.

Indeed, that's why I started this patch and thread.

Thanks,
>From fdf133645b8cc2728cca3677e71bdd5cb69cdbd4 Mon Sep 17 00:00:00 2001
From: Jehan-Guillaume de Rorthais 
Date: Tue, 23 Jul 2019 17:28:44 +0200
Subject: [PATCH] Support pg_walfile_name on standby

Support executing both SQL functions pg_walfile_name() and
pg_walfile_name_offset() on a standby.
---
 src/backend/access/transam/xlogfuncs.c | 39 +-
 1 file changed, 25 insertions(+), 14 deletions(-)

diff --git a/src/backend/access/transam/xlogfuncs.c b/src/backend/access/transam/xlogfuncs.c
index b35043bf71..86c4d8382b 100644
--- a/src/backend/access/transam/xlogfuncs.c
+++ b/src/backend/access/transam/xlogfuncs.c
@@ -460,13 +460,7 @@ pg_walfile_name_offset(PG_FUNCTION_ARGS)
 	TupleDesc	resultTupleDesc;
 	HeapTuple	resultHeapTuple;
 	Datum		result;
-
-	if (RecoveryInProgress())
-		ereport(ERROR,
-(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
- errmsg("recovery is in progress"),
- errhint("%s cannot be executed during recovery.",
-		 "pg_walfile_name_offset()")));
+	TimeLineID  tl;
 
 	/*
 	 * Construct a tuple descriptor for the result row.  This must match this
@@ -480,11 +474,24 @@ pg_walfile_name_offset(PG_FUNCTION_ARGS)
 
 	resultTupleDesc = BlessTupleDesc(resultTupleDesc);
 
+	if (RecoveryInProgress())
+	{
+		GetWalRcvWriteRecPtr(NULL, &tl);
+
+		if (!tl)
+		{
+			isnull[0] = isnull[1] = true;
+			goto result;
+		}
+	}
+	else
+		tl = ThisTimeLineID;
+
 	/*
 	 * xlogfilename
 	 */
 	XLByteToPrevSeg(locationpoint, xlogsegno, 

Re: Fetching timeline during recovery

2019-07-23 Thread Michael Paquier
On Tue, Jul 23, 2019 at 06:05:18PM +0200, Jehan-Guillaume de Rorthais wrote:
> Please, find in attachment a first trivial patch to support pg_walfile_name()
> and pg_walfile_name_offset() on a standby.
> Previous restriction on these functions seems related to ThisTimeLineID not
> being safe on standby. This patch is fetching the timeline from
> WalRcv->receivedTLI using GetWalRcvWriteRecPtr(). As far as I understand,
> this is updated each time some data are flushed to the WAL.

FWIW, I don't have any objections to lift a bit the restrictions on
those functions if we can make that reliable enough.  Now during
recovery you cannot rely on ThisTimeLineID as you say, per mostly the
following bit in xlog.c (the comment block a little bit up also has
explanations):
	/*
	 * ThisTimeLineID is normally not set when we're still in recovery.
	 * However, recycling/preallocating segments above needed ThisTimeLineID
	 * to determine which timeline to install the segments on. Reset it now,
	 * to restore the normal state of affairs for debugging purposes.
	 */
	if (RecoveryInProgress())
		ThisTimeLineID = 0;

Your patch does not account for the case of archive recovery, where
there is no WAL receiver, and as the shared memory state of the WAL
receiver is not set, 0 would be used.  The replay timeline is something
we could use here instead via GetXLogReplayRecPtr().
CreateRestartPoint actually takes the latest WAL receiver or replayed
point for its end LSN position, whichever is newer.

> Last, I plan to produce an extension to support this on older releases. Is
> it something that could be integrated into the official source tree during a
> minor release, or should I publish it on e.g. pgxn?

Unfortunately no.  This is a behavior change so it cannot find its way
into back branches.  The WAL receiver state is in shared memory and
published, so that's easy enough to get.  We don't do that for XLogCtl
unfortunately.  I think that there are arguments for being more
flexible with it, and perhaps have a system-level view to be able to
look at some of its fields.

There is also a downside with get_controlfile(), which is that it
fetches directly the data from the on-disk pg_control, and
post-recovery this only gets updated at the first checkpoint.
--
Michael




Re: Fetching timeline during recovery

2019-07-23 Thread Jehan-Guillaume de Rorthais
On Tue, 23 Jul 2019 16:00:29 -0400
David Steele  wrote:

> On 7/23/19 2:59 PM, Andrey Borodin wrote:
> >   
> >> On 23 July 2019, at 21:05, Jehan-Guillaume de Rorthais
> >> wrote:
> >>
> >> Fetching the timeline from a standby could be useful in various situation.
> >> Either for backup tools [1] or failover tools during some kind of election
> >> process.  
> > That backup tool is reading the timeline from pg_control_checkpoint(), and
> > formats the WAL file name itself when necessary.
> 
> We do the same [1].

Thank you both for your comments.

OK, so backup tools are fine with reading slightly outdated data from
the controldata file.

Anyway, my use case is mostly about auto failover. During an election, I currently
have to force a checkpoint on standbys to get their real timeline from the
controldata.

However, the forced checkpoint could take a very long time [1] (a problem for
auto failover). I need to be able to compare TLs without all the burden of a
CHECKPOINT just for this.

As I wrote, my favorite solution would be a function returning BOTH
current TL and LSN at the same time. I'll send a patch tomorrow to the list
and I'll bikeshed later depending on the feedback.

In the meantime, the previous patch might still be useful for some other purpose.
Comments are welcome.

Thanks,

[1] this exact use case is actually hiding behind this thread:
https://www.postgresql.org/message-id/flat/CAEkBuzeno6ztiM1g4WdzKRJFgL8b2nfePNU%3Dq3sBiEZUm-D-sQ%40mail.gmail.com




Re: Fetching timeline during recovery

2019-07-23 Thread David Steele
On 7/23/19 2:59 PM, Andrey Borodin wrote:
> 
>> On 23 July 2019, at 21:05, Jehan-Guillaume de Rorthais
>> wrote:
>>
>> Fetching the timeline from a standby could be useful in various situation.
>> Either for backup tools [1] or failover tools during some kind of election
>> process.
> That backup tool is reading the timeline from pg_control_checkpoint(), and
> formats the WAL file name itself when necessary.

We do the same [1].

>> Please, find in attachment a first trivial patch to support pg_walfile_name()
>> and pg_walfile_name_offset() on a standby.
> 
> You just cannot format a WAL file name for an LSN when the timeline changed,
> because there are at least three WALs for that point: previous, new and partial.
> However, reading the TLI from the checkpoint seems safe for backup purposes.
> The only reason for WAL-G to read that timeline is to mark a backup invalid: if
> its name is base_0001YY0YY and a timeline change happens, it
> should be named base_0002YY0YY (the consistency point is not on
> TLI 2), but WAL-G cannot rename the backup during backup-push.

Naming considerations aside, I don't think that a timeline switch during
a standby backup is a good idea, mostly because it is (currently) not
tested.  We don't allow it in pgBackRest.

[1]
https://github.com/pgbackrest/pgbackrest/blob/release/2.15.1/lib/pgBackRest/Db.pm#L1008

-- 
-David
da...@pgmasters.net




Re: Fetching timeline during recovery

2019-07-23 Thread Andrey Borodin



> On 23 July 2019, at 21:05, Jehan-Guillaume de Rorthais
> wrote:
> 
> Fetching the timeline from a standby could be useful in various situation.
> Either for backup tools [1] or failover tools during some kind of election
> process.
That backup tool is reading the timeline from pg_control_checkpoint(), and formats
the WAL file name itself when necessary.

> Please, find in attachment a first trivial patch to support pg_walfile_name()
> and pg_walfile_name_offset() on a standby.

You just cannot format a WAL file name for an LSN when the timeline changed,
because there are at least three WALs for that point: previous, new and partial.
However, reading the TLI from the checkpoint seems safe for backup purposes.
The only reason for WAL-G to read that timeline is to mark a backup invalid: if
its name is base_0001YY0YY and a timeline change happens, it
should be named base_0002YY0YY (the consistency point is not on TLI
2), but WAL-G cannot rename the backup during backup-push.

Hope this information is useful. Thanks!

Best regards, Andrey Borodin.

[0] https://github.com/wal-g/wal-g/blob/master/internal/timeline.go#L39



Fetching timeline during recovery

2019-07-23 Thread Jehan-Guillaume de Rorthais
Hello,

Fetching the timeline from a standby could be useful in various situation.
Either for backup tools [1] or failover tools during some kind of election
process.

Please, find in attachment a first trivial patch to support pg_walfile_name()
and pg_walfile_name_offset() on a standby.

Previous restriction on these functions seems related to ThisTimeLineID not
being safe on standby. This patch is fetching the timeline from
WalRcv->receivedTLI using GetWalRcvWriteRecPtr(). As far as I understand,
this is updated each time some data are flushed to the WAL. 

As the SQL function pg_last_wal_receive_lsn() reads WalRcv->receivedUpto
which is updated at the same time, any tool relying on these functions should be
quite fine. It will just have to parse the TL from the walfile name.

It doesn't seem perfectly sane though. I suspect a race condition in any SQL
statement that tries to get the LSN and the walfile name at the same time if
the timeline changes in the meantime. Ideally, a function should be able to
return both the LSN and the TL at the same time, with only one read from
WalRcv. I'm not sure if I should change the result of pg_last_wal_receive_lsn()
or add a brand new admin function. Any advice?

Last, I plan to produce an extension to support this on older releases. Is
it something that could be integrated into the official source tree during a
minor release, or should I publish it on e.g. pgxn?

Regards,

[1]
https://www.postgresql.org/message-id/flat/BF2AD4A8-E7F5-486F-92C8-A6959040DEB6%40yandex-team.ru
>From 9d0fb73d03c6e7e06f2f8be62abab4e54cf01117 Mon Sep 17 00:00:00 2001
From: Jehan-Guillaume de Rorthais 
Date: Tue, 23 Jul 2019 17:28:44 +0200
Subject: [PATCH] Support pg_walfile_name on standby

Support executing both SQL functions pg_walfile_name() and
pg_walfile_name_offset() on a standby.
---
 src/backend/access/transam/xlogfuncs.c | 22 ++
 1 file changed, 10 insertions(+), 12 deletions(-)

diff --git a/src/backend/access/transam/xlogfuncs.c b/src/backend/access/transam/xlogfuncs.c
index b35043bf71..a8184a20c4 100644
--- a/src/backend/access/transam/xlogfuncs.c
+++ b/src/backend/access/transam/xlogfuncs.c
@@ -460,13 +460,12 @@ pg_walfile_name_offset(PG_FUNCTION_ARGS)
 	TupleDesc	resultTupleDesc;
 	HeapTuple	resultHeapTuple;
 	Datum		result;
+	TimeLineID  tl;
 
 	if (RecoveryInProgress())
-		ereport(ERROR,
-(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
- errmsg("recovery is in progress"),
- errhint("%s cannot be executed during recovery.",
-		 "pg_walfile_name_offset()")));
+		GetWalRcvWriteRecPtr(NULL, &tl);
+	else
+		tl = ThisTimeLineID;
 
 	/*
 	 * Construct a tuple descriptor for the result row.  This must match this
@@ -484,7 +483,7 @@ pg_walfile_name_offset(PG_FUNCTION_ARGS)
 	 * xlogfilename
 	 */
 	XLByteToPrevSeg(locationpoint, xlogsegno, wal_segment_size);
-	XLogFileName(xlogfilename, ThisTimeLineID, xlogsegno, wal_segment_size);
+	XLogFileName(xlogfilename, tl, xlogsegno, wal_segment_size);
 
 	values[0] = CStringGetTextDatum(xlogfilename);
 	isnull[0] = false;
@@ -517,16 +516,15 @@ pg_walfile_name(PG_FUNCTION_ARGS)
 	XLogSegNo	xlogsegno;
 	XLogRecPtr	locationpoint = PG_GETARG_LSN(0);
 	char		xlogfilename[MAXFNAMELEN];
+	TimeLineID  tl;
 
 	if (RecoveryInProgress())
-		ereport(ERROR,
-(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
- errmsg("recovery is in progress"),
- errhint("%s cannot be executed during recovery.",
-		 "pg_walfile_name()")));
+		GetWalRcvWriteRecPtr(NULL, &tl);
+	else
+		tl = ThisTimeLineID;
 
 	XLByteToPrevSeg(locationpoint, xlogsegno, wal_segment_size);
-	XLogFileName(xlogfilename, ThisTimeLineID, xlogsegno, wal_segment_size);
+	XLogFileName(xlogfilename, tl, xlogsegno, wal_segment_size);
 
 	PG_RETURN_TEXT_P(cstring_to_text(xlogfilename));
 }
-- 
2.20.1