Oops! The previous patch forgets the default case and crashes.
At Wed, 08 Nov 2017 13:14:31 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI
wrote in
<20171108.131431.170534842.horiguchi.kyot...@lab.ntt.co.jp>
> > I don't think 'distance' is a good metric - that's going to continually
> > c
Hello,
At Mon, 6 Nov 2017 05:20:50 -0800, Andres Freund wrote in
<20171106132050.6apzynxrqrzgh...@alap3.anarazel.de>
> Hi,
>
> On 2017-10-31 18:43:10 +0900, Kyotaro HORIGUCHI wrote:
> > - distance:
> > how many bytes LSN can advance before the margin defined by
> > max_slot_wal_keep_s
Hi,
On 2017-10-31 18:43:10 +0900, Kyotaro HORIGUCHI wrote:
> - distance:
> how many bytes LSN can advance before the margin defined by
> max_slot_wal_keep_size (and wal_keep_segments) is exhausted,
> or how many bytes of xlog this slot has lost from restart_lsn.
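For illustration, the "distance" idea quoted above can be sketched as follows. This is a hypothetical sketch, not the patch's code: `parse_lsn` and `distance_bytes` are invented names, and the arithmetic only mirrors the description (bytes of headroom before the margin is exhausted).

```python
def parse_lsn(lsn: str) -> int:
    """Convert a PostgreSQL LSN of the form 'XXXXXXXX/XXXXXXXX' to a byte position."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def distance_bytes(current_lsn: str, restart_lsn: str, max_slot_wal_keep_size: int) -> int:
    """Bytes the server's LSN can still advance before the slot's margin is
    exhausted; negative would mean the slot has already fallen past the cap."""
    kept_behind = parse_lsn(current_lsn) - parse_lsn(restart_lsn)
    return max_slot_wal_keep_size - kept_behind

# Example: slot is 24 MB behind with a 64 MB cap -> 40 MB of headroom left.
print(distance_bytes("0/3000000", "0/1800000", 64 * 1024 * 1024))  # 41943040
```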
I don't think 'distanc
On 2017-11-06 11:07:04 +0800, Craig Ringer wrote:
> Would it make sense to teach xlogreader how to fetch from WAL archive,
> too? That way if there's an archive, slots could continue to be used
> even after we purge from local pg_xlog, albeit at a performance cost.
>
> I'm thinking of this mainly
On 31 October 2017 at 17:43, Kyotaro HORIGUCHI
wrote:
> Hello, this is a rebased version.
>
> The meaning of the monitoring values has changed along with
> rebasing.
>
> In the previous version, the "live" column mysteriously predicted
> whether the necessary segments would be kept or lost by the next checkp
On Tue, Oct 31, 2017 at 10:43 PM, Kyotaro HORIGUCHI
wrote:
> Hello, this is a rebased version.
Hello Horiguchi-san,
I think the "ddl" test under contrib/test_decoding also needs to be
updated because it looks at pg_replication_slots and doesn't expect
your new columns.
--
Thomas Munro
http://w
Hello, this is a rebased version.
The meaning of the monitoring values has changed along with
rebasing.
In the previous version, the "live" column mysteriously predicted
whether the necessary segments would be kept or lost by the next
checkpoint, and the "distance" offered a still more mysterious value.
In
At Wed, 13 Sep 2017 11:43:06 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI
wrote in
<20170913.114306.67844218.horiguchi.kyot...@lab.ntt.co.jp>
horiguchi.kyotaro> At Thu, 07 Sep 2017 21:59:56 +0900 (Tokyo Standard Time),
Kyotaro HORIGUCHI wrote in
<20170907.215956.110216588.horiguchi.kyot...@
At Thu, 07 Sep 2017 21:59:56 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI
wrote in
<20170907.215956.110216588.horiguchi.kyot...@lab.ntt.co.jp>
> Hello,
>
> At Thu, 07 Sep 2017 14:12:12 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI
> wrote in
> <20170907.141212.227032666.horiguchi.kyot...@
Hello,
At Thu, 07 Sep 2017 14:12:12 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI
wrote in
<20170907.141212.227032666.horiguchi.kyot...@lab.ntt.co.jp>
> > I would like a flag in pg_replication_slots, and possibly also a
> > numerical column that indicates how far away from the critical point
>
Hello,
At Fri, 1 Sep 2017 23:49:21 -0400, Peter Eisentraut
wrote in
<751e09c4-93e0-de57-edd2-e64c4950f...@2ndquadrant.com>
> I'm still concerned about how the critical situation is handled. Your
> patch just prints a warning to the log and then goes on -- doing what?
>
> The warning rolls off
I'm still concerned about how the critical situation is handled. Your
patch just prints a warning to the log and then goes on -- doing what?
The warning rolls off the log, and then you have no idea what happened,
or how to recover.
I would like a flag in pg_replication_slots, and possibly also a
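The flag-plus-distance idea in this message could be sketched as below. Purely illustrative: the function and the status strings "normal", "keeping", and "lost" are hypothetical names chosen for this sketch, not values defined by the patch.

```python
def slot_status(restart_lsn: int, oldest_kept_lsn: int, limit_lsn: int) -> str:
    """Classify a slot relative to the critical point.

    restart_lsn     -- oldest LSN the slot still requires
    oldest_kept_lsn -- oldest LSN still present in pg_wal
    limit_lsn       -- oldest LSN the size limit allows to be kept
    """
    if restart_lsn < oldest_kept_lsn:
        return "lost"     # required WAL already removed; the slot is broken
    if restart_lsn < limit_lsn:
        return "keeping"  # past the margin; the next checkpoint may remove it
    return "normal"       # safely inside the limit

print(slot_status(restart_lsn=250, oldest_kept_lsn=200, limit_lsn=300))  # keeping
```

A flag like this would survive in the catalog view even after the warning has rolled off the log, which is the recoverability concern raised above.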
Hello,
I'll add this to CF2017-09.
At Mon, 06 Mar 2017 18:20:06 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI
wrote in
<20170306.182006.172683338.horiguchi.kyot...@lab.ntt.co.jp>
> Thank you for the comment.
>
> At Fri, 3 Mar 2017 14:47:20 -0500, Peter Eisentraut
> wrote in
>
> > On 3/1/
On 28 February 2017 at 12:27, Petr Jelinek wrote:
>> This patch adds a GUC to put a limit on the number of segments
>> that replication slots can keep. Hitting the limit during
>> checkpoint shows a warning, and the segments older than the limit
>> are removed.
>>
>>> WARNING: restart LSN of rep
Thank you for the comment.
At Fri, 3 Mar 2017 14:47:20 -0500, Peter Eisentraut
wrote in
> On 3/1/17 19:54, Kyotaro HORIGUCHI wrote:
> >> Please measure it in size, not in number of segments.
> > It was difficult to decide which is reasonable, but I named it
> > after wal_keep_segments because i
On 3/1/17 19:54, Kyotaro HORIGUCHI wrote:
>> Please measure it in size, not in number of segments.
> It was difficult to decide which is reasonable, but I named it
> after wal_keep_segments because it has a similar effect.
>
> In bytes(or LSN)
> max_wal_size
> min_wal_size
> wal_write_flush_af
At Wed, 1 Mar 2017 12:18:07 -0500, Peter Eisentraut
wrote in
<98538b00-42ae-6a6b-f852-50b3c937a...@2ndquadrant.com>
> On 2/27/17 22:27, Kyotaro HORIGUCHI wrote:
> > This patch adds a GUC to put a limit on the number of segments
> > that replication slots can keep.
>
> Please measure it in size,
At Wed, 1 Mar 2017 12:17:43 -0500, Peter Eisentraut
wrote in
> On 2/27/17 23:27, Petr Jelinek wrote:
> >>> WARNING: restart LSN of replication slots is ignored by checkpoint
> >>> DETAIL: Some replication slots lose required WAL segments to continue.
> > However this is dangerous as logical r
At Wed, 1 Mar 2017 08:06:10 -0800, Andres Freund wrote in
<20170301160610.wc7ez3vihmial...@alap3.anarazel.de>
> On 2017-02-28 12:42:32 +0900, Michael Paquier wrote:
> > Please no. Replication slots are designed the current way because we
> > don't want to have to use something like wal_keep_segme
On 2/27/17 23:27, Petr Jelinek wrote:
>>> WARNING: restart LSN of replication slots is ignored by checkpoint
>>> DETAIL: Some replication slots lose required WAL segments to continue.
> However this is dangerous as logical replication slot does not consider
> it error when too old LSN is requeste
On 2/27/17 22:27, Kyotaro HORIGUCHI wrote:
> This patch adds a GUC to put a limit on the number of segments
> that replication slots can keep.
Please measure it in size, not in number of segments.
--
Peter Eisentraut http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support,
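The size-versus-segments point above is a simple unit conversion; with the default 16 MB WAL segment size the two limits are interconvertible. A small sketch (the helper names are illustrative):

```python
WAL_SEGMENT_SIZE = 16 * 1024 * 1024  # 16 MB, the default WAL segment size

def segments_to_bytes(n_segments: int) -> int:
    return n_segments * WAL_SEGMENT_SIZE

def bytes_to_segments(size: int) -> int:
    # Round up: a partial segment still occupies a whole file on disk.
    return -(-size // WAL_SEGMENT_SIZE)

print(segments_to_bytes(4))                  # 67108864 (64 MB)
print(bytes_to_segments(100 * 1024 * 1024))  # 7 segments cover 100 MB
```

Measuring the GUC in bytes keeps it consistent with max_wal_size and min_wal_size, and stays meaningful if the segment size ever differs from the default.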
Hi,
On 2017-02-28 12:42:32 +0900, Michael Paquier wrote:
> Please no. Replication slots are designed the current way because we
> don't want to have to use something like wal_keep_segments as it is a
> wart, and this applies as well for replication slots in my opinion.
I think a per-slot option t
On Tue, Feb 28, 2017 at 10:04 AM, Michael Paquier
wrote:
> It would make more sense to just switch max_wal_size from a soft to a
> hard limit. The current behavior is not cool with activity spikes.
Having a hard limit on WAL size would be nice, but that's a different
problem from the one being di
On Tue, Feb 28, 2017 at 1:16 PM, Kyotaro HORIGUCHI
wrote:
> It is doable without a plugin, and currently we are planning to do
> it that way (maybe such a plugin would be unacceptable..). Killing
> walsender (which one?), removing the slot and if failed..
The PID and restart_lsn associated with each sl
On 28/02/17 04:27, Kyotaro HORIGUCHI wrote:
> Hello.
>
> Although a replication slot is helpful for avoiding unwanted WAL
> deletion, on the other hand it can cause a disastrous situation
> by keeping WAL segments without limit. Removing the causal
> repslot will save this situation but it is not doab
Thank you for the opinion.
At Tue, 28 Feb 2017 12:42:32 +0900, Michael Paquier
wrote in
> Please no. Replication slots are designed the current way because we
> don't want to have to use something like wal_keep_segments as it is a
> wart, and this applies as well for replication slots in my opi
On Tue, Feb 28, 2017 at 12:27 PM, Kyotaro HORIGUCHI
wrote:
> Although a replication slot is helpful for avoiding unwanted WAL
> deletion, on the other hand it can cause a disastrous situation
> by keeping WAL segments without limit. Removing the causal
> repslot will save this situation but it is not
Hello.
Although a replication slot is helpful for avoiding unwanted WAL
deletion, on the other hand it can cause a disastrous situation
by keeping WAL segments without limit. Removing the causal
repslot would save this situation, but it is not doable if the
standby is active. We should do a rather compl
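The core idea of the patch this thread discusses, clamping how far back slots may hold the WAL-removal horizon, can be sketched as below. A minimal sketch under assumed names, not the patch's actual code:

```python
def removal_horizon(current_lsn: int, slot_restart_lsn: int, max_keep_bytes: int) -> int:
    """Oldest LSN that must be kept at checkpoint: the slot's request,
    clamped so at most max_keep_bytes of history is retained."""
    limit_lsn = current_lsn - max_keep_bytes
    return max(slot_restart_lsn, limit_lsn)

# Slot asks to keep from LSN 1000, but the cap allows only 500 bytes of
# history behind LSN 2000 -> WAL before 1500 may be removed, so the slot
# loses some required WAL (the "lost" situation warned about above).
print(removal_horizon(2000, 1000, 500))  # 1500
```

When the clamp wins, the slot is invalidated rather than allowed to fill the disk, which is exactly the trade-off debated in this thread.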
28 matches