On 2021-Sep-01, Michael Paquier wrote:
> That's about 515e3d8, right?
Yes.
> I have not looked in detail at what you have here, but this produces
> a compilation warning on Windows for me with this part of the patch:
This seems a tiny speck in a sea of bogosity. If you want to silence
the
On Tue, Aug 31, 2021 at 08:52:05PM -0400, alvhe...@alvh.no-ip.org wrote:
> Yeah, that's becoming my conclusion too -- undo that, and start from
> scratch using the other idea.
That's about 515e3d8, right? I have not looked in detail at what you
have here, but this produces a compilation warning
On 2021-Aug-31, Andres Freund wrote:
> Maybe, but this is getting uglier and uglier.
>
> I think the patch should be reverted. It's not in a state that's appropriate for
> the backbranches.
Yeah, that's becoming my conclusion too -- undo that, and start from
scratch using the other idea.
--
On 2021-08-31 23:31:15 +0000, Bossart, Nathan wrote:
> On 8/31/21, 1:30 PM, "Andres Freund" wrote:
> > On 2021-08-31 18:09:36 +0000, Bossart, Nathan wrote:
> >> What appears to happen in this case is that bgwriter eventually creates an
> >> xl_running_xacts record and nudges walwriter to flush it
Hi,
On 2021-08-31 18:09:36 +0000, Bossart, Nathan wrote:
> On 8/31/21, 10:21 AM, "Andres Freund" wrote:
> > What would trigger the flushing? We don't write out partially filled pages
> > unless
> > a) we're explicitly flushing an LSN on the partial page (e.g. because a
> >synchronous commit
On 8/31/21, 10:21 AM, "Andres Freund" wrote:
> What would trigger the flushing? We don't write out partially filled pages
> unless
> a) we're explicitly flushing an LSN on the partial page (e.g. because a
>synchronous commit record resides on it)
> b) there's an async commit (i.e. commit with
Hi,
On 2021-08-31 17:01:31 +0000, Bossart, Nathan wrote:
> > If the record ending at s4 + 10 isn't an async commit (and thus
> > XLogCtl->asyncXactLSN is smaller), and there are no further records, we can
> > end up waiting effectively forever for s2 (and s3) to be archived. If all other
On 8/31/21, 12:44 AM, "Andres Freund" wrote:
> If there's now a flush request including all of s3, we'll have the following
> sequence of notifies:
>
> NotifySegmentsReadyForArchive(s1)
> nothing happens, smaller than s1+10
>
> NotifySegmentsReadyForArchive(s2)
> earliestSegBoundary = s4
>
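The hazard Andres walks through above can be modeled in a minimal standalone sketch (illustrative names and numbers, not PostgreSQL's actual code): a single earliest-boundary slot guards notification, so if that boundary's record end lies far ahead, flushes that complete earlier segments notify nothing.

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;
typedef uint64_t XLogSegNo;

/* Hypothetical single-boundary state, mirroring earliestSegBoundary above. */
static XLogSegNo  earliest_boundary_seg = 0;   /* 0 = none registered */
static XLogRecPtr earliest_boundary_end = 0;   /* end LSN of the guarding record */
static XLogSegNo  last_notified_seg = 0;

/* Remember the earliest record that crosses into segment 'seg'. */
static void
register_boundary(XLogSegNo seg, XLogRecPtr rec_end)
{
	if (earliest_boundary_seg == 0 || seg < earliest_boundary_seg)
	{
		earliest_boundary_seg = seg;
		earliest_boundary_end = rec_end;
	}
}

/* Mark segments up to 'flushed_seg' as .ready, but only once the flush
 * has passed the guarding record's end; returns the highest .ready seg. */
static XLogSegNo
notify_segments(XLogSegNo flushed_seg, XLogRecPtr flushed_lsn)
{
	if (earliest_boundary_seg != 0 && flushed_lsn >= earliest_boundary_end)
		last_notified_seg = flushed_seg;
	return last_notified_seg;
}
```

With a boundary registered at s4 (record end "s4 + 10"), notifications for flushes completing s1..s3 do nothing; unless some later record forces a flush past the boundary's end, s2 and s3 wait effectively forever, which is exactly the scenario quoted above.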
Hi
On 2021-08-31 06:45:06 +0000, Bossart, Nathan wrote:
> On 8/30/21, 7:39 PM, "Andres Freund" wrote:
> > On 2021-08-30 22:39:04 +0000, Bossart, Nathan wrote:
> >> If we called NotifySegmentsReadyForArchive() before we updated the
> >> flush location in shared memory, we might skip nudging the
Hi,
On 2021-08-30 22:39:04 +0000, Bossart, Nathan wrote:
> On 8/30/21, 2:06 PM, "Andres Freund" wrote:
> > When were you thinking of doing the latch sets? Adding a latch set for every
> > XLogWrite() wouldn't be cheap either. Both because syscalls under a lock
> > aren't free and because waking
On 8/30/21, 3:40 PM, "Bossart, Nathan" wrote:
> On 8/30/21, 2:06 PM, "Andres Freund" wrote:
>> Although, the more I think about, the more I am confused about the trailing
>> if (XLogArchivingActive())
>> NotifySegmentsReadyForArchive(LogwrtResult.Flush);
>>
>> in
On 8/30/21, 2:06 PM, "Andres Freund" wrote:
> When were you thinking of doing the latch sets? Adding a latch set for every
> XLogWrite() wouldn't be cheap either. Both because syscalls under a lock
> aren't free and because waking up walsender even more often isn't free (we
> already have a few
Hi,
On 2021-08-30 15:51:54 -0400, alvhe...@alvh.no-ip.org wrote:
> On 2021-Aug-28, Andres Freund wrote:
>
> > While rebasing the aio patchset on top of HEAD I noticed that this commit added
> > another atomic operation to XLogWrite() with archiving enabled. The WAL write
> > path is
On 8/30/21, 12:52 PM, "alvhe...@alvh.no-ip.org" wrote:
> On 2021-Aug-28, Andres Freund wrote:
>
>> While rebasing the aio patchset on top of HEAD I noticed that this commit added
>> another atomic operation to XLogWrite() with archiving enabled. The WAL write
>> path is really quite hot, and
On 2021-Aug-28, Andres Freund wrote:
> While rebasing the aio patchset on top of HEAD I noticed that this commit added
> another atomic operation to XLogWrite() with archiving enabled. The WAL write
> path is really quite hot, and at least some of the
> NotifySegmentsReadyForArchive() calls are
Hi,
On 2021-08-23 15:55:03 -0400, alvhe...@alvh.no-ip.org wrote:
> On 2021-Aug-23, Bossart, Nathan wrote:
>
> > Ah, okay. BTW the other changes you mentioned made sense to me.
>
> Thanks. I've pushed this now to all live branches.
While rebasing the aio patchset on top of HEAD I noticed that
On 8/25/21, 11:01 AM, "Fujii Masao" wrote:
> If LogwrtResult.Flush >= EndPos, that means that another process already
> has flushed the record concurrently and updated XLogCtl->LogwrtResult.Flush.
> This situation also means that another process called
>
On 2021/08/24 4:55, alvhe...@alvh.no-ip.org wrote:
On 2021-Aug-23, Bossart, Nathan wrote:
Ah, okay. BTW the other changes you mentioned made sense to me.
Thanks. I've pushed this now to all live branches.
Thanks a lot!
+ /*
+* There's a chance that the
On 2021-Aug-23, Bossart, Nathan wrote:
> On 8/23/21, 12:55 PM, "alvhe...@alvh.no-ip.org"
> wrote:
> > Thanks. I've pushed this now to all live branches.
>
> Thank you! I appreciate the thorough reviews. Should we make a new
> thread for the streaming replication fix?
Yeah, this one is long
On 8/23/21, 12:55 PM, "alvhe...@alvh.no-ip.org" wrote:
> Thanks. I've pushed this now to all live branches.
Thank you! I appreciate the thorough reviews. Should we make a new
thread for the streaming replication fix?
Nathan
On 2021-Aug-23, Bossart, Nathan wrote:
> Ah, okay. BTW the other changes you mentioned made sense to me.
Thanks. I've pushed this now to all live branches.
--
Álvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/
It does it in a really, really complicated way
why
On 8/23/21, 10:31 AM, "alvhe...@alvh.no-ip.org" wrote:
> On 2021-Aug-23, alvhe...@alvh.no-ip.org wrote:
>
>> The only way .ready files are created is that XLogNotifyWrite() is
>> called. For regular WAL files during regular operation, that only
>> happens in XLogNotifyWriteSeg(). That, in turn,
On 2021-Aug-23, alvhe...@alvh.no-ip.org wrote:
> The only way .ready files are created is that XLogNotifyWrite() is
> called. For regular WAL files during regular operation, that only
> happens in XLogNotifyWriteSeg(). That, in turn, only happens in
> NotifySegmentsReadyForArchive(). But if
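The call chain Álvaro traces above can be sketched with stubs (function names taken from the quoted text; bodies are simplified placeholders, not the real implementations):

```c
#include <assert.h>

/* Stubbed call chain: .ready files are only ever produced at the bottom. */
static int ready_files_created = 0;

static void
XLogNotifyWrite(void)
{
	ready_files_created++;		/* would create the segment's .ready file */
}

static void
XLogNotifyWriteSeg(void)
{
	XLogNotifyWrite();			/* sole caller for regular WAL segments */
}

static void
NotifySegmentsReadyForArchive(int nsegs)
{
	for (int i = 0; i < nsegs; i++)
		XLogNotifyWriteSeg();	/* sole caller of XLogNotifyWriteSeg() */
}
```

The consequence discussed in the surrounding messages follows directly from this shape: if NotifySegmentsReadyForArchive() is never entered (for example, because no boundary was ever registered), no .ready file is ever created and archiving does nothing.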
On 2021-Aug-23, Bossart, Nathan wrote:
> Sorry, I'm still not following this one. If we skipped creating
> .ready segments due to a crash, we rely on RemoveOldXlogFiles() to
> create them as needed in the end-of-recovery checkpoint. If a record
> fits perfectly in the end of a segment, we'll
On 8/23/21, 9:33 AM, "alvhe...@alvh.no-ip.org" wrote:
> On 2021-Aug-23, Bossart, Nathan wrote:
>
>> Hm. My expectation would be that if there are no registrations, we
>> cannot create .ready files for the flushed segments. The scenario
>> where I can see that happening is when a record gets
On 2021-Aug-23, Bossart, Nathan wrote:
> Hm. My expectation would be that if there are no registrations, we
> cannot create .ready files for the flushed segments. The scenario
> where I can see that happening is when a record gets flushed to disk
> prior to registration. In that case, we'll
On 8/23/21, 8:50 AM, "alvhe...@alvh.no-ip.org" wrote:
> ... while reading the resulting code after backpatching to all branches,
> I realized that if there are no registrations whatsoever, then archiving
> won't do anything, which surely is the wrong thing to do. The correct
> behavior should be
On 2021-Aug-21, Bossart, Nathan wrote:
> > Well, the thing I realized is that these three helper functions have
> > exactly one caller each. I think the compiler is going to inline them,
> > so there isn't going to be a function call in the assembly. I haven't
> > verified this, though.
>
>
On 8/20/21, 4:52 PM, "alvhe...@alvh.no-ip.org" wrote:
> On 2021-Aug-20, Bossart, Nathan wrote:
>
>> I was looking at moving the function calls out of the spinlock region.
>> I don't think the functions are doing anything too expensive, and they
>> help clean up NotifySegmentsReadyForArchive()
On 2021-Aug-20, Bossart, Nathan wrote:
> I was looking at moving the function calls out of the spinlock region.
> I don't think the functions are doing anything too expensive, and they
> help clean up NotifySegmentsReadyForArchive() quite a bit, but I
> understand why it might be against project
On 8/20/21, 4:00 PM, "alvhe...@alvh.no-ip.org" wrote:
> Attached is v14 which uses a separate spinlock.
This looks good to me.
I was looking at moving the function calls out of the spinlock region.
I don't think the functions are doing anything too expensive, and they
help clean up
Attached is v14 which uses a separate spinlock.
--
Álvaro Herrera 39°49'30"S 73°17'W — https://www.EnterpriseDB.com/
"I don't remember, but it's not true. It's not true, and if it were
true, I don't remember." (Augusto Pinochet to a court of justice)
From
On 2021-Aug-20, Bossart, Nathan wrote:
> > On Fri, Aug 20, 2021 at 1:29 PM Bossart, Nathan wrote:
> >> This led me to revisit the two-element
> >> approach that was discussed upthread. What if we only stored the
> >> earliest and latest segment boundaries at any given time? Once the
> >>
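The two-element idea floated above can be sketched as follows (hypothetical names, not the committed code): keep only the earliest and latest boundaries, overwriting the latest when the structure is full.

```c
#include <assert.h>
#include <stdint.h>

typedef struct SegBoundary
{
	uint64_t	seg;		/* segment the record crosses into; 0 = unset */
	uint64_t	rec_end;	/* end LSN of that record */
} SegBoundary;

static SegBoundary earliest;
static SegBoundary latest;

/* Remember a new boundary, keeping at most two: the earliest one (which
 * guards already-pending segments) and the most recent one. */
static void
register_boundary2(uint64_t seg, uint64_t rec_end)
{
	if (earliest.seg == 0)
		earliest = (SegBoundary) {seg, rec_end};
	else if (seg > latest.seg)
		latest = (SegBoundary) {seg, rec_end};	/* drops any boundary in between */
}

/* Called once the flush passes earliest.rec_end: promote the latest. */
static void
consume_earliest(void)
{
	earliest = latest;
	latest = (SegBoundary) {0, 0};
}
```

Registering boundaries for segments 3, 5, and 9 keeps only 3 and 9; segments behind the dropped boundary at 5 simply wait until the record ending in segment 9 is flushed. Notification can therefore be delayed, but never happens too early, which is the safe direction for archiving.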
On Fri, Aug 20, 2021 at 1:52 PM alvhe...@alvh.no-ip.org
wrote:
> Actually, you were right. Hash tables in shared memory can be expanded.
> There are some limitations (the hash "directory" is fixed size, which
> means the hash table gets less efficient if it grows too much), but you
> can
On Fri, Aug 20, 2021 at 1:29 PM Bossart, Nathan wrote:
> Thinking about this stuff further, I was wondering if one way to
> handle the bounded shared hash table problem would be to replace the
> latest boundary in the map whenever it was full. But at that point,
> do we even need a hash table?
On 2021-Aug-20, Bossart, Nathan wrote:
> On 8/20/21, 8:29 AM, "Robert Haas" wrote:
> > We can't expand the hash table either. It has an initial and maximum
> > size of 16 elements, which means it's basically an expensive array,
> > and which also means that it imposes a new limit of 16 *
> >
On 8/20/21, 10:08 AM, "Robert Haas" wrote:
> On Fri, Aug 20, 2021 at 12:36 PM Bossart, Nathan wrote:
>> If a record spans multiple segments, we only register one segment
>> boundary. For example, if I insert a record that starts at segment
>> number 1 and stops at 10, I'll insert one segment
On Fri, Aug 20, 2021 at 12:36 PM Bossart, Nathan wrote:
> If a record spans multiple segments, we only register one segment
> boundary. For example, if I insert a record that starts at segment
> number 1 and stops at 10, I'll insert one segment boundary for segment
> 10. We'll only create
On 8/20/21, 8:29 AM, "Robert Haas" wrote:
> On Fri, Aug 20, 2021 at 10:50 AM alvhe...@alvh.no-ip.org
> wrote:
>> 1. We use a hash table in shared memory. That's great. The part that's
>>not so great is that in both places where we read items from it, we
>>have to iterate in some way.
On Fri, Aug 20, 2021 at 10:50 AM alvhe...@alvh.no-ip.org
wrote:
> 1. We use a hash table in shared memory. That's great. The part that's
>not so great is that in both places where we read items from it, we
>have to iterate in some way. This seems a bit silly. An array would
>serve
Two things.
1. We use a hash table in shared memory. That's great. The part that's
not so great is that in both places where we read items from it, we
have to iterate in some way. This seems a bit silly. An array would
serve us better, if only we could expand it as needed. However,
In v12 I moved the code around a bit and reworded some comments.
--
Álvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/
From 7d1475578431e265a5e7f8b94a6b0791b68a9a31 Mon Sep 17 00:00:00 2001
From: Nathan Bossart
Date: Tue, 17 Aug 2021 03:52:43 +0000
Subject: [PATCH
On 2021-Aug-18, Bossart, Nathan wrote:
> As long as XLogBackgroundFlush() found work to do, it would take care
> of notifying, but I don't think we can depend on that. However, since
> we're primarily using the WAL writer to take care of the case when the
> record has already been flushed,
On 2021-Aug-18, Bossart, Nathan wrote:
> I'll add it after XLogBackgroundFlush().
I was wondering which would be better: before or after.
XLogBackgroundFlush would do it anyway, so if you do it after then it's
not clear to me that it'd do anything (I mean we should not do any new
calls of
On 8/18/21, 10:48 AM, "alvhe...@alvh.no-ip.org" wrote:
> On 2021-Aug-18, alvhe...@alvh.no-ip.org wrote:
>
>> I realize this means there's a contradiction with my previous argument,
>> in that synchronous transaction commit calls XLogWrite at some point, so
>> we *are* putting the client-connected
On 2021-Aug-18, alvhe...@alvh.no-ip.org wrote:
> I realize this means there's a contradiction with my previous argument,
> in that synchronous transaction commit calls XLogWrite at some point, so
> we *are* putting the client-connected backend in charge of creating the
> notify files. However,
On 2021-Aug-18, Bossart, Nathan wrote:
> On 8/18/21, 10:06 AM, "alvhe...@alvh.no-ip.org"
> wrote:
> > So that comment suggests that we should give the responsibility to bgwriter.
> > This seems good enough to me. I suppose if bgwriter has a long run of
> > buffers to write it could take a
On 8/18/21, 10:06 AM, "alvhe...@alvh.no-ip.org" wrote:
> So that comment suggests that we should give the responsibility to bgwriter.
> This seems good enough to me. I suppose if bgwriter has a long run of
> buffers to write it could take a little bit of time (a few hundred
> milliseconds?) but
On 2021-Aug-17, alvhe...@alvh.no-ip.org wrote:
> However, why do it in a WAL-producing client-connected backend? It
> strikes me as a bad thing to do, because you are possibly causing delays
> for client-connected backends. I suggest that we should give this task
> to the WAL writer process --
On 2021-Aug-17, Bossart, Nathan wrote:
> On 8/17/21, 2:13 PM, "alvhe...@alvh.no-ip.org"
> wrote:
>
> > So, why isn't it that we call Register in XLogInsertRecord, and
> > Notify in XLogWrite?
>
> We do. However, we also call NotifySegmentsReadyForArchive() in
> XLogInsertRecord() to handle the
On 8/17/21, 2:13 PM, "alvhe...@alvh.no-ip.org" wrote:
> On 2021-Aug-17, Bossart, Nathan wrote:
>
>> The main reason for registering the boundaries in XLogInsertRecord()
>> is that it has the required information about the WAL record
>> boundaries. I do not think that XLogWrite() has this
On 2021-Aug-17, Bossart, Nathan wrote:
> The main reason for registering the boundaries in XLogInsertRecord()
> is that it has the required information about the WAL record
> boundaries. I do not think that XLogWrite() has this information.
Doh, of course. So, why isn't it that we call
The thing I still don't understand about this patch is why we call
RegisterSegmentBoundaryEntry and NotifySegmentsReadyForArchive in
XLogInsertRecord.
My model concept of this would have these routines called only in
XLogFlush / XLogWrite, which are conceptually about transferring data
from
On 2021-Aug-17, Bossart, Nathan wrote:
> I think we are in agreement. If we assume that the flush pointer
> jumps along record boundaries and segment boundaries, the solution
> would be to avoid using the flush pointer when it points to a segment
> boundary (given that the segment boundary is
On 8/17/21, 10:44 AM, "alvhe...@alvh.no-ip.org" wrote:
> On 2021-Aug-17, Bossart, Nathan wrote:
>> I've been thinking about the next steps for this one, too. ISTM we'll
>> need to basically assume that the flush pointer jumps along record
>> boundaries except for the cross-segment records. I
On 2021-Aug-17, Bossart, Nathan wrote:
> On 8/16/21, 5:09 PM, "alvhe...@alvh.no-ip.org"
> wrote:
> > The reason for the latter is that I suspect the segment boundary map
> > will also have a use to fix the streaming replication issue I mentioned
> > earlier in the thread. This also makes me
So why do we call this structure "record boundary map", when the
boundaries it refers to are segment boundaries? I think we should call
it "segment boundary map" instead ... and also I think we should use
that name in the lock that protects it, so instead of ArchNotifyLock, it
could be
At Fri, 6 Aug 2021 00:21:34 +0000, "Bossart, Nathan"
wrote in
> On 8/5/21, 12:39 AM, "Kyotaro Horiguchi" wrote:
> > Honestly I don't like having this initialization in XLogWrite. We
> > should and I think can initialize it earlier. It seems to me the most
> > appropriate timing to initialize
At Thu, 5 Aug 2021 05:15:04 +0000, "Bossart, Nathan"
wrote in
> On 8/4/21, 9:05 PM, "Kyotaro Horiguchi" wrote:
> > By the way about the v3 patch,
> >
> > +#define InvalidXLogSegNo ((XLogSegNo) 0xFFFFFFFFFFFFFFFF)
> >
> > Like InvalidXLogRecPtr, the first valid segment number is 1 so we
On 8/4/21, 9:05 PM, "Kyotaro Horiguchi" wrote:
> By the way about the v3 patch,
>
> +#define InvalidXLogSegNo ((XLogSegNo) 0xFFFFFFFFFFFFFFFF)
>
> Like InvalidXLogRecPtr, the first valid segment number is 1 so we can
> use 0 as InvalidXLogSegNo.
It's been a while since I wrote this, but if
By the way about the v3 patch,
+#define InvalidXLogSegNo ((XLogSegNo) 0xFFFFFFFFFFFFFFFF)
Like InvalidXLogRecPtr, the first valid segment number is 1 so we can
use 0 as InvalidXLogSegNo.
BootStrapXLOG():
/* Create first XLOG segment file */
openLogFile = XLogFileInit(1);
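Horiguchi-san's suggestion in compilable form (a sketch; XLogSegNo is a 64-bit integer in PostgreSQL, and the helper function here is illustrative, not an existing API): since BootStrapXLOG() creates segment 1 as the first segment, 0 can play the same role for segment numbers that InvalidXLogRecPtr (also defined as 0) plays for LSNs.

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t XLogSegNo;

/* The first valid segment number is 1, so 0 is free to mean "invalid",
 * mirroring InvalidXLogRecPtr. */
#define InvalidXLogSegNo	((XLogSegNo) 0)

/* Illustrative helper, not a real PostgreSQL function. */
static int
XLogSegNoIsInvalid(XLogSegNo segno)
{
	return segno == InvalidXLogSegNo;
}
```

Besides matching the LSN convention, 0 also zero-initializes naturally in shared memory, whereas an all-ones sentinel has to be set explicitly.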
At Tue, 3 Aug 2021 21:32:18 +0000, "Bossart, Nathan"
wrote in
> On 8/2/21, 7:37 PM, "Kyotaro Horiguchi" wrote:
> > I'm afraid that using hash to store boundary info is too much. Isn't a
> > ring buffer enough for this use? In that case it is enough to
> > remember only the end LSN of the
On 8/2/21, 7:37 PM, "Kyotaro Horiguchi" wrote:
> I'm afraid that using hash to store boundary info is too much. Isn't a
> ring buffer enough for this use? In that case it is enough to
> remember only the end LSN of the segment spanning records. It is easy
> to expand the buffer if needed.
I
At Mon, 2 Aug 2021 23:28:19 +0000, "Bossart, Nathan"
wrote in
> On 8/2/21, 2:42 PM, "Alvaro Herrera" wrote:
> > I think it's okay to make lastNotifiedSeg protected by just info_lck,
> > and RecordBoundaryMap protected by just ArchNotifyLock. It's okay to
> > acquire the spinlock inside the
On 8/2/21, 2:42 PM, "Alvaro Herrera" wrote:
> I think it's okay to make lastNotifiedSeg protected by just info_lck,
> and RecordBoundaryMap protected by just ArchNotifyLock. It's okay to
> acquire the spinlock inside the lwlock-protected area, as long as we
> make sure never to do the opposite.
On 2021-Jul-31, Bossart, Nathan wrote:
> This is why I was trying to get away with just using info_lck for
> reading lastNotifiedSeg. ArchNotifyLock is mostly intended to protect
> RecordBoundaryMap. However, since lastNotifiedSeg is used in
> functions like GetLatestRecordBoundarySegment()
On 7/31/21, 9:12 AM, "Alvaro Herrera" wrote:
> On 2021-Jul-31, Bossart, Nathan wrote:
>
>> On 7/30/21, 4:52 PM, "Alvaro Herrera" wrote:
>
>> > I noticed that XLogCtl->lastNotifiedSeg is protected by both the
>> > info_lck and ArchNotifyLock. I think if it's going to be protected by
>> > the
On 2021-Jul-31, Bossart, Nathan wrote:
> On 7/30/21, 4:52 PM, "Alvaro Herrera" wrote:
> > I noticed that XLogCtl->lastNotifiedSeg is protected by both the
> > info_lck and ArchNotifyLock. I think if it's going to be protected by
> > the lwlock, then we should drop the use of the spinlock.
>
>
On 7/30/21, 4:52 PM, "Alvaro Herrera" wrote:
> I think that creating files out of order might be problematic. On the
> archiver side, pgarch_readyXlog() expects to return the oldest
> archivable file; but if we create a newer segment's .ready file first,
> it is possible that a directory scan
On 2021-Jul-30, Alvaro Herrera wrote:
> We set archiver's latch on each XLogArchiveNotify(), but if we're doing
> it in a loop such as in NotifySegmentsReadyForArchive() perhaps it is
> better to create all the .ready files first and do PgArchWakeup() at the
> end. I'm not convinced that this is
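The batching alternative Álvaro floats can be sketched as follows (stub functions only; the real XLogArchiveNotify() creates a .ready file and PgArchWakeup() sets the archiver's latch):

```c
#include <assert.h>
#include <stdint.h>

static int	wakeups = 0;
static int	ready_files = 0;

static void
XLogArchiveNotifyStub(uint64_t segno)
{
	(void) segno;
	ready_files++;		/* would create this segment's .ready file */
}

static void
PgArchWakeupStub(void)
{
	wakeups++;			/* would set the archiver's latch */
}

/* Create every pending .ready file first, then wake the archiver once,
 * rather than setting its latch once per file inside the loop. */
static void
notify_segments_batched(uint64_t from_seg, uint64_t to_seg)
{
	for (uint64_t seg = from_seg; seg <= to_seg; seg++)
		XLogArchiveNotifyStub(seg);
	PgArchWakeupStub();
}
```

A single wakeup at the end is sufficient because the archiver scans for all .ready files when it runs; whether the extra latch sets are actually expensive enough to matter is the open question in the message above.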
On 2021-Jul-30, Bossart, Nathan wrote:
> Yes, that was what I was worried about. However, I just performed a
> variety of tests with just 0001 applied, and I am beginning to suspect
> my concerns were unfounded. With wal_buffers set very high,
> synchronous_commit set to off, and a long sleep
On 7/30/21, 3:23 PM, "Alvaro Herrera" wrote:
> That's great to hear. I'll give 0001 a look again.
Much appreciated.
Nathan
On 2021-Jul-30, Bossart, Nathan wrote:
> On 7/30/21, 11:34 AM, "Alvaro Herrera" wrote:
> > Hmm ... I'm not sure we're prepared to backpatch this kind of change.
> > It seems a bit too disruptive to how replay works. I think we
> > should be focusing solely on patch 0001 to surgically fix
On 7/30/21, 11:34 AM, "Alvaro Herrera" wrote:
> Hmm ... I'm not sure we're prepared to backpatch this kind of change.
> It seems a bit too disruptive to how replay works. I think we
> should be focusing solely on patch 0001 to surgically fix the precise
> bug you see. Does patch 0002
On 2021-Jul-28, Bossart, Nathan wrote:
> On 7/27/21, 6:05 PM, "Alvaro Herrera" wrote:
> > I'm not sure I understand what's the reason not to store this value in
> > pg_control; I feel like I'm missing something. Can you please explain?
>
> Thanks for taking a look.
>
> The only reason I can
On 7/27/21, 6:05 PM, "Alvaro Herrera" wrote:
> On 2021-Feb-19, Bossart, Nathan wrote:
>
>> 0002 adds logic for persisting the last notified segment through
>> crashes. This is needed because a poorly-timed crash could otherwise
>> cause us to skip marking segments as ready-for-archival
On 2021-Feb-19, Bossart, Nathan wrote:
> 0002 adds logic for persisting the last notified segment through
> crashes. This is needed because a poorly-timed crash could otherwise
> cause us to skip marking segments as ready-for-archival altogether.
> This file is only used for primary servers, as
On 1/26/21, 6:36 PM, "Kyotaro Horiguchi" wrote:
> At Tue, 26 Jan 2021 19:13:57 +0000, "Bossart, Nathan"
> wrote in
>> On 12/17/20, 9:15 PM, "Kyotaro Horiguchi" wrote:
>> > At Thu, 17 Dec 2020 22:20:35 +0000, "Bossart, Nathan"
>> > wrote in
>> >> On 12/15/20, 2:33 AM, "Kyotaro Horiguchi"
At Tue, 26 Jan 2021 19:13:57 +0000, "Bossart, Nathan"
wrote in
> On 12/17/20, 9:15 PM, "Kyotaro Horiguchi" wrote:
> > At Thu, 17 Dec 2020 22:20:35 +0000, "Bossart, Nathan"
> > wrote in
> >> On 12/15/20, 2:33 AM, "Kyotaro Horiguchi" wrote:
> >> > You're right in that regard. There's a window
On 1/2/21, 8:55 AM, "Andrey Borodin" wrote:
> Recently we had somewhat related incident. Do I understand correctly that
> this incident is related to the bug discussed in this thread?
I'm not sure that we can rule it out, but the log pattern I've
typically seen for this is "invalid contrecord
On 12/17/20, 9:15 PM, "Kyotaro Horiguchi" wrote:
> At Thu, 17 Dec 2020 22:20:35 +0000, "Bossart, Nathan"
> wrote in
>> On 12/15/20, 2:33 AM, "Kyotaro Horiguchi" wrote:
>> > You're right in that regard. There's a window where partial record is
>> > written when write location passes F0 after
Hi!
Thanks for working on this.
> On 18 Dec 2020, at 10:42, Kyotaro Horiguchi
> wrote:
>
> I noticed that we can cause the continuation record flushed
> immedately.
I've taken a look at the code and want to share some thoughts.
1. Maybe we could tend to avoid interlacing field
Hi!
I was looking to review something in CF. This seems like a thread of some
interest to me.
Recently we had somewhat related incident. Do I understand correctly that this
incident is related to the bug discussed in this thread?
Primary instance was killed by OOM
[ 2020-11-12 15:27:03.732
At Wed, 16 Dec 2020 11:01:20 +0900 (JST), Kyotaro Horiguchi
wrote in
> - Record the beginning LSN of the first cross-seg record and the end
> LSN of the last cross-seg record in a run of consecutive segments bounded by
> cross-seg records. Specifically X and Y below.
>
>X Z
At Thu, 17 Dec 2020 22:20:35 +0000, "Bossart, Nathan"
wrote in
> On 12/15/20, 2:33 AM, "Kyotaro Horiguchi" wrote:
> > You're right in that regard. There's a window where partial record is
> > written when write location passes F0 after insertion location passes
> > F1. However, remembering all
On 12/15/20, 2:33 AM, "Kyotaro Horiguchi" wrote:
> You're right in that regard. There's a window where partial record is
> written when write location passes F0 after insertion location passes
> F1. However, remembering all spanning records seems like overkill to me.
I'm curious why you feel that
At Tue, 15 Dec 2020 19:32:57 +0900 (JST), Kyotaro Horiguchi
wrote in
> At Mon, 14 Dec 2020 18:25:23 +, "Bossart, Nathan"
> wrote in
> > I wonder if these are safe assumptions to make. For your example, if
> > we've written record B to the WAL buffers, but neither record A nor B
> > have
At Mon, 14 Dec 2020 18:25:23 +0000, "Bossart, Nathan"
wrote in
> Apologies for the long delay.
>
> I've spent a good amount of time thinking about this bug and trying
> out a few different approaches for fixing it. I've attached a work-
> in-progress patch for my latest attempt.
>
> On
On 14.10.2020 03:06, Kyotaro Horiguchi wrote:
The patch includes a fix for primary->standby case. But I'm not sure
we can do that in the cascaded case. A standby is not aware of the
structure of a WAL blob and has no idea up to where to send the
received blobs. However, if we can rely on the
Thanks for visiting this thread.
At Mon, 12 Oct 2020 15:04:40 +0300, Heikki Linnakangas wrote in
> On 07/07/2020 12:02, matsumura@fujitsu.com wrote:
> > At Monday, July 6, 2020 05:13:40 +0000, "Kyotaro Horiguchi" wrote in
> after WAL buffer is filled up to the requested position.
On 07/07/2020 12:02, matsumura@fujitsu.com wrote:
At Monday, July 6, 2020 05:13:40 +0000, "Kyotaro Horiguchi" wrote in
after the WAL buffer is filled up to the requested position. So when it
crosses a segment boundary we know that all past cross-segment-boundary
records are stable. That means
On Wed, Jul 22, 2020 at 02:53:49AM +0000, matsumura@fujitsu.com wrote:
> Then, Record-A may be invalidated by crash recovery and overwritten by a new
> WAL record.
> The segment-X is not same as the archived one.
Please note that the latest patch fails to apply per the CF bot, so a
rebase
Hello,
> At Mon, 13 Jul 2020 01:57:36 +0000, "Kyotaro Horiguchi" wrote in
> Am I missing something here?
I will explain in more detail (*).
Record-A and Record-B are cross segment-border records.
Record-A spans segment X and X+1.
Record-B spans segment X+2 and X+3.
If both records have been
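Matsumura-san's scenario can be modeled concretely (illustrative types and numbers, not the patch's code): a segment must not be archived while an unflushed record still spans out of it, because a crash could rewrite that record and leave the archived copy differing from the regenerated WAL.

```c
#include <assert.h>
#include <stdint.h>

typedef struct CrossRec
{
	uint64_t	start_seg;	/* segment where the record begins */
	uint64_t	end_seg;	/* segment where the record ends */
	int			flushed;	/* is the whole record durably on disk? */
} CrossRec;

/* Segment 'seg' is archivable only if no unflushed record spans past it. */
static int
seg_archivable(uint64_t seg, const CrossRec *recs, int n)
{
	for (int i = 0; i < n; i++)
	{
		if (!recs[i].flushed &&
			recs[i].start_seg <= seg && seg < recs[i].end_seg)
			return 0;
	}
	return 1;
}
```

With Record-A spanning segments X..X+1 and still unflushed, segment X must wait; a flushed Record-B spanning X+2..X+3 does not block segment X+2.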
Hello.
# Sorry, I wrongly thought that I replied to this thread..
At Tue, 7 Jul 2020 09:02:56 +0000, "matsumura@fujitsu.com"
wrote in
> At Monday, July 6, 2020 05:13:40 +0000, "Kyotaro Horiguchi" wrote in
> > > > after WAL buffer is filled up to the requested position. So when it
> >
Hello, Horiguchi-san
At Monday, July 6, 2020 05:13:40 +0000, "Kyotaro Horiguchi" wrote in
> > > after the WAL buffer is filled up to the requested position. So when it
> > > crosses a segment boundary we know that all past cross-segment-boundary
> > > records are stable. That means all we need to
Hello, Matsumura-san.
At Mon, 6 Jul 2020 04:02:23 +0000, "matsumura@fujitsu.com"
wrote in
> Hello, Horiguchi-san
>
> Thank you for your comment and patch.
>
> At Thursday, June 25, 2020 3:36 PM (JST), "Kyotaro Horiguchi" wrote in
> > I think we don't need most of that shmem stuff.
Hello, Horiguchi-san
Thank you for your comment and patch.
At Thursday, June 25, 2020 3:36 PM (JST), "Kyotaro Horiguchi" wrote in
> I think we don't need most of that shmem stuff. XLogWrite is called
I wanted to avoid more shmem stuff too, but the other ideas need more
locking that excludes inserters and
Hello. Matsumura-san.
I agree that the WAL writer is not the place to notify segments. And the
direction you suggested would work.
At Fri, 19 Jun 2020 10:18:34 +0000, "matsumura@fujitsu.com"
wrote in
> 1. Description in primary side
>
> [Basic problem]
> A process flushing WAL record
> On 5/28/20, 11:42 PM, "matsumura@fujitsu.com"
> wrote:
> > I'm preparing a patch in which a backend inserting a segment-crossing
> > WAL record leaves its EndRecPtr and someone flushing it checks
> > the EndRecPtr and notifies..
I'm sorry for my slow work.
I attach a patch.
I also attach a
On 5/28/20, 11:42 PM, "matsumura@fujitsu.com"
wrote:
> I'm preparing a patch in which a backend inserting a segment-crossing
> WAL record leaves its EndRecPtr and someone flushing it checks
> the EndRecPtr and notifies..
Thank you for sharing your thoughts. I will be happy to take a look
at