On Fri, Dec 20, 2019 at 04:30:38PM +0100, Antonin Houska wrote:
> I wanted to register the patch for the next CF so it's not forgotten, but see
> it's already there. Why have you set the status to "withdrawn"?
Because my patch was incorrect, and I did not have enough bandwidth to
think more on

Antonin Houska wrote:
> Michael Paquier wrote:
>
> > On Mon, Nov 11, 2019 at 10:03:14AM +0100, Antonin Houska wrote:
> > > This looks good to me.
> >
> > Actually, no, this is not good. I have been studying more the patch,
> > and after stressing more this code path with a cluster having
> >

Michael Paquier wrote:
> On Mon, Nov 11, 2019 at 10:03:14AM +0100, Antonin Houska wrote:
> > This looks good to me.
>
> Actually, no, this is not good. I have been studying more the patch,
> and after stressing more this code path with a cluster having
> checksums enabled and shared_buffers at

At Thu, 14 Nov 2019 12:01:29 +0900, Michael Paquier wrote in
> On Wed, Nov 13, 2019 at 09:17:03PM +0900, Michael Paquier wrote:
> > Actually, no, this is not good. I have been studying more the patch,
> > and after stressing more this code path with a cluster having
> > checksums enabled and

On Wed, Nov 13, 2019 at 09:17:03PM +0900, Michael Paquier wrote:
> Actually, no, this is not good. I have been studying more the patch,
> and after stressing more this code path with a cluster having
> checksums enabled and shared_buffers at 1MB, I have been able to make
> a couple of pages' LSNs

On Mon, Nov 11, 2019 at 10:03:14AM +0100, Antonin Houska wrote:
> This looks good to me.
Actually, no, this is not good. I have been studying more the patch,
and after stressing more this code path with a cluster having
checksums enabled and shared_buffers at 1MB, I have been able to make
a

Kyotaro Horiguchi wrote:
> At Mon, 11 Nov 2019 10:03:14 +0100, Antonin Houska wrote in
> > Michael Paquier wrote:
> > > Does something like the attached patch make sense? Reviews are
> > > welcome.
> >
> > This looks good to me.
>
> I have a question.
>
> The current code assumes that

At Mon, 11 Nov 2019 10:03:14 +0100, Antonin Houska wrote in
> Michael Paquier wrote:
> > Does something like the attached patch make sense? Reviews are
> > welcome.
>
> This looks good to me.
I have a question.
The current code assumes that !BM_DIRTY means that the function is
dirtying the

Michael Paquier wrote:
> Does something like the attached patch make sense? Reviews are
> welcome.
This looks good to me.
--
Antonin Houska
Web: https://www.cybertec-postgresql.com

On Thu, Oct 31, 2019 at 09:43:47AM +0100, Antonin Houska wrote:
> Tomas Vondra wrote:
>> Isn't this prevented by locking of the buffer header? Both FlushBuffer
>> and MarkBufferDirtyHint do obtain that lock. I see MarkBufferDirtyHint
>> does a bit of work before, but that's related to

Robert Haas wrote:
> On Wed, Oct 30, 2019 at 9:43 AM Antonin Houska wrote:
> > 5. In the first session, FlushBuffer()->TerminateBufferIO() will not clear
> > BM_DIRTY because MarkBufferDirtyHint() has eventually set
> > BM_JUST_DIRTIED. Thus the hint bit change itself will be written by the

On Wed, Oct 30, 2019 at 9:43 AM Antonin Houska wrote:
> 5. In the first session, FlushBuffer()->TerminateBufferIO() will not clear
> BM_DIRTY because MarkBufferDirtyHint() has eventually set
> BM_JUST_DIRTIED. Thus the hint bit change itself will be written by the next
> call of FlushBuffer().

Tomas Vondra wrote:
> On Wed, Oct 30, 2019 at 02:44:18PM +0100, Antonin Houska wrote:
> >Please consider this scenario (race conditions):
> >
> >1. FlushBuffer() has written the buffer but hasn't yet managed to clear the
> >BM_DIRTY flag (however BM_JUST_DIRTIED could be cleared by now).
> >
>

On Wed, Oct 30, 2019 at 02:44:18PM +0100, Antonin Houska wrote:
Please consider this scenario (race conditions):
1. FlushBuffer() has written the buffer but hasn't yet managed to clear the
BM_DIRTY flag (however BM_JUST_DIRTIED could be cleared by now).
2. Another backend modified a hint bit