On Mon, Apr 1, 2019 at 7:48 PM Thomas Munro wrote:
> On Sat, Mar 30, 2019 at 11:18 AM Jerry Jelinek
> wrote:
> > I went through your new version of the patch and it all looks great to
> me.
>
> I moved the error handling logic around a bit so we'd capture errno
> immediately after the syscalls.
On Sat, Mar 30, 2019 at 11:18 AM Jerry Jelinek wrote:
> I went through your new version of the patch and it all looks great to me.
I moved the error handling logic around a bit so we'd capture errno
immediately after the syscalls. I also made a couple of further
tweaks to comments and removed
On Fri, Mar 29, 2019 at 01:09:46PM +1300, Thomas Munro wrote:
...
I still don't know why exactly this happens, but it's clearly a real
phenomenon. As for why Tomas Vondra couldn't see it, I'm guessing
that stacks more RAM and ~500k IOPS help a lot (essentially the
opposite end of the memory,
On Thu, Mar 28, 2019 at 6:10 PM Thomas Munro wrote:
> On Fri, Mar 29, 2019 at 10:47 AM Thomas Munro
> wrote:
> > On Fri, Mar 29, 2019 at 8:59 AM Robert Haas
> wrote:
> > > On Tue, Mar 26, 2019 at 3:24 PM Jerry Jelinek <
> jerry.jeli...@joyent.com> wrote:
> > > > The latest patch is rebased,
On 2019-03-29 01:09, Thomas Munro wrote:
>> I would like to fix these problems and commit the patch. First, I'm
>> going to go and do some project-style tidying, write some proposed doc
>> tweaks, and retest these switches on the machine where I saw
>> beneficial effects from the patch before.
On Fri, Mar 29, 2019 at 10:47 AM Thomas Munro wrote:
> On Fri, Mar 29, 2019 at 8:59 AM Robert Haas wrote:
> > On Tue, Mar 26, 2019 at 3:24 PM Jerry Jelinek
> > wrote:
> > > The latest patch is rebased, builds clean, and passes some basic testing.
> > > Please let me know if there is anything
On Fri, Mar 29, 2019 at 8:59 AM Robert Haas wrote:
> On Tue, Mar 26, 2019 at 3:24 PM Jerry Jelinek
> wrote:
> > The latest patch is rebased, builds clean, and passes some basic testing.
> > Please let me know if there is anything else I could do on this.
>
> I agree with Thomas Munro's earlier
On Tue, Mar 26, 2019 at 3:24 PM Jerry Jelinek wrote:
> The latest patch is rebased, builds clean, and passes some basic testing.
> Please let me know if there is anything else I could do on this.
I agree with Thomas Munro's earlier critique of the documentation.
The documentation of the new
On Thu, Mar 7, 2019 at 6:26 PM Thomas Munro wrote:
> On Fri, Mar 8, 2019 at 12:35 PM Jerry Jelinek
> wrote:
> > On Thu, Mar 7, 2019 at 3:09 PM Thomas Munro
> wrote:
> >> My understanding is that it's not really the COW-ness that makes it
> >> not necessary, it's the fact that fdatasync()
On Fri, Mar 8, 2019 at 12:35 PM Jerry Jelinek wrote:
> On Thu, Mar 7, 2019 at 3:09 PM Thomas Munro wrote:
>> My understanding is that it's not really the COW-ness that makes it
>> not necessary, it's the fact that fdatasync() doesn't do anything
>> different from fsync() on ZFS and there is no
Thomas,
Responses in-line.
On Thu, Mar 7, 2019 at 3:09 PM Thomas Munro wrote:
> On Fri, Mar 8, 2019 at 10:13 AM Jerry Jelinek
> wrote:
> > I have attached a new version of the patch that implements the changes
> we've discussed over the past couple of days. Let me know if there are any
>
On Fri, Mar 8, 2019 at 10:13 AM Jerry Jelinek wrote:
> I have attached a new version of the patch that implements the changes we've
> discussed over the past couple of days. Let me know if there are any comments
> or suggestions.
+fail = lseek(fd, wal_segment_size - 1, SEEK_SET) <
On Wed, Mar 6, 2019 at 4:14 PM Jerry Jelinek
wrote:
>
> It sounds like everyone is in agreement that I should get rid of the
> single COW GUC tunable and provide two different tunables instead. I will
> update the patch to go back to the original name (wal_recycle) for the
> original WAL
On Wed, Mar 6, 2019 at 11:02 AM Alvaro Herrera
wrote:
> On 2019-Mar-06, Robert Haas wrote:
>
> > On Wed, Mar 6, 2019 at 12:13 PM Alvaro Herrera
> wrote:
> > > I want your dictating software.
> >
> > I'm afraid this is just me and a keyboard, but sadly for me you're not
> > the first person to
On Wed, Mar 6, 2019 at 1:02 PM Alvaro Herrera wrote:
> Well, I don't have a problem reading long texts; my problem is that I'm
> unable to argue as quickly.
That's my secret weapon... except that it's not much of a secret.
> I do buy your argument, though (if reluctantly); in particular I was
>
On 2019-Mar-06, Robert Haas wrote:
> On Wed, Mar 6, 2019 at 12:13 PM Alvaro Herrera
> wrote:
> > I want your dictating software.
>
> I'm afraid this is just me and a keyboard, but sadly for me you're not
> the first person to accuse me of producing giant walls of text.
Well, I don't have a
On Wed, Mar 6, 2019 at 12:13 PM Alvaro Herrera wrote:
> I want your dictating software.
I'm afraid this is just me and a keyboard, but sadly for me you're not
the first person to accuse me of producing giant walls of text.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise
I want your dictating software.
--
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Wed, Mar 6, 2019 at 11:41 AM Alvaro Herrera wrote:
> I can understand this argument. Is there really a reason to change
> those two behaviors separately?
See my previous reply to Andrew, but also, I think you're putting the
burden of proof in the wrong place. You could equally well ask "Is
On Wed, Mar 6, 2019 at 11:37 AM Andrew Dunstan
wrote:
> Well, let's put the question another way. Is there any reason to allow
> skipping zero filling if we are recycling? That seems possibly
> dangerous. I can imagine turning off recycling but leaving on
> zero-filling, although I don't have a
On 2019-Mar-06, Robert Haas wrote:
> On Wed, Feb 27, 2019 at 6:12 PM Alvaro Herrera
> wrote:
> > I think the idea of it being a generic tunable for assorted behavior
> > changes, rather than specific to WAL recycling, is a good one. I'm
> > unsure about your proposed name -- maybe
On 3/6/19 11:30 AM, Robert Haas wrote:
> On Wed, Mar 6, 2019 at 10:55 AM Andrew Dunstan
> wrote:
>>> I *really* dislike this. For one thing, it means that users don't
>>> have control over the behaviors individually. For another, the
>>> documentation is now quite imprecise about what the
On Wed, Mar 6, 2019 at 10:55 AM Andrew Dunstan
wrote:
> > I *really* dislike this. For one thing, it means that users don't
> > have control over the behaviors individually. For another, the
> > documentation is now quite imprecise about what the option actually
> > does, while expecting users
On 3/6/19 10:38 AM, Robert Haas wrote:
> On Wed, Feb 27, 2019 at 6:12 PM Alvaro Herrera
> wrote:
>> I think the idea of it being a generic tunable for assorted behavior
>> changes, rather than specific to WAL recycling, is a good one. I'm
>> unsure about your proposed name -- maybe
On Wed, Feb 27, 2019 at 6:12 PM Alvaro Herrera wrote:
> I think the idea of it being a generic tunable for assorted behavior
> changes, rather than specific to WAL recycling, is a good one. I'm
> unsure about your proposed name -- maybe "wal_cow_filesystem" is better?
I *really* dislike this.
Jerry,
On 2019-Mar-05, Jerry Jelinek wrote:
> Thanks again for your review. I went through your proposed patch diffs and
> applied most of them to my original changes. I did a few things slightly
> differently since I wanted to keep to 80 columns for the source code,
> but I can revisit that
Alvaro,
Thanks again for your review. I went through your proposed patch diffs and
applied most of them to my original changes. I did a few things slightly
differently since I wanted to keep to 80 columns for the source code,
but I can revisit that if it is not an issue. I also cleaned up the
Alvaro,
Thanks for taking a look at the new patch. I'll update the patch to change
the name of the tunable to match your suggestion and I'll also go through
the cleanup you suggested. Finally, I'll try to rewrite the doc to
eliminate the confusion around the wording about allocating new blocks on
On 2019-Feb-05, Jerry Jelinek wrote:
> First, since last fall, we have found another performance problem related
> to initializing WAL files. I've described this issue in more detail below,
> but in order to handle this new problem, I decided to generalize the patch
> so the tunable refers to
On Mon, Oct 1, 2018 at 7:16 PM Michael Paquier wrote:
> On Thu, Sep 13, 2018 at 02:56:42PM -0600, Jerry Jelinek wrote:
> > I'll take a look at that. I had been trying to keep the patch as minimal
> as
> > possible, but I'm happy to work through this.
>
> (Please be careful with top-posting)
>
>
On Thu, Sep 13, 2018 at 02:56:42PM -0600, Jerry Jelinek wrote:
> I'll take a look at that. I had been trying to keep the patch as minimal as
> possible, but I'm happy to work through this.
(Please be careful with top-posting)
Jerry, the last status was from three weeks ago with the patch waiting
Hi Peter,
I'll take a look at that. I had been trying to keep the patch as minimal as
possible, but I'm happy to work through this.
Thanks,
Jerry
On Tue, Sep 11, 2018 at 9:39 AM, Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:
> On 10/09/2018 16:10, Jerry Jelinek wrote:
> > Thank
On 10/09/2018 16:10, Jerry Jelinek wrote:
> Thank you again for running all of these tests on your various hardware
> configurations. I was not aware of the convention that the commented
> example in the config file is expected to match the default value, so I
> was actually trying to show what to
Tomas,
Thank you again for running all of these tests on your various hardware
configurations. I was not aware of the convention that the commented
example in the config file is expected to match the default value, so I was
actually trying to show what to use if you didn't want the default, but I
Hi,
So here is the last set of benchmark results, this time from ext4 on a
small SATA-based RAID (3 x 7.2k). As before, I'm only attaching PDFs
with the simple charts, full results are available in the git repository
[1]. Overall the numbers are rather boring, with almost no difference
between
On 08/27/2018 03:59 AM, Thomas Munro wrote:
> On Mon, Aug 27, 2018 at 10:14 AM Tomas Vondra wrote:
>> zfs (Linux)
>> ---
>> On scale 200, there's pretty much no difference.
>
> Speculation: It could be that the dnode and/or indirect blocks that
>
Tomas,
This is really interesting data, thanks a lot for collecting all of it and
formatting the helpful graphs.
Jerry
On Sun, Aug 26, 2018 at 4:14 PM, Tomas Vondra
wrote:
>
>
> On 08/25/2018 12:11 AM, Jerry Jelinek wrote:
> > Alvaro,
> >
> > I have previously posted ZFS numbers for SmartOS
On Mon, Aug 27, 2018 at 10:14 AM Tomas Vondra
wrote:
> zfs (Linux)
> ---
> On scale 200, there's pretty much no difference.
Speculation: It could be that the dnode and/or indirect blocks that point
to data blocks are falling out of memory in my test setup[1] but not in
yours. I don't
On 08/25/2018 12:11 AM, Jerry Jelinek wrote:
> Alvaro,
>
> I have previously posted ZFS numbers for SmartOS and FreeBSD to this
> thread, although not with the exact same benchmark runs that Tomas did.
>
> I think the main purpose of running the benchmarks is to demonstrate
> that there is no
Alvaro,
I have previously posted ZFS numbers for SmartOS and FreeBSD to this
thread, although not with the exact same benchmark runs that Tomas did.
I think the main purpose of running the benchmarks is to demonstrate that
there is no significant performance regression with wal recycling
On 2018-Aug-22, Andres Freund wrote:
> On 2018-08-22 11:06:17 -0300, Alvaro Herrera wrote:
> > I suppose that the use case that was initially proposed (ZFS) has not
> > yet been tested so we shouldn't reject this patch immediately, but
> > perhaps what Joyent people should be doing now is
On 2018-08-22 11:06:17 -0300, Alvaro Herrera wrote:
> On 2018-Aug-21, Jerry Jelinek wrote:
>
> > Tomas,
> >
> > Thanks for doing all of this testing. Your testing and results are much
> > more detailed than anything I did. Please let me know if there is any
> > follow-up that I should attempt.
>
On 2018-Aug-21, Jerry Jelinek wrote:
> Tomas,
>
> Thanks for doing all of this testing. Your testing and results are much
> more detailed than anything I did. Please let me know if there is any
> follow-up that I should attempt.
Either I completely misread these charts, or there is practically
On Mon, Jul 30, 2018 at 4:43 AM, Peter Eisentraut
wrote:
> On 19/07/2018 05:59, Kyotaro HORIGUCHI wrote:
>> My result is that we cannot disable recycling perfectly just by
>> setting min/max_wal_size.
>
> Maybe the behavior of min_wal_size should be rethought? Elsewhere in
> this thread, there
At Mon, 30 Jul 2018 10:43:20 +0200, Peter Eisentraut
wrote in
> On 19/07/2018 05:59, Kyotaro HORIGUCHI wrote:
> > My result is that we cannot disable recycling perfectly just by
> > setting min/max_wal_size.
>
> Maybe the behavior of min_wal_size should be rethought? Elsewhere in
> this
On 19/07/2018 05:59, Kyotaro HORIGUCHI wrote:
> My result is that we cannot disable recycling perfectly just by
> setting min/max_wal_size.
Maybe the behavior of min_wal_size should be rethought? Elsewhere in
this thread, there was also a complaint that max_wal_size isn't actually
a max. It
I've set up FreeBSD 11.1 in a VM and I set up a ZFS filesystem to use for the
Postgres DB. I ran the following simple benchmark.
pgbench -M prepared -c 4 -j 4 -T 60 postgres
Since it is in a VM and I can't control what else might be happening on the
box, I ran this several times at different times
On 07/21/2018 12:04 AM, Jerry Jelinek wrote:
> Thomas,
>
> Thanks for your offer to run some tests on different OSes and
> filesystems that you have. Anything you can provide here would be much
> appreciated. I don't have anything other than our native SmartOS/ZFS
> based configurations, but I
Thomas,
Thanks for your offer to run some tests on different OSes and filesystems
that you have. Anything you can provide here would be much appreciated. I
don't have anything other than our native SmartOS/ZFS based configurations,
but I might be able to setup some VMs and get results that way.
Peter,
Thanks for your feedback. I'm happy to change the name of the tunable or to
update the man page in any way. I have already posted an updated patch
with changes to the man page which I think may address your concerns there,
but please let me know if that still needs more work. It looks
Hi Robert,
I'm new to the Postgresql community, so I'm not familiar with how patches
are accepted here. Thanks for your detailed explanation. I do want to keep
pushing on this. I'll respond separately to Peter and to Tomas regarding
their emails.
Thanks again,
Jerry
On Wed, Jul 18, 2018 at
At Thu, 19 Jul 2018 12:59:26 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI
wrote in
<20180719.125926.257896670.horiguchi.kyot...@lab.ntt.co.jp>
> At Thu, 19 Jul 2018 12:37:26 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI
> wrote in
>
At Thu, 19 Jul 2018 12:37:26 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI
wrote in
<20180719.123726.00899102.horiguchi.kyot...@lab.ntt.co.jp>
> At Tue, 17 Jul 2018 21:01:03 -0400, Robert Haas wrote
> in
> > On Tue, Jul 17, 2018 at 3:12 PM, Peter Eisentraut
> > wrote:
> > > The actual
At Thu, 19 Jul 2018 12:37:26 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI
wrote in
<20180719.123726.00899102.horiguchi.kyot...@lab.ntt.co.jp>
> While considering this, I found a bug in 4b0d28de06, which
> removed prior checkpoint from control file. It actually trims the
> segments before the
At Tue, 17 Jul 2018 21:01:03 -0400, Robert Haas wrote
in
> On Tue, Jul 17, 2018 at 3:12 PM, Peter Eisentraut
> wrote:
> > The actual implementation could use another round of consideration. I
> > wonder how this should interact with min_wal_size. Wouldn't
> > min_wal_size = 0 already do what
On Tue, Jul 17, 2018 at 4:47 PM, Tomas Vondra
wrote:
> Makes sense, I guess. But I think many claims made in this thread are
> mostly just assumptions at this point, based on our beliefs how CoW or
> non-CoW filesystems work. The results from ZFS (showing positive impact)
> are an exception, but
On Wed, Jul 18, 2018 at 3:22 PM, Jerry Jelinek wrote:
> I've gotten a wide variety of feedback on the proposed patch. The comments
> range from rough approval through various discussion about alternative
> solutions. At this point I am unsure if this patch is rejected or if it
> would be accepted
I've gotten a wide variety of feedback on the proposed patch. The comments
range from rough approval through various discussion about alternative
solutions. At this point I am unsure if this patch is rejected or if it
would be accepted once I had the updated man page changes that were
discussed
On Tue, Jul 17, 2018 at 3:12 PM, Peter Eisentraut
wrote:
> The actual implementation could use another round of consideration. I
> wonder how this should interact with min_wal_size. Wouldn't
> min_wal_size = 0 already do what we need (if you could set it to 0,
> which is currently not
On 07/17/2018 09:12 PM, Peter Eisentraut wrote:
> On 17.07.18 00:04, Jerry Jelinek wrote:
>> There have been quite a few comments since last week, so at this point I
>> am uncertain how to proceed with this change. I don't think I saw
>> anything concrete in the recent emails that I can act upon.
On 17.07.18 00:04, Jerry Jelinek wrote:
> There have been quite a few comments since last week, so at this point I
> am uncertain how to proceed with this change. I don't think I saw
> anything concrete in the recent emails that I can act upon.
The outcome of this could be multiple orthogonal
On Mon, Jul 16, 2018 at 10:38:14AM -0400, Robert Haas wrote:
> It's been a few years since I tested this, but my recollection is that
> if you fill up pg_xlog, the system will PANIC and die on a vanilla
> Linux install. Sure, you can set max_wal_size, but that's a soft
> limit, not a hard limit,
There have been quite a few comments since last week, so at this point I am
uncertain how to proceed with this change. I don't think I saw anything
concrete in the recent emails that I can act upon.
I would like to respond to the comment about trying to "self-tune" the
behavior based on
On Mon, Jul 16, 2018 at 10:12 AM, Tom Lane wrote:
> But anyway, this means we have two nearly independent issues to
> investigate: whether recycling/renaming old files is cheaper than
> constantly creating and deleting them, and whether to use physical
> file zeroing versus some "just set the EOF
On 2018-07-15 20:32:39 -0400, Robert Haas wrote:
> On Thu, Jul 5, 2018 at 4:39 PM, Andres Freund wrote:
> > This is formulated *WAY* too positive. It'll have dramatic *NEGATIVE*
> > performance impact of non COW filesystems, and very likely even negative
> > impacts in a number of COWed scenarios
On 07/16/2018 04:54 AM, Stephen Frost wrote:
> Greetings,
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> I think that the right basic idea is to have a GUC that chooses between
>> the two implementations, but whether it can be set automatically is not
>> clear to me. Can initdb perhaps investigate what kind
Greetings,
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> I think that the right basic idea is to have a GUC that chooses between
> the two implementations, but whether it can be set automatically is not
> clear to me. Can initdb perhaps investigate what kind of filesystem the
> WAL directory is
Robert Haas writes:
> On Thu, Jul 5, 2018 at 4:39 PM, Andres Freund wrote:
>> This is formulated *WAY* too positive. It'll have dramatic *NEGATIVE*
>> performance impact of non COW filesystems, and very likely even negative
>> impacts in a number of COWed scenarios (when there's enough memory to
On Thu, Jul 5, 2018 at 4:39 PM, Andres Freund wrote:
> This is formulated *WAY* too positive. It'll have dramatic *NEGATIVE*
> performance impact of non COW filesystems, and very likely even negative
> impacts in a number of COWed scenarios (when there's enough memory to
> keep all WAL files in
Thanks to everyone who has taken the time to look at this patch and provide
all of the feedback.
I'm going to wait another day to see if there are any more comments. If
not, then first thing next week, I will send out a revised patch with
improvements to the man page change as requested. If
On Thu, Jul 12, 2018 at 10:52 PM, Tomas Vondra
wrote:
> I don't follow Alvaro's reasoning, TBH. There's a couple of things that
> confuse me ...
>
> I don't quite see how reusing WAL segments actually protects against full
> filesystem? On "traditional" filesystems I would not expect any
I was asked to perform two different tests:
1) A benchmarksql run with WAL recycling on and then off, for comparison
2) A test when the filesystem fills up
For #1, I did two 15 minute benchmarksql runs and here are the results.
wal_recycle=on
--
Term-00, Running Average tpmTOTAL:
On 07/12/2018 02:25 AM, David Pacheco wrote:
> On Tue, Jul 10, 2018 at 1:34 PM, Alvaro Herrera wrote:
>> On 2018-Jul-10, Jerry Jelinek wrote:
>>> 2) Disabling WAL recycling reduces reliability, even on COW filesystems.
>> I think the problem here is
On Tue, Jul 10, 2018 at 10:32 PM, Thomas Munro <
thomas.mu...@enterprisedb.com> wrote:
> On Wed, Jul 11, 2018 at 8:25 AM, Joshua D. Drake
> wrote:
> > On 07/10/2018 01:15 PM, Jerry Jelinek wrote:
> >>
> >> Thanks to everyone who took the time to look at the patch and send me
> >> feedback. I'm
Hi,
On 2018-07-10 14:15:30 -0600, Jerry Jelinek wrote:
> Thanks to everyone who took the time to look at the patch and send me
> feedback. I'm happy to work on improving the documentation of this new
> tunable to clarify when it should be used and the implications. I'm trying
> to understand
On Tue, Jul 10, 2018 at 1:34 PM, Alvaro Herrera
wrote:
> On 2018-Jul-10, Jerry Jelinek wrote:
>
> > 2) Disabling WAL recycling reduces reliability, even on COW filesystems.
>
> I think the problem here is that WAL recycling in normal filesystems
> helps protect the case where filesystem gets
Alvaro,
I'll perform several test runs with various combinations and post the
results.
Thanks,
Jerry
On Tue, Jul 10, 2018 at 2:34 PM, Alvaro Herrera
wrote:
> On 2018-Jul-10, Jerry Jelinek wrote:
>
> > 2) Disabling WAL recycling reduces reliability, even on COW filesystems.
>
> I think the
On Wed, Jul 11, 2018 at 8:25 AM, Joshua D. Drake wrote:
> On 07/10/2018 01:15 PM, Jerry Jelinek wrote:
>>
>> Thanks to everyone who took the time to look at the patch and send me
>> feedback. I'm happy to work on improving the documentation of this new
>> tunable to clarify when it should be
On 2018-Jul-10, Jerry Jelinek wrote:
> 2) Disabling WAL recycling reduces reliability, even on COW filesystems.
I think the problem here is that WAL recycling in normal filesystems
helps protect the case where filesystem gets full. If you remove it,
that protection goes out the window. You can
On 07/10/2018 01:15 PM, Jerry Jelinek wrote:
> Thanks to everyone who took the time to look at the patch and send me
> feedback. I'm happy to work on improving the documentation of this
> new tunable to clarify when it should be used and the implications.
> I'm trying to understand more specifically
Thanks to everyone who took the time to look at the patch and send me
feedback. I'm happy to work on improving the documentation of this new
tunable to clarify when it should be used and the implications. I'm trying
to understand more specifically what else needs to be done next. To
summarize, I
Thomas,
We're using a zfs recordsize of 8k to match the PG blocksize of 8k, so what
you're describing is not the issue here.
Thanks,
Jerry
On Thu, Jul 5, 2018 at 3:44 PM, Thomas Munro
wrote:
> On Fri, Jul 6, 2018 at 3:37 AM, Jerry Jelinek
> wrote:
> >> If the problem is specifically the
On Fri, Jul 6, 2018 at 3:37 AM, Jerry Jelinek
wrote:
>> If the problem is specifically the file system caching behavior, then we
>> could also consider using the dreaded posix_fadvise().
>
> I'm not sure that solves the problem for non-cached files, which is where
> we've observed the performance
Hi,
On 2018-06-26 07:35:57 -0600, Jerry Jelinek wrote:
> +
> + wal_recycle (boolean)
> +
> + wal_recycle configuration
> parameter
> +
> +
> +
> +
> +When this parameter is on, past log file segments
> +in the pg_wal directory are
On 05.07.18 17:37, Jerry Jelinek wrote:
> Your patch describes this feature as a performance feature. We would
> need to see more measurements about what this would do on other
> platforms and file systems than your particular one. Also, we need to
> be careful with user options
Peter,
Thanks for taking a look a this. I have a few responses in line. I am not a
PG expert, so if there is something here that I've misunderstood, please
let me know.
On Sun, Jul 1, 2018 at 6:54 AM, Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:
> On 26.06.18 15:35, Jerry Jelinek
On 26.06.18 15:35, Jerry Jelinek wrote:
> Attached is a patch to provide an option to disable WAL recycling. We
> have found that this can help performance by eliminating
> read-modify-write behavior on old WAL files that are no longer resident
> in the filesystem cache. There is a lot more detail
Hello All,
Attached is a patch to provide an option to disable WAL recycling. We have
found that this can help performance by eliminating read-modify-write
behavior on old WAL files that are no longer resident in the filesystem
cache. There is a lot more detail on the background of the motivation