On Thu, Oct 20, 2022 at 12:51 AM Andres Freund wrote:
>
> Hi,
>
> On 2022-10-17 17:14:21 -0400, Robert Haas wrote:
> > I have to admit that I worried about the same thing that Matthias
> > raises, more or less. But I don't know whether I'm right to be
> > worried. A variable-length representation [...]
Hi,
On 2022-10-17 17:14:21 -0400, Robert Haas wrote:
> I have to admit that I worried about the same thing that Matthias
> raises, more or less. But I don't know whether I'm right to be
> worried. A variable-length representation of any kind is essentially a
> gamble that values requiring fewer bytes [...]
On Wed, Oct 12, 2022 at 5:13 PM Andres Freund wrote:
> > I think a significant part of this improvement comes from the premise
> > of starting with a fresh database. tablespace OID will indeed most
> > likely be low, but database OID may very well be linearly distributed
> > if concurrent workloads [...]
On Wed, 12 Oct 2022 at 23:13, Andres Freund wrote:
>
> Hi,
>
> On 2022-10-12 22:05:30 +0200, Matthias van de Meent wrote:
> > On Wed, 5 Oct 2022 at 01:50, Andres Freund wrote:
> > > I lightly dusted off my old varint implementation from [1] and converted the
> > > RelFileLocator and BlockNumber from fixed-width integers to varint ones. [...]
Hi,
On 2022-10-12 22:05:30 +0200, Matthias van de Meent wrote:
> On Wed, 5 Oct 2022 at 01:50, Andres Freund wrote:
> > On 2022-10-03 10:01:25 -0700, Andres Freund wrote:
> > > On 2022-10-03 08:12:39 -0400, Robert Haas wrote:
> > > > On Fri, Sep 30, 2022 at 8:20 PM Andres Freund
> > > > wrote:
>
On Mon, Oct 10, 2022 at 5:40 PM Robert Haas wrote:
>
> On Mon, Oct 10, 2022 at 5:16 AM Dilip Kumar wrote:
> > I have also executed my original test after applying these patches on
> > top of the 56 bit relfilenode patch. So earlier we saw the WAL size
> > increased by 11% (66199.09375 kB to 73906.984375 kB) [...]
Hi,
On 2022-10-10 08:10:22 -0400, Robert Haas wrote:
> On Mon, Oct 10, 2022 at 5:16 AM Dilip Kumar wrote:
> > I have also executed my original test after applying these patches on
> > top of the 56 bit relfilenode patch. So earlier we saw the WAL size
> > increased by 11% (66199.09375 kB to 73906.984375 kB) [...]
On Mon, Oct 10, 2022 at 5:16 AM Dilip Kumar wrote:
> I have also executed my original test after applying these patches on
> top of the 56 bit relfilenode patch. So earlier we saw the WAL size
> increased by 11% (66199.09375 kB to 73906.984375 kB) and after this
> patch now the WAL generated is 5[...]
On Wed, Oct 5, 2022 at 5:19 AM Andres Freund wrote:
>
> I lightly dusted off my old varint implementation from [1] and converted the
> RelFileLocator and BlockNumber from fixed-width integers to varint ones. This
> isn't meant as a serious patch, but an experiment to see if this is a path
> worth pursuing. [...]
Hi,
On 2022-10-03 10:01:25 -0700, Andres Freund wrote:
> On 2022-10-03 08:12:39 -0400, Robert Haas wrote:
> > On Fri, Sep 30, 2022 at 8:20 PM Andres Freund wrote:
> > I thought about trying to buy back some space elsewhere, and I think
> > that would be a reasonable approach to getting this committed [...]
On Tue, Oct 4, 2022 at 2:30 PM Andres Freund wrote:
> Consider the following sequence:
>
> 1) we write WAL like this:
>
> [record A][tear boundary][record B, prev A_lsn][tear boundary][record C, prev
> B_lsn]
>
> 2) crash, the sectors with A and C made it to disk, the one with B didn't
>
> 3) We [...]
Hi,
On 2022-10-04 13:36:33 -0400, Robert Haas wrote:
> On Tue, Oct 4, 2022 at 11:34 AM Andres Freund wrote:
> > > Example: Page { [ record A ] | tear boundary | [ record B ] } gets
> > > recycled and receives a new record C at the place of A with the same
> > > length.
> > >
> > > With your proposal, record B would still be a valid record when it
> > > follows C; [...]
On Tue, Oct 4, 2022 at 11:34 AM Andres Freund wrote:
> > Example: Page { [ record A ] | tear boundary | [ record B ] } gets
> > recycled and receives a new record C at the place of A with the same
> > length.
> >
> > With your proposal, record B would still be a valid record when it
> > follows C;
Hi,
On 2022-10-04 15:05:47 +0200, Matthias van de Meent wrote:
> On Mon, 3 Oct 2022 at 23:26, Andres Freund wrote:
> > On 2022-10-03 19:40:30 +0200, Matthias van de Meent wrote:
> > > On Mon, 3 Oct 2022, 19:01 Andres Freund, wrote:
> > > > Random idea: xl_prev is large. Store a full xl_prev in the page header,
> > > > but only store a 2 byte offset from the page header xl_prev within each
> > > > record.
On Mon, 3 Oct 2022 at 23:26, Andres Freund wrote:
>
> Hi,
>
> On 2022-10-03 19:40:30 +0200, Matthias van de Meent wrote:
> > On Mon, 3 Oct 2022, 19:01 Andres Freund, wrote:
> > > Random idea: xl_prev is large. Store a full xl_prev in the page header,
> > > but only store a 2 byte offset from the page header xl_prev within each
> > > record.
Hi,
On 2022-10-03 19:40:30 +0200, Matthias van de Meent wrote:
> On Mon, 3 Oct 2022, 19:01 Andres Freund, wrote:
> > Random idea: xl_prev is large. Store a full xl_prev in the page header, but
> > only store a 2 byte offset from the page header xl_prev within each record.
>
> With that small xl_prev [...]
On Mon, 3 Oct 2022, 19:01 Andres Freund, wrote:
>
> Hi,
>
> On 2022-10-03 08:12:39 -0400, Robert Haas wrote:
> > On Fri, Sep 30, 2022 at 8:20 PM Andres Freund wrote:
> > > I think it'd be interesting to look at per-record-type stats between two
> > > equivalent workloads, to see where practical workloads suffer the most
> > > (possibly with fpw=off, to make things more repeatable).
Hi,
On 2022-10-03 08:12:39 -0400, Robert Haas wrote:
> On Fri, Sep 30, 2022 at 8:20 PM Andres Freund wrote:
> > I think it'd be interesting to look at per-record-type stats between two
> > equivalent workloads, to see where practical workloads suffer the most
> > (possibly with fpw=off, to make things more repeatable).
On Fri, Sep 30, 2022 at 8:20 PM Andres Freund wrote:
> I think it'd be interesting to look at per-record-type stats between two
> equivalent workloads, to see where practical workloads suffer the most
> (possibly with fpw=off, to make things more repeatable).
I would expect, and Dilip's results seem [...]
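(For anyone wanting to gather such per-record-type numbers themselves: pg_waldump can aggregate WAL statistics per resource manager or per record type. The segment path below is a placeholder; point it at a real segment from each workload run.)

```shell
# Summary per resource manager:
pg_waldump --stats path/to/pg_wal/000000010000000000000001

# Finer-grained breakdown per record type:
pg_waldump --stats=record path/to/pg_wal/000000010000000000000001
```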
On Sat, Oct 1, 2022 at 5:50 AM Andres Freund wrote:
>
> Hi,
>
> On 2022-09-30 15:36:11 +0530, Dilip Kumar wrote:
> > I have done some testing around this area to see the impact on WAL
> > size especially when WAL sizes are smaller, with a very simple test
> > with insert/update/delete I can see around an 11% increase in WAL size [...]
On Fri, Sep 30, 2022 at 5:20 PM Andres Freund wrote:
> I think it'd be an OK tradeoff to optimize WAL usage for a few of the worst to
> pay off for 56bit relfilenodes. The class of problems foreclosed is large
> enough to "waste" "improvement potential" on this.
I agree overall.
A closely related [...]
Hi,
On 2022-09-30 15:36:11 +0530, Dilip Kumar wrote:
> I have done some testing around this area to see the impact on WAL
> size especially when WAL sizes are smaller, with a very simple test
> with insert/update/delete I can see around an 11% increase in WAL size
> [1] then I did some more tests with [...]
On Thu, Sep 29, 2022 at 2:36 AM Robert Haas wrote:
> 2. WAL Size. Block references in the WAL are by RelFileLocator, so if
> you make RelFileLocators bigger, WAL gets bigger. We'd have to test
> the exact impact of this, but it seems a bit scary: [...]
I have done some testing around this area to see the impact on WAL
size especially when WAL sizes are smaller [...]
On Thu, Sep 29, 2022 at 12:24 PM Matthias van de Meent
wrote:
> Currently, our minimal WAL record is exactly 24 bytes: length (4B),
> TransactionId (4B), previous record pointer (8B), flags (1B), redo
> manager (1B), 2 bytes of padding and lastly the 4-byte CRC. Of these
> fields, TransactionId co[...]
On Thu, 29 Sep 2022, 00:06 Robert Haas, wrote:
>
> 2. WAL Size. Block references in the WAL are by RelFileLocator, so if
> you make RelFileLocators bigger, WAL gets bigger. We'd have to test
> the exact impact of this, but it seems a bit scary: if you have a WAL
> stream with few FPIs doing DML on [...]
On Wed, Sep 28, 2022 at 4:08 PM Tom Lane wrote:
> As far as that goes, I'm entirely prepared to accept a conclusion
> that the benefits of widening relfilenodes justify whatever space
> or speed penalties may exist there. However, we cannot honestly
> make that conclusion if we haven't measured s[...]
Robert Haas writes:
> 3. Sinval Message Size. Sinval messages are 16 bytes right now.
> They'll have to grow to 20 bytes if we do this. There's even less room
> for bit-squeezing here than there is for the WAL stuff. I'm skeptical
> that this really matters, but Tom seems concerned.
As far as that goes, [...]
OK, so the recent commit and revert of the 56-bit relfilenode patch
revealed a few issues that IMHO need design-level input. Let me try to
surface those here, starting a new thread to separate this discussion
from the clutter:
1. Commit Record Alignment. ParseCommitRecord() and ParseAbortRecord() [...]