On Thu, Jan 05, 2023 at 06:36:15PM -0500, Robert Haas wrote:
> Andres asked me off-list if I could take another look at this.
I'm curious whether there are plans to pick this up again. IMHO it seems
like a generally good idea. AFAICT the newest version of the patch is in a
separate thread [0],
Andres asked me off-list if I could take another look at this.
So here's a bit of review:
- The header comment at the top of the file gives some examples of how
the encoding works, and then basically says, oh wait, there's also a
sign bit at the end, so all those examples are actually wrong.
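The patch's exact signed layout isn't shown in this excerpt; for comparison, the common "zigzag" trick (used by protocol buffers) folds the sign into the low bit instead, so small negative values stay short. A rough Python sketch of that alternative, not the patch's encoding:

```python
def zigzag_encode(n):
    # Map a signed 64-bit value to unsigned so that values near zero
    # (of either sign) become small: 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, ...
    return (n << 1) ^ (n >> 63)

def zigzag_decode(z):
    # Inverse: recover the signed value from the folded representation.
    return (z >> 1) ^ -(z & 1)
```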
On 8/4/21 7:21 PM, Tomas Vondra wrote:
> On 8/5/21 1:05 AM, Andres Freund wrote:
>
>>
>>> The first one seems quite efficient in how it encodes the length into
>>> very few bits (which matters especially for small values). It's designed
>>> for integers with 1B, 2B, 4B or 8B, but it can be
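If "the first one" here is the bitcoin-style CompactSize format, the scheme is a one-byte tag that selects a 1B, 2B, 4B or 8B little-endian payload; a sketch under that assumption:

```python
def encode_compactsize(n):
    # Bitcoin-style CompactSize: a 1-byte tag selects the payload width,
    # so the length is known after reading a single byte.
    if n < 0xFD:
        return bytes([n])                         # value fits in the tag byte
    if n <= 0xFFFF:
        return b"\xfd" + n.to_bytes(2, "little")  # 1 + 2 bytes
    if n <= 0xFFFFFFFF:
        return b"\xfe" + n.to_bytes(4, "little")  # 1 + 4 bytes
    return b"\xff" + n.to_bytes(8, "little")      # 1 + 8 bytes
```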
On 8/5/21 1:05 AM, Andres Freund wrote:
> Hi,
> On 2021-08-04 23:44:10 +0200, Tomas Vondra wrote:
> > How is that better than the two varint flavors that are already out there,
> > i.e. the bitcoin [1] and protocol buffers [2]?
> The protobuf one is *terrible* for CPU efficiency. You need to go through
Hi,
On 2021-08-04 23:44:10 +0200, Tomas Vondra wrote:
> How is that better than the two varint flavors that are already out there,
> i.e. the bitcoin [1] and protocol buffers [2]?
The protobuf one is *terrible* for CPU efficiency. You need to go through each byte, do masking and shifting for
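The per-byte loop being criticized looks roughly like this (a Python sketch of the protocol-buffers base-128 varint, not PostgreSQL code); note the data-dependent branch plus mask and shift on every byte:

```python
def decode_protobuf_varint(buf):
    # Protocol-buffers base-128 varint: 7 payload bits per byte, and the
    # high bit of each byte says whether another byte follows.  Decoding
    # cannot skip ahead: every byte costs a mask, a shift, and a branch.
    result = 0
    shift = 0
    for i, b in enumerate(buf):
        result |= (b & 0x7F) << shift
        if not (b & 0x80):
            return result, i + 1  # (decoded value, bytes consumed)
        shift += 7
    raise ValueError("truncated varint")
```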
On 8/4/21 9:01 PM, Andres Freund wrote:
Hi,
On 2021-08-03 14:26:16 -0400, Robert Haas wrote:
[ resurrecting this 2-year-old thread ]
On Fri, Dec 13, 2019 at 12:45 AM Andres Freund wrote:
If baking a new variant integer format now, I think limiting it to 64 bits
is probably a mistake
On Wed, Aug 4, 2021 at 3:46 PM Andres Freund wrote:
> > But what if I have a machine with more than 16 exabytes of RAM and I
> > want to use all of its memory to store one really big integer?
>
> Then the embedded 8 byte length value would just have to do the same thing
> recursively to store
On 2021-08-04 15:37:36 -0400, Robert Haas wrote:
> On Wed, Aug 4, 2021 at 3:01 PM Andres Freund wrote:
> > Extending that to arbitrary lengths obviously at some point makes the
> > encoding in unary wasteful, and the benefit of few branches vanishes. So
> > what I was thinking is that for
On Wed, Aug 4, 2021 at 3:01 PM Andres Freund wrote:
> Extending that to arbitrary lengths obviously at some point makes the encoding
> in unary wasteful, and the benefit of few branches vanishes. So what I was
> thinking is that for variable length pieces of data that are not limited to 8
>
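The alternative being sketched puts the length in unary in the leading bits of the first byte (UTF-8 style), so a decoder learns the total width from one byte, typically via a single count-leading-zeros instruction. A rough illustration of the idea, not the actual patch:

```python
def encoded_length_from_first_byte(first):
    # Unary length prefix: each leading one-bit in the first byte means
    # one additional payload byte follows (as in UTF-8).  A real decoder
    # would use a count-leading-zeros instruction instead of this loop.
    extra = 0
    mask = 0x80
    while first & mask:
        extra += 1
        mask >>= 1
    return 1 + extra
```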
Hi,
On 2021-08-03 14:26:16 -0400, Robert Haas wrote:
> [ resurrecting this 2-year-old thread ]
>
> On Fri, Dec 13, 2019 at 12:45 AM Andres Freund wrote:
> > > If baking a new variant integer format now, I think limiting it to 64 bits
> > > is probably a mistake given how long-lived PostgreSQL
Hi,
On 2021-08-04 09:31:25 -0400, Robert Haas wrote:
> This is pretty integer-centric, though. If your pass-by-value type is
> storing timestamps, for example, they're not likely to be especially
> close to zero. Since a 64-bit address is pretty big, perhaps they're
> still close enough to zero
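Back-of-the-envelope arithmetic (mine, not from the thread) supports the point: PostgreSQL timestamps count microseconds since 2000-01-01, so a value from 2021 has about 50 significant bits, and a 7-bits-per-byte varint would still need 8 bytes for it, the same as the fixed-width representation:

```python
# A 2021 timestamp in PostgreSQL's representation: microseconds since
# 2000-01-01 (~21 years' worth), which is nowhere near zero.
ts_2021 = 21 * 365 * 24 * 60 * 60 * 1_000_000
bits = ts_2021.bit_length()     # significant bits in the value
varint_bytes = (bits + 6) // 7  # protobuf-style: 7 payload bits per byte
```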
On Tue, Aug 3, 2021 at 3:32 PM Andres Freund wrote:
> I am now wondering if what we're talking about here would best be thought of
> not as a variable width integer type, but a variable-width encoding for all
> pass-by-value types.
>
> Leaving on-disk compatibility aside (:)), ISTM that we by
Robert Haas writes:
> However, I suspect that the whole approach should be completely
> revised for a user-visible data type. On the one hand, there's no
> telling how large a value some user will want to represent, so
> limiting ourselves to 64 bits does seem shortsighted. And on the other
>
Hi,
On 2021-08-03 14:26:16 -0400, Robert Haas wrote:
> [ resurrecting this 2-year-old thread ]
> On Fri, Dec 13, 2019 at 12:45 AM Andres Freund wrote:
> > I don't think it's ever going to be sensible to transport 64bit quanta
> > of data. Also, uh, it'd be larger than the data a postgres
[ resurrecting this 2-year-old thread ]
On Fri, Dec 13, 2019 at 12:45 AM Andres Freund wrote:
> > If baking a new variant integer format now, I think limiting it to 64 bits
> > is probably a mistake given how long-lived PostgreSQL is, and how hard it
> > can be to change things in the protocol,
Hi,
On 2019-12-13 13:31:55 +0800, Craig Ringer wrote:
> Am I stabbing completely in the dark when wondering if this might be a step
> towards a way to lift the size limit on VARLENA Datums like bytea ?
It could be - but I think it'd be a pretty small piece of it. But yes, I
have mused about
On Tue, 10 Dec 2019 at 09:51, Andres Freund wrote:
> Hi,
>
> I several times, most recently for the record format in the undo
> patchset, wished for a fast variable width integer implementation for
> postgres. Using very narrow integers, for space efficiency, solves the
> space usage problem,
Hi,
I several times, most recently for the record format in the undo
patchset, wished for a fast variable width integer implementation for
postgres. Using very narrow integers, for space efficiency, solves the
space usage problem, but leads to extensibility / generality problems.
Other cases