On Sun, Oct 1, 2023 at 05:30:39AM -0400, Ann Harrison wrote:
> Other databases do allow that sort of gradual migration. One example
> has an internal table of record descriptions indexed by the table identifier
> and a description number. Each record includes a header with various
> useful bits
On Sat, 30 Sept 2023, 23:37 Tom Lane wrote:
>
> I think what you're asking for is a scheme whereby some rows in a
> table have datatype X in a particular column while other rows in
> the very same physical table have datatype Y in the same column.
>
An alternative for NOT NULL columns would be
On 10/1/23 12:04, Ireneusz Pluta wrote:
On 30.09.2023 at 07:55, James Healy wrote:
...
We shouldn't have let them get so big, but that's a conversation
for another day.
Some are approaching overflow and we're slowly doing the work to
migrate to bigint. Mostly via the well understood "add a new id_bigint
column, populate on new tuples, backfill the old, switch the PK" method.
On Sat, Sep 30, 2023 at 11:37 PM Tom Lane wrote:
> James Healy writes:
> > However it doesn't really address the question of a gradual migration
> > process that can read 32bit ints but insert/update as 64bit bigints. I
> > remain curious about whether the postgres architecture just makes that
On 30.09.2023 at 07:55, James Healy wrote:
...
We shouldn't have let them get so big, but that's a conversation
for another day.
Some are approaching overflow and we're slowly doing the work to
migrate to bigint. Mostly via the well understood "add a new id_bigint
column, populate on new tuples, backfill the old, switch the PK" method.
On Sun, 1 Oct 2023 at 14:37, Tom Lane wrote:
> I think what you're asking for is a scheme whereby some rows in a
> table have datatype X in a particular column while other rows in
> the very same physical table have datatype Y in the same column.
> That is not happening, because there'd be no way
On 9/30/23 22:37, Tom Lane wrote:
[snip]
especially not a break that adds more per-row overhead.
So really the only way forward for this would be to provide more
automation for the existing conversion processes involving table
rewrites.
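For reference, the existing conversion process involving a table rewrite is the direct ALTER, which takes an ACCESS EXCLUSIVE lock and rewrites every row before releasing it (table and column names here are hypothetical):

```sql
-- Direct conversion: forces a full table rewrite and blocks all
-- reads and writes on the table for the duration.
ALTER TABLE orders
    ALTER COLUMN id TYPE bigint;
```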
When altering an unindexed INT to BIGINT, do all of the
James Healy writes:
> However it doesn't really address the question of a gradual migration
> process that can read 32bit ints but insert/update as 64bit bigints. I
> remain curious about whether the postgres architecture just makes that
> implausible, or if it could be done and just hasn't
On Sun, 1 Oct 2023 at 04:35, Bruce Momjian wrote:
> I think this talk will help you:
>
> https://www.youtube.com/watch?v=XYRgTazYuZ4
Thanks, I hadn't seen that talk and it's a good summary of the issue
and available solutions.
However it doesn't really address the question of a gradual migration
process that can read 32bit ints but insert/update as 64bit bigints.
on new tuples, backfill the old, switch the PK"
> method. The backfill is slow on these large tables, but it works and
> there are plenty of blog posts and documentation to follow.
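The slow backfill mentioned above is typically run in small batches so each transaction stays short and row locks are brief. A minimal sketch, assuming a hypothetical table "orders" whose new id_bigint column has already been added (the batch size is illustrative):

```sql
-- Backfill in batches; run repeatedly until zero rows are updated
-- (hypothetical table and column names).
UPDATE orders
SET    id_bigint = id
WHERE  id IN (
    SELECT id
    FROM   orders
    WHERE  id_bigint IS NULL
    LIMIT  10000
);
```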
>
> It did make me curious though: would it be possible for postgres to
> support gradual migration from integer to bigint in a more transparent way
On Sat, Sep 30, 2023 at 03:55:20PM +1000, James Healy wrote:
> It did make me curious though: would it be possible for postgres to
> support gradual migration from integer to bigint in a more
> transparent way, where new and updated tuples are written as bigint,
> but existing tuples can be read
The backfill is slow on these large tables, but it works and
there are plenty of blog posts and documentation to follow.
It did make me curious though: would it be possible for postgres to
support gradual migration from integer to bigint in a more transparent
way, where new and updated tuples are written as bigint, but existing
tuples can be read
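Taken together, the "add a new id_bigint column, populate on new tuples, backfill the old, switch the PK" method discussed in this thread looks roughly like the following. This is a sketch, not a complete runbook: all names are hypothetical, the batched backfill is elided, and foreign keys referencing the old primary key are ignored here.

```sql
-- 1. Add the new column (no table rewrite for a nullable column).
ALTER TABLE orders ADD COLUMN id_bigint bigint;

-- 2. Keep it populated for new and updated tuples via a trigger.
CREATE FUNCTION orders_sync_id_bigint() RETURNS trigger AS $$
BEGIN
    NEW.id_bigint := NEW.id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_sync_id_bigint
BEFORE INSERT OR UPDATE ON orders
FOR EACH ROW EXECUTE FUNCTION orders_sync_id_bigint();

-- 3. Backfill existing rows in batches (elided), then build a
--    unique index without blocking writes to back the new PK.
CREATE UNIQUE INDEX CONCURRENTLY orders_id_bigint_key
    ON orders (id_bigint);

-- 4. Switch the primary key in one short transaction.
BEGIN;
ALTER TABLE orders ALTER COLUMN id_bigint SET NOT NULL;
ALTER TABLE orders DROP CONSTRAINT orders_pkey;
ALTER TABLE orders
    ADD CONSTRAINT orders_pkey
    PRIMARY KEY USING INDEX orders_id_bigint_key;
COMMIT;
```

Step 4 is the only part that needs a strong lock, and it is brief because the unique index already exists; the SET NOT NULL still requires a validation scan, which is why the blog posts the thread refers to spend most of their time on steps 2 and 3.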